The Future of User Interfaces Starts Now
At Assemblysoft, we’ve always been passionate about combining innovation with practicality. Today, we’re stepping beyond traditional chatbots and introducing a game-changing concept: enabling users to interact with Blazor applications using natural language, and even their voice, to manipulate user interface components in real time.
But before you can envision this in action, you need to understand what makes large language models (LLMs) truly powerful—and why we’ve integrated them into our Blazor and AI solutions.
What This Means for Business Owners
You don’t need to be a developer to appreciate the impact of this technology. Here’s what natural language interfaces mean for your business:
1. Faster User Adoption
Employees can interact with your systems using plain language—whether typed or spoken. This reduces training time and speeds up onboarding, particularly for users less familiar with software tools.
2. Improved Productivity
By removing the need to click through multiple menus or memorize complex workflows, users get straight to what they need. This saves time, reduces errors, and helps teams focus on outcomes rather than interfaces.
3. Accessibility by Default
Voice input and natural language commands allow more people to engage with your systems—especially those who rely on assistive technology. It’s a modern, inclusive approach that aligns with digital accessibility standards.
4. Global-Ready
Because AI understands multiple languages, your platform becomes instantly adaptable to users across different regions and backgrounds. This supports international teams and future-proofs your solution for global expansion.
5. Smarter Customer Service
Imagine your internal support tools or CRM being operated by voice or simple instructions—no menus, no complexity. Customer service teams get faster access to data, leading to quicker resolution times and better experiences.
6. AI Without the Hype
Assemblysoft takes proven Microsoft-backed technologies and implements them into your business applications responsibly. We don’t promise magic—we deliver results through expert engineering and real-world testing.
Beyond Chat: What Can a Large Language Model Really Do?
Chatbots are often seen as the pinnacle of LLM capabilities—but that’s just scratching the surface. At Assemblysoft, we leverage LLMs not just for conversation, but to translate natural language into structured instructions, such as generating or modifying code, JSON objects, or UI states.
These models are trained on billions of tokens of text, enabling them to understand context, semantics, and even intent. Whether it's translating English to German or reconfiguring a UI component with a simple spoken instruction, LLMs can reason across language and structure.

From Text to UI: Real-Time State Manipulation
In a Blazor application, many components expose a state object—typically serialized as JSON. Instead of requiring users to navigate complex menus or press multiple buttons, we allow them to say something like:
“Show me customers from page five.”
The LLM translates this into a structured JSON response:
```json
{ "page": 5 }
```
Our application then applies this JSON to update the UI—no clicks required.
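To make that concrete, here is a minimal sketch of the "apply" step. The GridState record and merge helper are illustrative rather than any particular grid vendor's API; a real component would expose its own state type.

```csharp
using System.Text.Json;

// Illustrative state shape; a real grid component exposes its own state type.
public record GridState(
    int Page = 1,
    string? SortField = null,
    bool SortAscending = true);

public static class GridStateApplier
{
    // Merges the LLM's partial JSON (e.g. { "page": 5 }) over the current
    // state, so a command only changes the properties it mentions.
    public static GridState Apply(GridState current, string llmJson)
    {
        using var doc = JsonDocument.Parse(llmJson);
        var root = doc.RootElement;

        return current with
        {
            Page = root.TryGetProperty("page", out var page)
                ? page.GetInt32() : current.Page,
            SortField = root.TryGetProperty("sortField", out var sort)
                ? sort.GetString() : current.SortField,
            SortAscending = root.TryGetProperty("sortAscending", out var asc)
                ? asc.GetBoolean() : current.SortAscending
        };
    }
}
```

Because unmentioned properties carry over, follow-up commands compose naturally: "now sort by city" keeps the user on page five.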
This isn’t just a gimmick. It's a profound shift in how users interact with data-driven components.
Natural Language Interfaces in Blazor
With Blazor, we can harness these AI-powered interactions directly in a browser-based .NET app. Here’s what we do at Assemblysoft to make this seamless (a condensed sketch follows this list):
Embed an AI-powered service using Microsoft.Extensions.AI
Feed user input (voice or text) into the AI pipeline
Translate that input into structured grid state updates
Apply the updated state via Blazor’s component APIs
We've implemented this with full support for paging, filtering, sorting, and grouping—turning a data grid into an AI-responsive interface that adapts to how the user thinks, not how they click.
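Below is a condensed sketch of that pipeline. It assumes a recent Microsoft.Extensions.AI release, where IChatClient exposes GetResponseAsync (earlier previews named this CompleteAsync); the system prompt and the GridStateApplier helper from the previous sketch are illustrative choices, not a prescribed API.

```csharp
using Microsoft.Extensions.AI;

public class NaturalLanguageGridService
{
    // Constrains the model to emit only a JSON state fragment the grid understands.
    private const string SystemPrompt =
        "You translate user requests into JSON for a data grid. " +
        "Respond with JSON only, using the keys: page, sortField, sortAscending. " +
        "Example: \"show me customers from page five\" -> { \"page\": 5 }";

    private readonly IChatClient _chat;

    public NaturalLanguageGridService(IChatClient chat) => _chat = chat;

    public async Task<GridState> TranslateAsync(GridState current, string userInput)
    {
        var response = await _chat.GetResponseAsync(new[]
        {
            new ChatMessage(ChatRole.System, SystemPrompt),
            new ChatMessage(ChatRole.User, userInput)
        });

        // Merge the model's JSON fragment over the current grid state.
        return GridStateApplier.Apply(current, response.Text);
    }
}
```

A production version should also validate the returned JSON before applying it, since models occasionally wrap output in extra prose.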
Speech-to-UI: Accessible, Inclusive, and Smarter
To make things even more intuitive, we’ve added speech-to-text capabilities using browser-native APIs and open-source tools like Blazorators. This allows users to speak commands like:
“Sort by city in ascending order.”
The spoken words are transcribed, sent to the AI, and applied to the UI. No mouse, no keyboard—just voice.
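As a rough sketch of the wiring, the component below injects a speech-recognition service and feeds each transcript into the translation service from the previous sketch. The ISpeechRecognitionService shape is modeled on what Blazorators exposes; treat the exact signature as an assumption to verify against the package version you install.

```razor
@inject ISpeechRecognitionService SpeechRecognition
@inject NaturalLanguageGridService GridService

<button @onclick="StartListening">Speak a command</button>

@code {
    private GridState _state = new();

    private void StartListening() =>
        // Signature modeled on Blazorators' speech service; verify against
        // the version you install.
        SpeechRecognition.RecognizeSpeech("en-GB", OnSpeechRecognized);

    private void OnSpeechRecognized(string transcript) =>
        _ = InvokeAsync(async () =>
        {
            // e.g. "Sort by city in ascending order." ->
            //      { "sortField": "city", "sortAscending": true }
            _state = await GridService.TranslateAsync(_state, transcript);
            StateHasChanged();
        });
}
```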
This also provides immense accessibility benefits, making software more usable for those who rely on assistive technologies or have difficulty navigating traditional UI paradigms.
Secure, Scalable, and Future-Proof
Everything we’ve built integrates securely with Azure OpenAI through the Microsoft.Extensions.AI library; the startup wiring is sketched after the list below. This gives our clients:
A plug-and-play AI abstraction for LLMs
Full compatibility with .NET 10 and beyond
Easy migration paths to multi-modal models in future updates
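For reference, the startup wiring amounts to a few lines. This sketch assumes recent Azure.AI.OpenAI and Microsoft.Extensions.AI.OpenAI packages, where the adapter method is named AsIChatClient (earlier previews called it AsChatClient); the configuration keys are placeholders for your own settings.

```csharp
using Azure;
using Azure.AI.OpenAI;
using Microsoft.Extensions.AI;

var builder = WebApplication.CreateBuilder(args);

// Placeholder configuration keys; substitute your own endpoint, key,
// and model deployment name.
var endpoint   = builder.Configuration["AzureOpenAI:Endpoint"]!;
var apiKey     = builder.Configuration["AzureOpenAI:Key"]!;
var deployment = builder.Configuration["AzureOpenAI:Deployment"]!;

// Register the Azure OpenAI chat model behind the IChatClient abstraction,
// so application code never depends on a specific provider SDK.
builder.Services.AddChatClient(
    new AzureOpenAIClient(new Uri(endpoint), new AzureKeyCredential(apiKey))
        .GetChatClient(deployment)
        .AsIChatClient());

builder.Services.AddScoped<NaturalLanguageGridService>();
```

Because the rest of the application only sees IChatClient, swapping the backing model, or moving to a multi-modal one later, is a one-line change here.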
As .NET keeps evolving, we’re already preparing our solutions to support multilingual speech input natively, with no extra infrastructure needed.
Why Choose Assemblysoft?
At Assemblysoft, our UK-based development team excels in bringing together cutting-edge technology and real business outcomes. We help modernize .NET applications, build secure APIs, and create intelligent, intuitive frontends using Blazor—all while ensuring your investment is future-ready.
Whether you’re:
Looking to integrate AI into your UI,
Hoping to simplify complex interactions,
Or aiming to enhance accessibility across your software stack...
We can help you lead the way.

Let’s Talk About Your Next AI-Driven .NET Project
Ready to explore how natural language and AI can elevate your applications?
📞 Contact Assemblysoft
💻 Learn more about Fullstack Blazor Development
🚀 Explore our Custom Software Development Services
📚 Dive into the future with Microsoft.Extensions.AI