Understanding the Latest Updates to Flutter's GenUI and A2UI Protocol

Generative UI (GenUI) is a user experience pattern in which an AI agent not only generates content but also decides how that content is displayed and interacted with. In Flutter, this is implemented with the open A2UI protocol, which enables collaboration between agents and renderers. The Flutter team's genui package uses A2UI to connect agents with a widget catalog. Recently, both the package and the protocol received a major update: a shift from a "Structured Output First" to a "Prompt First" philosophy, a decoupled architecture, and the removal of the ContentGenerator class. This article explores the changes in Q&A format, covering everything from the new architecture to migration steps.

What is Generative UI (GenUI) and how does it work in Flutter?

Generative UI, or GenUI, is a pattern where an AI agent generates not just content but also the user interface structure and interactivity. In Flutter, this is achieved using the open A2UI protocol, which defines how agents and renderers (clients) collaborate on UI composition and state management. The Flutter team developed the genui package, which uses A2UI to connect with an agent and provide a catalog of available widgets. The agent then selects and arranges these widgets dynamically based on user input or context. This allows for adaptive interfaces that can change in real time without manual updates. The latest update enhances this process by giving developers more control over how the agent communicates with the app and how responses are parsed.

What are the key changes in the latest update to the genui package and A2UI protocol?

The most significant change is the shift from a "Structured Output First" approach to a "Prompt First" approach. In the old version, A2UI messages were streamed through structured output APIs (e.g., JSON schemas). Now, agents include blocks of JSON as text within their responses, making the process more flexible and aligned with how large language models (LLMs) naturally generate output. Additionally, the architecture has been decoupled: the ContentGenerator class, which previously handled prompt construction, network calls, and parsing, has been removed entirely. This allows developers to directly manage LLM connections, chat history, retry logic, and error handling without going through a rigid framework API. Provider-specific wrapper packages (like genui_dartantic and genui_google_generative_ai) are no longer needed.

How has the architecture of genui changed with the removal of ContentGenerator?

Previously, the ContentGenerator class encapsulated all interaction with the agent, hiding details of prompt building, network calls, and response parsing. Developers passed a ContentGenerator to the SurfaceController, which managed the UI. In the new version, ContentGenerator is gone. Instead, the framework is split into three distinct layers: the SurfaceController (now called the Engine) manages UI state and rendering; the A2uiTransportAdapter (Transport) streams messages between agent and renderer; and the Conversation facade provides a high-level API for chat state management. This decoupling gives you full control over the agent connection. You can set up any LLM provider, tweak generation settings, add custom functions, and implement your own retry and error handling logic without framework interference.
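The three layers can be sketched as follows. This is a hypothetical illustration, not the real genui API: the names (Engine, transport, Conversation) follow the description above, but the member signatures are assumptions.

```dart
import 'dart:async';

/// Transport layer: streams raw agent output to the renderer.
abstract class Transport {
  Stream<String> send(String userMessage);
}

/// Engine layer: owns UI surface state and applies parsed instructions.
class Engine {
  final List<Map<String, Object?>> instructions = [];
  void apply(Map<String, Object?> instruction) => instructions.add(instruction);
}

/// Conversation facade: high-level chat API gluing the layers together.
class Conversation {
  Conversation(this.transport, this.engine);
  final Transport transport;
  final Engine engine;
  final List<String> history = [];

  Future<void> sendMessage(String text) async {
    history.add('user: $text');
    final reply = StringBuffer();
    await for (final chunk in transport.send(text)) {
      reply.write(chunk);
    }
    history.add('agent: $reply');
    // A real implementation would extract A2UI JSON blocks from the reply;
    // in this sketch the whole reply is handed to the engine as one item.
    engine.apply({'text': reply.toString()});
  }
}
```

Because Conversation only depends on the abstract Transport, any agent connection (cloud API, local model, test fake) can be plugged in without touching UI code.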

What does the "Prompt First" approach mean for developers?

The "Prompt First" approach means that instead of requiring the agent to output JSON in a strictly defined schema (as in structured output APIs), the agent now includes blocks of JSON as plain text within its natural language responses. The framework then parses those JSON blocks to extract UI instructions. This is more natural for LLMs, which are typically fine-tuned to generate text. For developers, this simplifies integration: you no longer need to enforce a specific output schema on the agent side. Instead, you can just instruct the agent to output a JSON block within its prompt, and the genui package will handle extraction. This change reduces complexity and makes the system more compatible with a wider range of models and APIs.

What are the new layers in the decoupled architecture?

The decoupled architecture introduces three key layers:

- Engine (formerly SurfaceController): manages UI state and rendering.
- Transport (A2uiTransportAdapter): streams A2UI messages between the agent and the renderer.
- Conversation: a high-level facade for chat state management.

This separation allows developers to replace or customize each layer independently. For example, you can swap out the transport layer without affecting the UI engine, or modify conversation logic without touching rendering.

How should developers migrate from v0.7.0 to v0.9.0?

Migration from v0.7.0 to v0.9.0 requires several steps. First, remove any dependencies on provider-specific wrapper packages like genui_dartantic or genui_google_generative_ai, as they are no longer supported. Second, update your code to remove references to ContentGenerator. Instead, you need to set up a direct connection to your LLM agent and use the new A2uiTransportAdapter to pass messages. You'll create a custom class that implements the transport adapter interface. Then, replace the old SurfaceController initialization (which took a ContentGenerator) with the new engine, passing the transport adapter and a conversation facade. Finally, rewrite your chat loop logic to handle message streaming and error handling yourself, using the provided high-level API or building your own. The official migration guide from Flutter offers detailed code examples for each step.
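A before/after sketch of the wiring change might look like the following. The removed v0.7.0 API is shown in comments; the v0.9.0 names follow this article, but the exact signatures are assumptions rather than the real genui API.

```dart
// Before (v0.7.0) -- a ContentGenerator wrapped the whole agent connection:
//   final generator = SomeProviderContentGenerator(/* provider config */);
//   final controller = SurfaceController(contentGenerator: generator);

// After (v0.9.0) -- you implement the transport adapter yourself:
abstract class A2uiTransportAdapter {
  Stream<String> send(String message); // raw response chunks from the agent
}

class MyTransport implements A2uiTransportAdapter {
  @override
  Stream<String> send(String message) async* {
    // Call your LLM provider directly here (HTTP, SDK, or local model).
    // Stubbed for illustration:
    yield 'echo: ';
    yield message;
  }
}

/// Minimal stand-in for a chat-loop turn using the new transport.
Future<String> sendTurn(A2uiTransportAdapter transport, String text) async {
  final buffer = StringBuffer();
  await for (final chunk in transport.send(text)) {
    buffer.write(chunk);
  }
  return buffer.toString();
}
```

The key shift is ownership: the provider connection, chat history, and error handling now live in your code instead of inside a framework class.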

What happens to provider-specific wrapper packages?

Provider-specific wrapper packages (e.g., genui_dartantic, genui_google_generative_ai, genui_firebase_ai) are no longer needed in v0.9.0. They existed to encapsulate the specific API of a cloud provider's LLM service within a ContentGenerator. Since ContentGenerator has been removed, these packages have been deprecated and removed from the dependency tree. Developers are now expected to connect directly to their chosen LLM provider using standard HTTP or SDK calls, and then implement the A2uiTransportAdapter to interface with the genui framework. This gives you complete freedom to use any model or provider, tweak generation parameters, and add custom functions without being tied to a framework-specific wrapper. It also reduces the overall package size and potential compatibility issues.
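What the wrappers mostly did was assemble provider-specific request payloads. A minimal sketch of doing that directly, assuming an OpenAI-compatible chat-completions endpoint (the field names below belong to that API shape, not to genui; adapt them to your provider):

```dart
import 'dart:convert';

/// Builds the JSON body for a direct, streaming chat request to an agent.
/// The payload shape assumes an OpenAI-compatible endpoint.
String buildChatRequestBody({
  required String model,
  required List<Map<String, String>> messages,
  double temperature = 0.7,
}) {
  return jsonEncode({
    'model': model,
    'messages': messages,
    'temperature': temperature,
    'stream': true, // stream chunks so embedded JSON blocks arrive early
  });
}
```

You would POST this body with the http package or dart:io's HttpClient and feed the streamed chunks into your transport adapter implementation, which keeps every generation parameter under your control.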

How does the new TransportAdapter work in practice?

The A2uiTransportAdapter is an interface that you implement to define how messages are exchanged between your agent and the genui renderer. Typically, you create a class that extends or implements this adapter. Your adapter's responsibilities include sending user messages to the agent, receiving response chunks (which may contain JSON blocks), and parsing them. For example, you might use the http package to call an OpenAI-compatible API, stream the response, and extract JSON blocks from the text. You then pass these parsed instructions to the SurfaceController (engine) via callbacks. The adapter also handles errors, retries, and timeout settings. Because the framework no longer manages the transport layer, you have full control over the connection lifecycle. This makes it easy to integrate with any LLM service, from cloud providers to locally hosted models.
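Since retries and error handling are now your responsibility, an adapter typically wraps the provider call in a policy like the one below. This is a sketch under assumed names: `callAgent` stands in for your actual provider call, and the class is not part of the genui API.

```dart
import 'dart:async';

typedef AgentCall = Future<String> Function(String message);

/// Wraps an agent call with simple retry and exponential backoff.
class RetryingTransport {
  RetryingTransport(this.callAgent, {this.maxAttempts = 3});
  final AgentCall callAgent;
  final int maxAttempts;

  Future<String> send(String message) async {
    Object? lastError;
    for (var attempt = 1; attempt <= maxAttempts; attempt++) {
      try {
        return await callAgent(message);
      } catch (e) {
        lastError = e;
        // Back off before retrying: 100ms, 200ms, 400ms, ...
        await Future<void>.delayed(
            Duration(milliseconds: 100 * (1 << (attempt - 1))));
      }
    }
    throw StateError(
        'Agent call failed after $maxAttempts attempts: $lastError');
  }
}
```

Because the policy lives in your adapter rather than in the framework, you can tune attempt counts, timeouts, and fallback behavior per provider or even per request.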
