Streaming
Presenting AI responses to your users in real-time.
Streaming lets you show users chunks of the response text as they arrive, rather than waiting for the full response, so you can offer a real-time conversational Agent experience.

Agent
To stream the AI response, run the agent with the stream() method instead of chat(). This method returns a PHP generator that lets you process the response as an iterable object.
```php
use App\Neuron\MyAgent;
use NeuronAI\Chat\Messages\UserMessage;

$stream = MyAgent::make()->stream(
    new UserMessage('How are you?')
);

// Print the response chunk-by-chunk in real-time
foreach ($stream as $chunk) {
    echo $chunk->content;
}

// I'm fine, thank you! How can I assist you today?
```

Streaming chunks
When you process the streamed response of the agent you can expect to receive four types of chunk object:

- TextChunk: represents a piece of text
- ReasoningChunk: contains chunks of the model's reasoning summary (only available for reasoning models)
- ToolCallChunk: represents the LLM asking for a tool execution
- ToolResultChunk: contains the results of tool execution after a tool call
These objects are a layer of abstraction between the underlying message flow inside the agent and the data the client side needs to stay informed about what's going on behind the scenes.
The stream composition depends on your agent implementation. If the agent has no tools attached, you will never receive a ToolCallChunk or ToolResultChunk instance, so you can iterate the output stream expecting only TextChunk (and possibly ReasoningChunk) objects.
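The dispatch over chunk types can be sketched in plain PHP. The classes below are minimal stand-ins defined only for this illustration; in a real application you would use Neuron's own chunk classes and the generator returned by stream().

```php
<?php
// Stand-in chunk classes for illustration only; use Neuron's real classes in practice.
class TextChunk { public function __construct(public string $content) {} }
class ReasoningChunk { public function __construct(public string $content) {} }
class ToolCallChunk { public function __construct(public array $tools) {} }
class ToolResultChunk { public function __construct(public array $results) {} }

// A fake stream standing in for MyAgent::make()->stream(...)
function fakeStream(): Generator {
    yield new ReasoningChunk('Thinking about the request...');
    yield new ToolCallChunk(['get_server_configuration']);
    yield new ToolResultChunk(['ip' => '192.168.0.10']);
    yield new TextChunk('The IP address is 192.168.0.10');
}

$output = '';
foreach (fakeStream() as $chunk) {
    // Branch on the concrete chunk type to decide what to show the user.
    $output .= match (true) {
        $chunk instanceof ToolCallChunk => "[tool call]\n",
        $chunk instanceof ToolResultChunk => "[tool result]\n",
        default => $chunk->content . "\n", // TextChunk and ReasoningChunk
    };
}
echo $output;
```

The same `instanceof` branching works unchanged against the real stream, since each yielded value is an instance of one of the four chunk classes.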
Streaming & Tools
Neuron supports Tools & Function calls in combination with streaming responses. You are free to provide your Agents with Tools; they will be handled automatically in the middle of the stream, so the agent can continue toward the final response.
When the agent receives a tool call request from the LLM, it streams two additional chunk types: ToolCallChunk and ToolResultChunk.
These objects contain the list of tools being called by the LLM, so you can expose informative output to the client about what the agent is doing to answer the user prompt.
Here is an example of how you can deal with this scenario:
```php
use App\Neuron\MyAgent;
use NeuronAI\Chat\Messages\UserMessage;
use NeuronAI\Tools\Tool;
use NeuronAI\Tools\ToolInterface;

$stream = MyAgent::make()
    ->addTool(
        Tool::make(
            'get_server_configuration',
            'retrieve the server network configuration'
        )->addProperty(...)->setCallable(...)
    )
    ->stream(
        new UserMessage("What's the IP address of the server?")
    );

// Iterate chunks
foreach ($stream as $chunk) {
    if ($chunk instanceof ToolCallChunk) {
        // Output the ongoing tool calls
        echo "\n".\array_reduce(
            $chunk->getTools(),
            fn (string $carry, ToolInterface $tool) => $carry.'- Calling tool: '.$tool->getName()."\n",
            ''
        );
        continue;
    }

    if ($chunk instanceof ToolResultChunk) {
        echo "- Tools execution completed\n";
        continue;
    }

    // Handle TextChunk and ReasoningChunk
    echo $chunk->content;
}

// Let me retrieve the server configuration.
// - Calling tool: get_server_configuration
// The IP address of the server is: 192.168.0.10
```

Get The Final Result
When the model finishes streaming the output, you can retrieve the final Neuron Message instance from the generator:
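This works because stream() returns a native PHP generator, and Generator::getReturn() is standard PHP behavior: a generator can yield intermediate values and still return a final one. A self-contained illustration of the mechanics:

```php
<?php
// A generator can yield intermediate chunks and still return a final value.
function stream(): Generator {
    yield 'Hello';
    yield ', world';
    return 'Hello, world'; // the final value, available via getReturn()
}

$gen = stream();
$chunks = [];
foreach ($gen as $chunk) {
    $chunks[] = $chunk; // consume every yielded chunk first
}

// getReturn() only works after the generator has finished iterating;
// calling it earlier throws an Exception.
$final = $gen->getReturn();
echo $final;
```

The same pattern applies to the agent stream: iterate all chunks first, then call getReturn() on the generator.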
```php
$stream = MyAgent::make()->stream(...);

// Iterate chunks
foreach ($stream as $chunk) {
    // ...
}

$message = $stream->getReturn(); // Get the message instance

echo $message->getContent();
```

Monitoring & Debugging
To look inside this workflow, you should connect your Agent to the Inspector monitoring dashboard, so you can see the tool call execution flow in real-time.
After you sign up at the link above, make sure to set the INSPECTOR_INGESTION_KEY variable in the application environment file to start monitoring:

```
INSPECTOR_INGESTION_KEY=nwse877auxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```
Adapters
Neuron's Stream Adapter system provides a flexible, protocol-agnostic way to transform AI response streams into different client-side formats.
Stream adapters act as translators between Neuron's internal streaming events (text chunks, tool calls, reasoning steps) and specific frontend protocols like Vercel AI SDK or AG-UI.
This architecture allows you to seamlessly integrate Neuron agents with various frontend frameworks without modifying your core agent logic. Adapters handle protocol-specific concerns such as message lifecycle events, event formatting, and ID tracking, while maintaining consistent streaming behavior across all providers (Anthropic, OpenAI, Gemini, etc.).

The system is highly extensible: you can create custom adapters by extending SSEAdapter and implementing protocol-specific event transformations, enabling integration with any streaming protocol that supports SSE. All stream chunks inherit from the StreamChunk base class, providing type safety and a consistent structure with automatic ID and message tracking across the streaming lifecycle.

The Agent has a special method, streamWithAdapter(), that accepts an instance of StreamAdapterInterface, so you are free to provide any concrete implementation based on your integration needs.
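To give a rough idea of what an SSE-based adapter emits to the client, here is a self-contained sketch that formats events in Server-Sent Events wire format. The event names and payload shapes below are illustrative assumptions, not the exact output of any Neuron adapter.

```php
<?php
// Format a single event in Server-Sent Events wire format:
// an optional "event:" field, a "data:" field, and a blank-line terminator.
// Event names and payloads here are illustrative, not Neuron's actual output.
function sseLine(string $event, array $payload): string {
    return "event: {$event}\n"
        . 'data: ' . json_encode($payload) . "\n\n";
}

// Illustrative events, loosely mirroring the chunk types an agent streams.
$frames = sseLine('text-delta', ['delta' => 'The square root of 144 '])
        . sseLine('text-delta', ['delta' => 'is 12.'])
        . sseLine('finish', ['reason' => 'stop']);

echo $frames;
```

A concrete adapter transforms each Neuron stream chunk into frames like these, following the target protocol's event names and payload schema.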
Vercel AI SDK Adapter
Adapter for Vercel AI SDK Data Stream Protocol: https://ai-sdk.dev/docs/ai-sdk-ui/stream-protocol
```php
use NeuronAI\Agent;
use NeuronAI\Chat\Messages\UserMessage;
use NeuronAI\Chat\Stream\Adapters\VercelAIAdapter;

// Create the agent with tools
$agent = Agent::make()
    ->setAiProvider(...)
    ->addTool(CalculatorToolkit::make());

// Initialize the streaming with the Vercel adapter
$stream = $agent->streamWithAdapter(
    new VercelAIAdapter(),
    new UserMessage('What is the square root of 144?')
);

// Process the response
foreach ($stream as $line) {
    echo $line;
    \flush();
}
```

AG-UI Adapter
Implements the event-based streaming protocol defined by AG-UI for real-time agent-frontend interaction. It supports text messages, tool calls, reasoning, and lifecycle events.
For more information see: https://docs.ag-ui.com/concepts/events
```php
use NeuronAI\Agent;
use NeuronAI\Chat\Messages\UserMessage;
use NeuronAI\Chat\Stream\Adapters\AGUIAdapter;

// Create the agent with tools
$agent = Agent::make()
    ->setAiProvider(...)
    ->addTool(CalculatorToolkit::make());

// Initialize the streaming with the AG-UI adapter
$stream = $agent->streamWithAdapter(
    new AGUIAdapter(),
    new UserMessage('What is the square root of 144?')
);

// Process the response
foreach ($stream as $line) {
    echo $line;
    \flush();
}
```