Observability
Trace your AI agent implementations to detect errors and performance bottlenecks in real time.
Many of the agents you build with Neuron AI will contain multiple steps with multiple LLM invocations. As these applications grow more complex, it becomes crucial to inspect exactly what is going on inside your agents. The best way to do this is with Inspector.
The Inspector team designed Neuron AI with built-in observability features, so you can monitor your AI agents while they are running and get meaningful insights from inside your agents, helping you move from prototype to production with confidence.
You have to install the Inspector package that matches your development environment. We provide integration packages for PHP, Laravel, Symfony, and CodeIgniter. Check out our GitHub organization.
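As a sketch, installation is done via Composer; the PHP, Laravel, and Symfony package names below follow the `inspector-apm` naming convention, but check the GitHub organization for the exact package matching your framework and version:

```shell
# Plain PHP projects
composer require inspector-apm/inspector-php

# Laravel applications
composer require inspector-apm/inspector-laravel

# Symfony applications
composer require inspector-apm/inspector-symfony
```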
Neuron AI integrates seamlessly with Inspector just by installing the appropriate package. When your agents are executed, you will see the details of their internal steps on the Inspector dashboard.
To create an Ingestion key, head to the Inspector dashboard and create a new app.
For any additional support, drop us a line in the live chat on the dashboard. We are happy to hear about your experience, find possible improvements, and make the tool better over time.
To activate monitoring, you need to pass an Inspector instance to the agent, or create a new one if Inspector is not already integrated into your app.
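Here is a minimal sketch for a plain PHP app. The `Inspector` and `Configuration` classes come from the Inspector PHP package; `MyAgent` is a hypothetical agent class, and the `AgentMonitoring` observer name reflects the typical Neuron AI setup and may differ in your version:

```php
<?php

use Inspector\Inspector;
use Inspector\Configuration;
use NeuronAI\Observability\AgentMonitoring;

// Create an Inspector instance with the Ingestion key
// generated from the Inspector dashboard.
$inspector = new Inspector(
    new Configuration('YOUR_INGESTION_KEY')
);

// MyAgent is a hypothetical agent class built with Neuron AI.
// Attaching the monitoring observer makes every run report
// its internal steps to the Inspector dashboard.
$agent = MyAgent::make()
    ->observe(new AgentMonitoring($inspector));
```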
For further support, don't hesitate to contact us via live chat or at [email protected].
The Inspector Laravel package already includes a helper function to access the Inspector instance, so you can pass it to the agent:
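A sketch of the Laravel setup, assuming the `inspector()` helper shipped with the Inspector Laravel package and a hypothetical `MyAgent` class; the `AgentMonitoring` observer name may differ in your Neuron AI version:

```php
<?php

use NeuronAI\Observability\AgentMonitoring;

// inspector() is the helper function provided by the Inspector
// Laravel package: it returns the Inspector instance already
// bound in the service container, so no manual setup is needed.
$agent = MyAgent::make()
    ->observe(new AgentMonitoring(inspector()));
```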
If you use the Inspector Symfony bundle, you can get the current Inspector instance from the service container and pass it to the agent.
You can load the inspector service with the container helper function, and pass the service into the agent:
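One common way to do this in Symfony is constructor autowiring, sketched below; it assumes the bundle registers `Inspector\Inspector` as an autowirable service, and `MyAgent` and the `AgentMonitoring` observer are hypothetical names from the typical Neuron AI setup:

```php
<?php

namespace App\Service;

use Inspector\Inspector;
use NeuronAI\Observability\AgentMonitoring;

class AgentRunner
{
    // The Inspector Symfony bundle registers the Inspector
    // instance as a service, so it can be autowired here.
    public function __construct(private Inspector $inspector)
    {
    }

    public function run(): void
    {
        // Attach the monitoring observer so the agent's internal
        // steps are reported to the Inspector dashboard.
        $agent = MyAgent::make()
            ->observe(new AgentMonitoring($this->inspector));
    }
}
```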