Tools & Function Calls
Give Agents the ability to interact with your application context and services.
Tools enable Agents to go beyond generating text by letting them interact with your application services or external APIs.
Think of Tools as special functions your AI agent can use when it needs to perform specific tasks. They extend your Agent's capabilities by giving it access to functions it can call inside your code.
In this example we define a tool that makes the Agent able to retrieve a YouTube video's transcription:
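The original code sample is not reproduced in this extract. Based on the description that follows, a tool definition inside a Neuron Agent might look roughly like the sketch below; treat the exact class names and signatures (Tool::make, ToolProperty, the agent class name) as an approximation and check the Neuron API reference for the authoritative forms:

```php
use NeuronAI\Agent;
use NeuronAI\Tools\Tool;
use NeuronAI\Tools\ToolProperty;

class YouTubeAgent extends Agent
{
    // provider() and instructions() omitted for brevity.

    protected function tools(): array
    {
        return [
            Tool::make(
                'get_transcription',
                'Retrieve the transcription of a YouTube video.'
            )->addProperty(
                new ToolProperty(
                    name: 'video_url',
                    type: 'string',
                    description: 'The URL of the YouTube video.',
                    required: true
                )
            )->setCallable(function (string $video_url) {
                // Call the external transcription API here and
                // return the text so it can be passed to the LLM.
                return '...transcription text...';
            }),
        ];
    }
}
```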
Let’s break down the code.
We introduced the new tools() method into the Agent. This method returns an array of Tool objects that the AI will be able to use if needed. In this example we return an array with just one tool, named get_transcription.
Notice that the ToolProperty we define should match the signature of the function you use as a callable. The callable receives the $video_url argument, and the name of the property is exactly "video_url".
The most important parts are the names and descriptions you give to the tool and its properties. All this information is passed to the LLM in natural language: the more explicit and clear you are, the more likely the LLM is to understand when, and whether, to use these tools.
Once the Agent decides to use this tool, the callable function is executed. Here we can implement the logic to gather the transcription and return the information we want to the LLM.
Neuron provides you with these clear, simple APIs and automates the rest of the interaction with the LLM under the hood. Once you grasp the concept, you can connect practically anything to the Agent: being able to execute local functions lets you invoke any external API or application component.
Every time I implement an agent, I prefer to separate the Tool definition in the agent class from the implementation of the callable function.
Since retrieving a transcription requires connecting to an external API, it's better to implement this process in a dedicated class, so it's easier to organize the code and add new features later. Instead of defining an inline function, we can create a class implementing the PHP magic method __invoke() and pass an instance to the tool; it will behave like a function:
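The class from the original sample is not included in this extract; a sketch of such a class might look like this (the class name and constructor are illustrative, not from the original):

```php
// Hypothetical service class: the name and the API-key constructor
// are illustrative assumptions, not part of the original example.
class YouTubeTranscriptionService
{
    public function __construct(protected string $apiKey) {}

    // The signature mirrors the inline callable: the parameter name
    // must match the "video_url" tool property.
    public function __invoke(string $video_url): string
    {
        // Connect to the external transcription API here.
        return '...transcription text...';
    }
}
```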
Notice how the __invoke() method accepts the same arguments expected by the tool's callable function.
Now we can pass an instance of this class in the Tool definition:
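The original snippet is missing from this extract; assuming the Tool builder described earlier, passing the class instance might look like this (YouTubeTranscriptionService and $apiKey are illustrative names):

```php
Tool::make(
    'get_transcription',
    'Retrieve the transcription of a YouTube video.'
)->addProperty(
    new ToolProperty(
        name: 'video_url',
        type: 'string',
        description: 'The URL of the YouTube video.',
        required: true
    )
// An invokable object can stand in for the inline closure.
)->setCallable(new YouTubeTranscriptionService($apiKey));
```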
Transcriptions are just an example. You can also implement other tools that let the Agent retrieve additional video metadata to enhance its video analysis capabilities.
Ok! We have now provided the agent with a tool that lets it retrieve the YouTube video transcript when it decides it's needed to complete the task.
Since we encapsulated this feature inside the agent, the interaction is really simple:
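The original interaction snippet is not reproduced here. Assuming the agent class sketched earlier and the chat-style entry point used in basic Neuron examples (a chat() method taking a UserMessage), the call might look like:

```php
use NeuronAI\Chat\Messages\UserMessage;

// The agent decides on its own whether to call get_transcription.
$response = YouTubeAgent::make()->chat(
    new UserMessage('Summarize this video: https://www.youtube.com/watch?v=...')
);

echo $response->getContent();
```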
When you provide the Agent with tools, Neuron sends their information to the LLM along with the user message.
Once the LLM reads the prompt and the information about the attached tools, it decides whether a tool can help it gather additional information to respond to the user prompt. In that case it returns a special response containing the tools it wants to call.
Neuron automatically manages this response for you, executing the callables of the tools the LLM decided to call and returning the results to the LLM to obtain its final response.
In the second image below you can see all the details about the execution of the function that retrieves the transcription:
Thanks to Neuron AI's modular architecture, Tools are just one component of the toolkit, and they rely on the ToolInterface interface. So you are free to create pre-packaged tool classes that implement common functionality and release them as external Composer packages, or submit a PR to our repository to have them integrated into the core toolkit.
Once you identify a common use case, a pre-packaged Tool can internally define the execute method, the properties needed by the callback, the description, and so on.
To create a new Tool you must implement the following interface. Also take a look at the default Tool class for inspiration.
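The interface itself is not reproduced in this extract. As an illustrative sketch only (the real ToolInterface in the Neuron AI repository is authoritative, and its method names may differ), a tool contract typically covers the name, description, properties, and execution:

```php
// Illustrative sketch: check the actual ToolInterface in the
// Neuron AI source for the authoritative method list.
interface ToolInterface
{
    public function getName(): string;
    public function getDescription(): string;

    /** @return array<ToolProperty> */
    public function getProperties(): array;

    // Run the tool with the inputs chosen by the LLM
    // and return the result to feed back into the conversation.
    public function execute(): mixed;
}
```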
The Agent implementation could be simplified as below:
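The simplified code is missing from this extract; with a hypothetical pre-packaged tool class (YouTubeTranscriptionTool is an assumed name), the agent's tools() method could shrink to something like:

```php
class YouTubeAgent extends Agent
{
    protected function tools(): array
    {
        // The pre-packaged tool encapsulates its own name,
        // description, properties, and callable internally.
        return [
            new YouTubeTranscriptionTool($this->apiKey),
        ];
    }
}
```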
To retrieve the YouTube video transcription I used a very useful API called .
To look inside this workflow, you can connect to the Inspector monitoring dashboard to see the call execution flow in real time.