Evaluation

Evaluating the output of your agentic system

This guide covers approaches to evaluating agents. Effective evaluation is essential for measuring agent performance, tracking improvements, and ensuring your agents meet quality standards.

When building AI agents, evaluating the consistency of their output is crucial, not only for ongoing maintenance of the agent, but also for comparing different architectures or prompting approaches during the initial design phase.

It's important to consider various qualitative and quantitative factors, including response syntax, task completion success, and inaccuracies or hallucinations. It's also worth comparing different agent configurations to optimize for specific desired outcomes. Given the dynamic and non-deterministic nature of LLMs, rigorous and frequent evaluations are essential to maintain a consistent baseline for tracking improvements or regressions.

Configuring your application

As with unit tests, it's good practice to collect the evaluators for your AI system in a dedicated directory. Add the configuration below to your application's composer.json file to tell Composer how to include your evaluators in the application namespaces:

"autoload-dev": {
    "psr-4": {
        ...,
        "App\\Neuron\\Evaluators\\": "evaluators/"
    }
},

Next create the evaluators directory in your project root folder. Keeping evaluation code separate from production code creates a clear boundary between what gets deployed to production and what exists purely for development and quality assurance.
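After updating composer.json, regenerate the autoloader so Composer picks up the new namespace mapping:

```shell
composer dump-autoload
```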

Creating Evaluators

Use the command below to create the AgentEvaluator class in the evaluators folder:

vendor/bin/neuron make:evaluator App\\Neuron\\Evaluators\\AgentEvaluator

The generated class will have the following structure:

namespace App\Neuron\Evaluators;

use App\Neuron\MyAgent; // the agent under test (replace with your own agent class)
use NeuronAI\Chat\Messages\UserMessage;
use NeuronAI\Evaluation\Assertions\StringContains;
use NeuronAI\Evaluation\BaseEvaluator;
use NeuronAI\Evaluation\Contracts\DatasetInterface;
use NeuronAI\Evaluation\Dataset\JsonDataset;

class AgentEvaluator extends BaseEvaluator
{
    /**
     * 1. Get the dataset to evaluate against
     */
    public function getDataset(): DatasetInterface
    {
        return new JsonDataset(__DIR__ . '/datasets/dataset.json');
    }

    /**
     * 2. Run the agent logic being tested
     */
    public function run(array $datasetItem): mixed
    {
        $response = MyAgent::make()->chat(
            new UserMessage($datasetItem['input'])
        )->getMessage();
        
        return $response->getContent();
    }

    /**
     * 3. Evaluate the output against expected results, with assertions
     */
    public function evaluate(mixed $output, array $datasetItem): void
    {
        $this->assert(
            new StringContains($datasetItem['reference']),
            $output,
        );
    }
} 

The logic is quite straightforward. The evaluator first loads the dataset, then runs the evaluation for each item in the dataset.

In the run method you execute your agentic entities with the example input and return the output. The output is then passed to the evaluate method, where you can perform assertions comparing the output against a reference value, or apply any other logic you want.

Defining The Dataset Loader

You can use anything you want as a dataset; there is no predefined format. The evaluator class simply lets you load a list of test cases and run the evaluators against them. Two dataset loaders are available:

ArrayDataset
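ArrayDataset lets you define test cases inline. A minimal sketch of how it might be used in getDataset() (the class is assumed to live alongside JsonDataset and accept an array of items; the test cases are illustrative):

```php
use NeuronAI\Evaluation\Dataset\ArrayDataset;

public function getDataset(): DatasetInterface
{
    // Each item is an associative array consumed by run() and evaluate()
    return new ArrayDataset([
        ['input' => 'What is the capital of France?', 'reference' => 'Paris'],
        ['input' => 'What is 2 + 2?', 'reference' => '4'],
    ]);
}
```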

JsonDataset
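JsonDataset reads test cases from a JSON file, as in the evaluator example above. Given the `input` and `reference` keys used there, a dataset.json might look like this (contents are illustrative):

```json
[
    {"input": "What is the capital of France?", "reference": "Paris"},
    {"input": "What is 2 + 2?", "reference": "4"}
]
```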

You can also create a custom dataset loader by implementing NeuronAI\Evaluation\Contracts\DatasetInterface.

Running Evaluations

If you have properly configured your composer file, you can use the Neuron CLI to launch the evaluators:

Output Interfaces

The evaluation module uses a PHP configuration file to control how evaluation results are displayed. The config system supports multiple output drivers, enabling results to be sent to console, files, databases, or external APIs simultaneously.

Config File

Create the evaluation.php file in your project root:
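The exact shape of this file is not documented here, so the following is only a sketch based on the description below: each output class maps to an array of constructor options. The ConsoleOutputDriver namespace is an assumption.

```php
<?php

// Illustrative structure: map each output driver class to the options
// that will be passed to its constructor (namespace is an assumption).
return [
    \NeuronAI\Evaluation\Output\ConsoleOutputDriver::class => [],
];
```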

You can declare an array of options for each output class. These options will be passed as arguments to the constructor of the output class implementation.

If no config file exists, the system defaults to ConsoleOutputDriver with standard output.

Creating Custom Output

Implement EvaluationOutputInterface to create custom output drivers:
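A hypothetical skeleton of a custom driver is sketched below. The constructor argument and the class name are assumptions for illustration; check the EvaluationOutputInterface definition in the package for the methods you actually need to implement:

```php
use NeuronAI\Evaluation\Contracts\EvaluationOutputInterface;

// Hypothetical example: a driver that appends evaluation results to a file.
class FileOutputDriver implements EvaluationOutputInterface
{
    public function __construct(private string $path) {}

    // ...implement the methods declared by EvaluationOutputInterface,
    // e.g. writing each result to $this->path
}
```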

Once you have created your output class, you can register it in the configuration file so it's used the next time you run the evaluations.

Available Assertions

We provide a set of built-in assertions for the most common use cases. You can also implement your own assertions to design custom scoring systems; see the next section.

StringContains
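Check if the output contains a substring, as in the evaluator example above:

```php
$this->assert(new StringContains('Paris'), $output);
```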

StringContainsAll

Check if the output contains all keywords:
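For example (the constructor argument shown, an array of keywords, is an assumption based on the description):

```php
$this->assert(new StringContainsAll(['Paris', 'France']), $output);
```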

StringContainsAny

Check if the output contains any of the keywords:
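For example (the array-of-keywords argument is an assumption based on the description):

```php
$this->assert(new StringContainsAny(['Paris', 'capital']), $output);
```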

StringStartsWith

Check if the output starts with a prefix:
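For example (the single prefix argument is an assumption based on the description):

```php
$this->assert(new StringStartsWith('The capital'), $output);
```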

StringEndsWith

Check if the output ends with a suffix:
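For example (the single suffix argument is an assumption based on the description):

```php
$this->assert(new StringEndsWith('.'), $output);
```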

StringLengthBetween

Check if the string length is within range:
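For example (min/max constructor arguments are assumptions based on the description):

```php
$this->assert(new StringLengthBetween(10, 500), $output);
```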

StringDistance

Check string similarity using Levenshtein distance:
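For example (a reference string plus a maximum edit distance is an assumed signature):

```php
$this->assert(new StringDistance($datasetItem['reference'], 3), $output);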

StringSimilarity

Check string similarity using embeddings:
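A sketch of possible usage; since this assertion relies on embeddings, the actual constructor likely also requires an embeddings provider, and the similarity threshold shown is an assumption:

```php
$this->assert(
    new StringSimilarity($datasetItem['reference'], 0.8), // threshold assumed
    $output,
);
```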

MatchesRegex

Match against regular expression:
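For example (a single regex-pattern argument is an assumption based on the description):

```php
$this->assert(new MatchesRegex('/\d{4}-\d{2}-\d{2}/'), $output);
```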

IsValidJson

Check if the output is valid JSON:
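For example (a no-argument constructor is an assumption based on the description):

```php
$this->assert(new IsValidJson(), $output);
```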

Creating Custom Assertions
