Embeddings Provider

Integrate services to transform text into vectors for semantic search.


Transform your text into vector representations! Embeddings let you add Retrieval-Augmented Generation (RAG) to your AI applications.

Available Embeddings Providers

The framework already includes the following embeddings providers.

Ollama

With Ollama you can run embedding models locally. Documentation - https://ollama.com/blog/embedding-models

namespace App\Neuron;

use NeuronAI\RAG\Embeddings\EmbeddingsProviderInterface;
use NeuronAI\RAG\Embeddings\OllamaEmbeddingsProvider;
use NeuronAI\RAG\RAG;

class MyRAG extends RAG
{
    ...
    
    protected function embeddings(): EmbeddingsProviderInterface
    {
        return new OllamaEmbeddingsProvider(
            model: 'OLLAMA_EMBEDDINGS_MODEL'
        );
    }
}
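Once a provider is configured, the agent calls it internally during retrieval. You can also invoke a provider directly as a quick sanity check; here is a minimal sketch, assuming a local Ollama instance serving an embedding model (the model name below is illustrative):

```php
use NeuronAI\RAG\Embeddings\OllamaEmbeddingsProvider;

// Embed a piece of text directly and inspect the resulting vector.
$provider = new OllamaEmbeddingsProvider(
    model: 'nomic-embed-text' // illustrative model name
);

$vector = $provider->embedText('Neuron is a PHP agent framework.');

// The vector dimensionality depends on the chosen model.
echo count($vector);
```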

Voyage AI

Documentation - https://www.voyageai.com/

namespace App\Neuron;

use NeuronAI\RAG\Embeddings\EmbeddingsProviderInterface;
use NeuronAI\RAG\Embeddings\VoyageEmbeddingsProvider;
use NeuronAI\RAG\RAG;

class MyRAG extends RAG
{
    ...
    
    protected function embeddings(): EmbeddingsProviderInterface
    {
        return new VoyageEmbeddingsProvider(
            key: 'VOYAGE_API_KEY',
            model: 'VOYAGE_EMBEDDINGS_MODEL'
        );
    }
}

OpenAI

Documentation - https://platform.openai.com/docs/guides/embeddings

namespace App\Neuron;

use NeuronAI\RAG\Embeddings\EmbeddingsProviderInterface;
use NeuronAI\RAG\Embeddings\OpenAIEmbeddingsProvider;
use NeuronAI\RAG\RAG;

class MyRAG extends RAG
{
    ...
    
    protected function embeddings(): EmbeddingsProviderInterface
    {
        return new OpenAIEmbeddingsProvider(
            key: 'OPENAI_API_KEY',
            model: 'OPENAI_EMBEDDINGS_MODEL'
        );
    }
}

Implement a new Provider

To create a custom provider you just have to extend the AbstractEmbeddingsProvider class. This class already implements the framework-specific methods and leaves you free to implement only the provider-specific HTTP call in the embedText() method:

namespace App\Neuron\Embeddings;

use GuzzleHttp\Client;
use NeuronAI\RAG\Embeddings\AbstractEmbeddingsProvider;

class CustomEmbeddingsProvider extends AbstractEmbeddingsProvider
{
    protected Client $client;

    protected string $baseUri = 'HTTP-ENDPOINT';

    public function __construct(
        protected string $key,
        protected string $model
    ) {
        $this->client = new Client([
            'base_uri' => trim($this->baseUri, '/').'/',
            'headers' => [
                'Accept' => 'application/json',
                'Content-Type' => 'application/json',
                'Authorization' => 'Bearer ' . $this->key,
            ]
        ]);
    }

    public function embedText(string $text): array
    {
        $response = $this->client->post('', [
            'json' => [
                'model' => $this->model,
                'input' => $text,
            ]
        ]);

        $response = \json_decode($response->getBody()->getContents(), true);

        return $response['data'][0]['embedding'];
    }
}

You should adjust the HTTP request based on the APIs of the custom provider.
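The custom provider then plugs into a RAG agent exactly like the built-in ones; a sketch following the pattern shown above (the key and model values are placeholders):

```php
namespace App\Neuron;

use App\Neuron\Embeddings\CustomEmbeddingsProvider;
use NeuronAI\RAG\Embeddings\EmbeddingsProviderInterface;
use NeuronAI\RAG\RAG;

class MyRAG extends RAG
{
    protected function embeddings(): EmbeddingsProviderInterface
    {
        return new CustomEmbeddingsProvider(
            key: 'CUSTOM_API_KEY',
            model: 'CUSTOM_EMBEDDINGS_MODEL'
        );
    }
}
```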
