Embeddings Provider
Integrate services to transform text into vectors for semantic search.
Transform your text into vector representations! Embeddings let you add Retrieval-Augmented Generation (RAG) to your AI applications.
Available Embeddings Providers
The framework already includes the following embeddings providers.
Ollama
With Ollama you can run embedding models locally. Documentation - https://ollama.com/blog/embedding-models
namespace App\Neuron;

use NeuronAI\RAG\Embeddings\EmbeddingsProviderInterface;
use NeuronAI\RAG\Embeddings\OllamaEmbeddingsProvider;
use NeuronAI\RAG\RAG;

class MyRAG extends RAG
{
    // ...

    protected function embeddings(): EmbeddingsProviderInterface
    {
        return new OllamaEmbeddingsProvider(
            model: 'OLLAMA_EMBEDDINGS_MODEL'
        );
    }
}

Voyage AI
Documentation - https://www.voyageai.com/
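Inside your RAG class you can return a Voyage AI provider from the embeddings() method, following the same pattern as the Ollama example above. This is a minimal sketch: the VoyageEmbeddingsProvider class name and its key/model constructor arguments are assumptions, so check the framework source for the exact signature.

protected function embeddings(): EmbeddingsProviderInterface
{
    // Assumed class name and named arguments; verify against the framework source.
    return new VoyageEmbeddingsProvider(
        key: 'VOYAGE_API_KEY',
        model: 'VOYAGE_EMBEDDINGS_MODEL'
    );
}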
OpenAI
Documentation - https://platform.openai.com/docs/guides/embeddings
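The wiring is the same for OpenAI. As with Voyage AI, the OpenAIEmbeddingsProvider class name and its constructor parameters shown here are assumptions patterned after the Ollama example.

protected function embeddings(): EmbeddingsProviderInterface
{
    // Assumed class name and named arguments; verify against the framework source.
    return new OpenAIEmbeddingsProvider(
        key: 'OPENAI_API_KEY',
        model: 'OPENAI_EMBEDDINGS_MODEL'
    );
}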
Gemini
AWS Bedrock
Implement a new Provider
To create a custom provider, you just have to extend the AbstractEmbeddingsProvider class. This class already implements the framework-specific methods and leaves you free to implement only the provider-specific HTTP call in the embedText() method.
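The sketch below illustrates the general shape of a custom provider. It assumes Guzzle as the HTTP client and an embedText(string $text): array signature; the endpoint, request payload, and response parsing are hypothetical placeholders, not a real provider's API.

namespace App\Neuron\Embeddings;

use GuzzleHttp\Client;
use NeuronAI\RAG\Embeddings\AbstractEmbeddingsProvider;

class MyCustomEmbeddingsProvider extends AbstractEmbeddingsProvider
{
    public function __construct(
        protected string $key,
        protected string $model,
        // Hypothetical endpoint: replace with your provider's real URL.
        protected string $baseUri = 'https://api.example.com/v1/embeddings'
    ) {}

    // Assumed signature: takes the input text and returns the embedding
    // as an array of floats.
    public function embedText(string $text): array
    {
        $client = new Client();

        // Hypothetical payload and headers: adjust them to the API of
        // the provider you are integrating.
        $response = $client->post($this->baseUri, [
            'headers' => [
                'Authorization' => 'Bearer '.$this->key,
                'Content-Type' => 'application/json',
            ],
            'json' => [
                'model' => $this->model,
                'input' => $text,
            ],
        ]);

        $result = \json_decode($response->getBody()->getContents(), true);

        // Hypothetical response shape: many embeddings APIs nest the
        // vector under data[0].embedding.
        return $result['data'][0]['embedding'];
    }
}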
You should adjust the HTTP request based on the API of the custom provider.