Merge pull request #19 from Cyb3rWard0g/feature/nvidia-embeddings

Integration of NVIDIA Embedding Client and Embedder

Cyb3rWard0g authored Jan 12, 2025
2 parents 365cf9a + 59125f0 commit af7956e

Showing 8 changed files with 474 additions and 5 deletions.
260 changes: 260 additions & 0 deletions cookbook/llm/nvidia_embeddings_basic.ipynb
@@ -0,0 +1,260 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# LLM: NVIDIA Embeddings Endpoint Basic Examples\n",
"\n",
"This notebook demonstrates how to use the `NVIDIAEmbedder` in `Floki` for generating text embeddings. We will explore:\n",
"\n",
"* Initializing the `NVIDIAEmbedder`.\n",
"* Generating embeddings for single and multiple inputs.\n",
"* Using the class both as a direct function and via its `embed` method."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Install Required Libraries\n",
"Before starting, ensure the required libraries are installed:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install floki"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load Environment Variables\n",
"\n",
"Load API keys or other configuration values from your `.env` file using `dotenv`."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from dotenv import load_dotenv\n",
"load_dotenv()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import NVIDIAEmbedder"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from floki.document.embedder import NVIDIAEmbedder"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize the NVIDIAEmbedder\n",
"\n",
"To start, create an instance of the `NVIDIAEmbedder` class."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"# Initialize the embedder\n",
"embedder = NVIDIAEmbedder(\n",
" model=\"nvidia/nv-embedqa-e5-v5\", # Default embedding model\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Embedding a Single Text\n",
"\n",
"You can use the embed method to generate an embedding for a single input string."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Embedding (first 5 values): [-0.007270217100869654, -0.03521439888521964, 0.008612880489907491, 0.03619088134997443, 0.03658757735128107]\n"
]
}
],
"source": [
"# Input text\n",
"text = \"The quick brown fox jumps over the lazy dog.\"\n",
"\n",
"# Generate embedding\n",
"embedding = embedder.embed(text)\n",
"\n",
"# Display the embedding\n",
"print(f\"Embedding (first 5 values): {embedding[:5]}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Embedding Multiple Texts\n",
"\n",
"The embed method also supports embedding multiple texts at once."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Text 1 embedding (first 5 values): [-0.007270217100869654, -0.03521439888521964, 0.008612880489907491, 0.03619088134997443, 0.03658757735128107]\n",
"Text 2 embedding (first 5 values): [0.03491632278487177, -0.045598764196327295, 0.014955417976037734, 0.049291836798573345, 0.03741906620126992]\n"
]
}
],
"source": [
"# Input texts\n",
"texts = [\n",
" \"The quick brown fox jumps over the lazy dog.\",\n",
" \"A journey of a thousand miles begins with a single step.\"\n",
"]\n",
"\n",
"# Generate embeddings\n",
"embeddings = embedder.embed(texts)\n",
"\n",
"# Display the embeddings\n",
"for i, emb in enumerate(embeddings):\n",
" print(f\"Text {i + 1} embedding (first 5 values): {emb[:5]}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Using the NVIDIAEmbedder as a Callable Function\n",
"\n",
"The `NVIDIAEmbedder` class can also be used directly as a function, thanks to its `__call__` implementation."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Embedding (first 5 values): [-0.005809799816153762, -0.08734154733463988, -0.017593431879252233, 0.027511671880565285, 0.001342777107870075]\n"
]
}
],
"source": [
"# Use the class instance as a callable\n",
"text_embedding = embedder(\"A stitch in time saves nine.\")\n",
"\n",
"# Display the embedding\n",
"print(f\"Embedding (first 5 values): {text_embedding[:5]}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For multiple inputs:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Text 1 embedding (first 5 values): [0.021093917798446042, -0.04365205548745667, 0.02008726662368289, 0.024922242720651362, 0.024556187748010216]\n",
"Text 2 embedding (first 5 values): [-0.006683721130524534, -0.05764852452568794, 0.01164408689824411, 0.04627132894469238, 0.03458911471541276]\n"
]
}
],
"source": [
"text_list = [\"The early bird catches the worm.\", \"An apple a day keeps the doctor away.\"]\n",
"embeddings_list = embedder(text_list)\n",
"\n",
"# Display the embeddings\n",
"for i, emb in enumerate(embeddings_list):\n",
" print(f\"Text {i + 1} embedding (first 5 values): {emb[:5]}\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.8"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
2 changes: 1 addition & 1 deletion src/floki/__init__.py
@@ -5,6 +5,6 @@
)
from floki.llm.openai import OpenAIChatClient, OpenAIAudioClient, OpenAIEmbeddingClient
from floki.llm.huggingface import HFHubChatClient
from floki.llm.nvidia import NVIDIAChatClient
from floki.llm.nvidia import NVIDIAChatClient, NVIDIAEmbeddingClient
from floki.tool import AgentTool, tool
from floki.workflow import WorkflowApp
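
With this change, the embedding client is exported from the package root alongside the chat client. A minimal import sketch (assuming floki is installed and the root __init__.py re-exports the names as shown above):

# Sketch only: the new top-level export added by this change.
from floki import NVIDIAChatClient, NVIDIAEmbeddingClient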
2 changes: 1 addition & 1 deletion src/floki/document/__init__.py
@@ -1,4 +1,4 @@
from .fetcher import ArxivFetcher
from .reader import PyMuPDFReader, PyPDFReader
from .splitter import TextSplitter
from .embedder import OpenAIEmbedder, SentenceTransformerEmbedder
from .embedder import OpenAIEmbedder, SentenceTransformerEmbedder, NVIDIAEmbedder
3 changes: 2 additions & 1 deletion src/floki/document/embedder/__init__.py
@@ -1,2 +1,3 @@
from .openai import OpenAIEmbedder
from .sentence import SentenceTransformerEmbedder
from .sentence import SentenceTransformerEmbedder
from .nvidia import NVIDIAEmbedder
106 changes: 106 additions & 0 deletions src/floki/document/embedder/nvidia.py
@@ -0,0 +1,106 @@
from floki.llm.nvidia.embeddings import NVIDIAEmbeddingClient
from floki.document.embedder.base import EmbedderBase
from typing import List, Union
from pydantic import Field
import numpy as np
import logging

logger = logging.getLogger(__name__)

class NVIDIAEmbedder(NVIDIAEmbeddingClient, EmbedderBase):
    """
    NVIDIA-based embedder for generating text embeddings with support for indexing (passage) and querying.
    Inherits functionality from NVIDIAEmbeddingClient for API interactions.
    Attributes:
        chunk_size (int): Batch size for embedding requests. Defaults to 1000.
        normalize (bool): Whether to normalize embeddings. Defaults to True.
    """

    chunk_size: int = Field(default=1000, description="Batch size for embedding requests.")
    normalize: bool = Field(default=True, description="Whether to normalize embeddings.")

    def embed(self, input: Union[str, List[str]]) -> Union[List[float], List[List[float]]]:
        """
        Embeds input text(s) for indexing with default input_type set to 'passage'.
        Args:
            input (Union[str, List[str]]): Input text(s) to embed. Can be a single string or a list of strings.
        Returns:
            Union[List[float], List[List[float]]]: Embedding vector(s) for the input(s).
                - Returns a single list of floats for a single string input.
                - Returns a list of lists of floats for a list of string inputs.
        Raises:
            ValueError: If input is invalid or embedding generation fails.
        """
        return self._generate_embeddings(input, input_type="passage")

    def embed_query(self, input: Union[str, List[str]]) -> Union[List[float], List[List[float]]]:
        """
        Embeds input text(s) for querying with input_type set to 'query'.
        Args:
            input (Union[str, List[str]]): Input text(s) to embed. Can be a single string or a list of strings.
        Returns:
            Union[List[float], List[List[float]]]: Embedding vector(s) for the input(s).
                - Returns a single list of floats for a single string input.
                - Returns a list of lists of floats for a list of string inputs.
        Raises:
            ValueError: If input is invalid or embedding generation fails.
        """
        return self._generate_embeddings(input, input_type="query")

    def _generate_embeddings(self, input: Union[str, List[str]], input_type: str) -> Union[List[float], List[List[float]]]:
        """
        Helper function to generate embeddings for given input text(s) with specified input_type.
        Args:
            input (Union[str, List[str]]): Input text(s) to embed.
            input_type (str): The type of embedding operation ('query' or 'passage').
        Returns:
            Union[List[float], List[List[float]]]: Embedding vector(s) for the input(s).
        """
        # Validate input
        if not input or (isinstance(input, list) and all(not q for q in input)):
            raise ValueError("Input must contain valid text.")

        single_input = isinstance(input, str)
        input_list = [input] if single_input else input

        # Process input in chunks for efficiency
        chunk_embeddings = []
        for i in range(0, len(input_list), self.chunk_size):
            batch = input_list[i:i + self.chunk_size]
            response = self.create_embedding(input=batch, input_type=input_type)
            chunk_embeddings.extend(r.embedding for r in response.data)

        # Normalize embeddings if required
        if self.normalize:
            normalized_embeddings = [
                (embedding / np.linalg.norm(embedding)).tolist() for embedding in chunk_embeddings
            ]
        else:
            normalized_embeddings = chunk_embeddings

        # Return a single embedding if the input was a single string; otherwise, return a list
        return normalized_embeddings[0] if single_input else normalized_embeddings

    def __call__(self, input: Union[str, List[str]], query: bool = False) -> Union[List[float], List[List[float]]]:
        """
        Allows the instance to be called directly to embed text(s).
        Args:
            input (Union[str, List[str]]): The input text(s) to embed.
            query (bool): If True, embeds for querying (input_type='query'). Otherwise, embeds for indexing (input_type='passage').
        Returns:
            Union[List[float], List[List[float]]]: Embedding vector(s) for the input(s).
        """
        if query:
            return self.embed_query(input)
        return self.embed(input)
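
For reference, a minimal usage sketch of the embedder defined above, assuming NVIDIA API credentials are configured in the environment and using the default model from the notebook; the split between embed (input_type='passage') and embed_query (input_type='query') follows the docstrings in this file:

# Sketch only: illustrates the embed / embed_query / __call__ paths defined above.
# Assumes NVIDIA credentials are available (e.g. via environment variables or a .env file).
from floki.document.embedder import NVIDIAEmbedder

embedder = NVIDIAEmbedder(model="nvidia/nv-embedqa-e5-v5")

# Index-time embeddings (input_type='passage')
doc_vectors = embedder.embed(["First passage.", "Second passage."])

# Query-time embedding (input_type='query')
query_vector = embedder.embed_query("Which passage mentions a fox?")

# Callable form: query=True routes to embed_query, otherwise to embed
same_vector = embedder("Which passage mentions a fox?", query=True)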