# OpenAPI-LLM

A Python library that converts OpenAPI specifications into Large Language Model (LLM) tool/function definitions, enabling OpenAPI invocations through LLM-generated tool calls.
## Table of Contents

- Overview
- Features
- Installation
- Quick Start
- Customization: `from_spec`
- Library Scope
- Requirements
- Development Setup
- Testing
- License
- Security
- Contributing
## Features

- Converts OpenAPI specifications into LLM-compatible tool/function definitions
- Supports multiple LLM providers (OpenAI, Anthropic, Cohere)
- Handles complex request bodies and parameter types
- Supports multiple authentication mechanisms
- Supports OpenAPI 3.0.x and 3.1.x specifications
- Accepts both YAML and JSON OpenAPI specifications
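To make the core idea concrete, here is a minimal, illustrative sketch of how a single OpenAPI operation maps to an OpenAI-style tool/function definition. The names and the mapping below are a simplified illustration, not the library's actual internals:

```python
# A single OpenAPI operation, as it might appear (already dereferenced)
# in a spec's "paths" section.
spec_operation = {
    "operationId": "search",
    "summary": "Run a web search",
    "parameters": [
        {
            "name": "q",
            "in": "query",
            "required": True,
            "schema": {"type": "string", "description": "Search query"},
        },
    ],
}


def operation_to_tool(op: dict) -> dict:
    """Sketch: convert one OpenAPI operation into an OpenAI tool definition."""
    properties, required = {}, []
    for param in op.get("parameters", []):
        properties[param["name"]] = param["schema"]
        if param.get("required"):
            required.append(param["name"])
    return {
        "type": "function",
        "function": {
            "name": op["operationId"],
            "description": op.get("summary", ""),
            "parameters": {
                "type": "object",
                "properties": properties,
                "required": required,
            },
        },
    }


tool = operation_to_tool(spec_operation)
print(tool["function"]["name"])  # search
```

The real conversion also has to handle request bodies, nested schemas, and provider-specific formats, which is exactly the work this library does for you.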
## Installation

```shell
pip install openapi-llm
```

- Python >= 3.8
By default, OpenAPI-LLM does not install any particular LLM provider. You can install exactly the ones you need:

```shell
pip install openai     # For OpenAI
pip install anthropic  # For Anthropic
pip install cohere     # For Cohere
```
## Quick Start

Below are minimal working examples for synchronous and asynchronous usage.
### Synchronous

```python
import os

from openai import OpenAI
from openapi_llm.client.openapi import OpenAPIClient

# Create the client from a spec URL (or file path, or raw string)
service_api = OpenAPIClient.from_spec(
    openapi_spec="https://bit.ly/serperdev_openapi",
    credentials=os.getenv("SERPERDEV_API_KEY"),
)

# Initialize your chosen LLM provider (e.g., OpenAI)
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

# Ask the LLM to call the SerperDev API
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Do a serperdev google search: Who was Nikola Tesla?"}],
    tools=service_api.tool_definitions,  # LLM tool definitions from the client
)

# Now actually invoke the OpenAPI call based on the LLM's generated tool call
service_response = service_api.invoke(response)
assert "inventions" in str(service_response)
```
### Asynchronous

```python
import asyncio
import os

from openai import AsyncOpenAI
from openapi_llm.client.openapi_async import AsyncOpenAPIClient


async def main():
    # Firecrawl OpenAPI spec
    openapi_spec_url = "https://raw.githubusercontent.com/mendableai/firecrawl/main/apps/api/v1-openapi.json"

    # Create the async client
    service_api = AsyncOpenAPIClient.from_spec(
        openapi_spec=openapi_spec_url,
        credentials=os.getenv("FIRECRAWL_API_KEY"),
    )

    # Initialize an async LLM (OpenAI)
    client = AsyncOpenAI(api_key=os.getenv("OPENAI_API_KEY"))

    # Ask the LLM to call Firecrawl's scraping endpoint
    response = await client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Scrape URL: https://news.ycombinator.com/"}],
        tools=service_api.tool_definitions,
    )

    # Use a context manager to manage aiohttp sessions
    async with service_api as api:
        service_response = await api.invoke(response)
        assert isinstance(service_response, dict)
        assert service_response.get("success", False), "Firecrawl scrape API call failed"


asyncio.run(main())
```
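Conceptually, `invoke()` has to locate the tool call the LLM generated in its response and translate it into a concrete HTTP request against the spec's server. A minimal sketch of the first half of that work, using a mock response in OpenAI's chat-completion shape (the mock and the helper below are illustrative, not the library's internals):

```python
import json

# A mock chat completion in OpenAI's response shape, as if the LLM decided
# to call a "search" tool generated from the spec.
mock_llm_response = {
    "choices": [{
        "message": {
            "tool_calls": [{
                "function": {
                    "name": "search",
                    "arguments": json.dumps({"q": "Who was Nikola Tesla?"}),
                }
            }]
        }
    }]
}


def extract_tool_call(response: dict):
    """Pull the operation name and JSON-decoded arguments from the response."""
    call = response["choices"][0]["message"]["tool_calls"][0]["function"]
    return call["name"], json.loads(call["arguments"])


name, args = extract_tool_call(mock_llm_response)
print(name, args)  # search {'q': 'Who was Nikola Tesla?'}
```

The second half, building the HTTP request (method, URL, query parameters, body, authentication) from the matching OpenAPI operation, is what the client handles for you.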
## Customization: `from_spec`

Both `OpenAPIClient` and `AsyncOpenAPIClient` provide a classmethod called `from_spec`, which automatically:

- Loads the OpenAPI specification from a file path, URL, or raw string.
- Builds a `ClientConfig` for you.
- Constructs the client instance.

For example, you can validate the spec before using it by supplying a custom `config_factory`:
```python
from openapi_llm.client.config import ClientConfig, create_client_config
from openapi_llm.client.openapi import OpenAPIClient
from openapi_spec_validator import validate_spec


def my_custom_config_factory(openapi_spec: str, **kwargs) -> ClientConfig:
    config = create_client_config(openapi_spec, **kwargs)
    validate_spec(config.openapi_spec.spec_dict)
    return config


# Usage:
client = OpenAPIClient.from_spec(
    openapi_spec="path/to/local_spec.yaml",
    config_factory=my_custom_config_factory,
    credentials="secret_token",
)
```
This design gives you full control over the spec-loading and configuration-building process while still offering simple defaults.
## Library Scope

OpenAPI-LLM focuses on the core task of bridging LLM function calls with OpenAPI specifications. It does not perform advanced validation or impose a high-level framework; you can integrate it into your existing app or build additional logic on top.
This library does not automatically validate your specs. If your OpenAPI file is invalid, you may see errors at usage time. Tools such as `openapi-spec-validator` or `prance` can help ensure correctness before you load your spec here.
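Even without a full validator, a cheap structural sanity check can catch the most common mistakes before loading a spec. A minimal, pure-stdlib sketch (this helper is illustrative, not part of the library, and is no substitute for real validation):

```python
def quick_spec_check(spec: dict) -> list:
    """A minimal structural sanity check, not a full OpenAPI validation."""
    problems = []
    if "openapi" not in spec:
        problems.append("missing 'openapi' version field")
    if "info" not in spec:
        problems.append("missing 'info' section")
    if not spec.get("paths"):
        problems.append("no 'paths' defined")
    return problems


spec = {"openapi": "3.1.0", "info": {"title": "Demo", "version": "1.0"}, "paths": {}}
print(quick_spec_check(spec))  # ["no 'paths' defined"]
```

A spec with no operations would produce an empty tool list, so catching that early gives a clearer error than a failed tool call later.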
## Requirements

- Python >= 3.8
- Dependencies:
  - `jsonref`
  - `requests`
  - `PyYAML`
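Of these, `jsonref` is what resolves JSON References (`$ref`) inside the spec so that schemas can be read in place. As an illustration of what that resolution means, here is a pure-stdlib sketch of following a local `#/...` pointer (not how the library or `jsonref` actually implement it):

```python
import json


def resolve_local_ref(spec: dict, ref: str) -> dict:
    """Follow a local '#/...' JSON pointer within the spec document."""
    node = spec
    for part in ref.lstrip("#/").split("/"):
        node = node[part]
    return node


spec = json.loads("""{
  "components": {"schemas": {"Query": {"type": "string"}}},
  "paths": {"/search": {"get": {"parameters": [
    {"name": "q", "schema": {"$ref": "#/components/schemas/Query"}}
  ]}}}
}""")

schema = resolve_local_ref(spec, "#/components/schemas/Query")
print(schema)  # {'type': 'string'}
```

Real references can also point to external files and URLs and can be circular, which is why a dedicated library handles this step.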
## Development Setup

- Clone the repository:

  ```shell
  git clone https://github.com/vblagoje/openapi-llm.git
  ```

- Install Hatch (if you haven't already):

  ```shell
  pip install hatch
  ```

- Install pre-commit and set up the hooks:

  ```shell
  pip install pre-commit
  pre-commit install
  ```

- Install the LLM provider dependencies you need (e.g., openai, anthropic, cohere):

  ```shell
  pip install openai anthropic cohere
  ```
## Testing

Run tests using hatch:

```shell
# Unit tests
hatch run test:unit

# Integration tests
hatch run test:integration

# Type checking
hatch run test:typing

# Linting
hatch run test:lint
```
## License

This project is licensed under the MIT License. See the LICENSE file for more details.
## Security

For security concerns, please see our Security Policy.
## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.
## Authors

Vladimir Blagojevic ([email protected])

Early reviews and guidance by Madeesh Kannan.