
Commit 54be587

⬆️v0.2 pull request #16 from shroominic/v0.2
⬆️v0.2 - new features + docs
2 parents 2bdf3cb + a81e9af commit 54be587

File tree

152 files changed (+6651 -2491 lines)


.github/workflows/code-check.yml (+16 -16)

@@ -6,21 +6,21 @@ jobs:
   pre-commit:
     strategy:
       matrix:
-        python-version: ['3.10', '3.11', '3.12']
+        python-version: ["3.10", "3.11"]
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v2
-      - uses: eifinger/setup-rye@v1
-        with:
-          enable-cache: true
-          cache-prefix: 'venv-funcchain'
-      - name: pin version
-        run: rye pin ${{ matrix.python-version }}
-      - name: Sync rye
-        run: rye sync
-      - name: Run pre-commit
-        run: rye run pre-commit run --all-files
-      - name: Run tests
-        env:
-          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
-        run: rye run pytest -m "not skip_on_actions"
+      - uses: actions/checkout@v2
+      - uses: eifinger/setup-rye@v1
+        with:
+          enable-cache: true
+          cache-prefix: "venv-funcchain"
+      - name: pin version
+        run: rye pin ${{ matrix.python-version }}
+      - name: Sync rye
+        run: rye sync
+      - name: Run pre-commit
+        run: rye run pre-commit run --all-files
+      - name: Run tests
+        env:
+          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
+        run: rye run pytest -m "not skip_on_actions"

.gitignore (+1)

@@ -165,3 +165,4 @@ cython_debug/
 
 vscext
 .models
+.python-version

.pre-commit-config.yaml (+12 -21)

@@ -1,23 +1,14 @@
 repos:
+  - repo: https://github.com/pre-commit/pre-commit-hooks
+    rev: v4.5.0
+    hooks:
+      - id: end-of-file-fixer
+      - id: trailing-whitespace
 
-  - repo: https://github.com/pre-commit/pre-commit-hooks
-    rev: v4.5.0
-    hooks:
-      - id: check-yaml
-      - id: end-of-file-fixer
-      - id: trailing-whitespace
-
-  - repo: https://github.com/pre-commit/mirrors-mypy
-    rev: v1.7.1
-    hooks:
-      - id: mypy
-        args: [--ignore-missing-imports, --follow-imports=skip]
-        additional_dependencies: [types-requests]
-
-  - repo: https://github.com/astral-sh/ruff-pre-commit
-    rev: v0.1.7
-    hooks:
-      - id: ruff
-        args: [ --fix ]
-      - id: ruff-format
-        types_or: [ python, pyi, jupyter ]
+  - repo: https://github.com/astral-sh/ruff-pre-commit
+    rev: v0.1.7
+    hooks:
+      - id: ruff
+        args: [--fix]
+      - id: ruff-format
+        types_or: [python, pyi, jupyter]

.python-version (-1)

This file was deleted.

CONTRIBUTING.md (+11)

@@ -0,0 +1,11 @@
+# Contributing
+
+To contribute, clone the repo and run:
+
+```bash
+./dev_setup.sh
+```
+
+You should not run untrusted scripts, so ask ChatGPT to explain what the contents of this script do!
+
+This will install and set up your development environment using [rye](https://rye-up.com) or pip.

MODELS.md (+16 -11)

@@ -2,10 +2,10 @@
 
 ## LangChain Chat Models
 
-You can set the `settings.llm` with any ChatModel the LangChain library.
+You can set `settings.llm` to any LangChain ChatModel.
 
 ```python
-from langchain.chat_models import AzureChatOpenAI
+from langchain_openai.chat_models import AzureChatOpenAI
 
 settings.llm = AzureChatOpenAI(...)
 ```
@@ -16,23 +16,28 @@ You can also set the `settings.llm` with a string identifier of a ChatModel incl
 
 ### Schema
 
-`<provider>/<name>:<optional_label>`
+`<provider>/<model_name>:<optional_label>`
 
 ### Providers
 
 - `openai`: OpenAI Chat Models
-- `gguf`: Huggingface GGUF Models from TheBloke using LlamaCpp
-- `local` | `thebloke` | `huggingface`: alias for `gguf`
+- `llamacpp`: Run local models directly using llamacpp (alias: `thebloke`, `gguf`)
+- `ollama`: Run local models through Ollama (wrapper for llamacpp)
+- `azure`: Azure Chat Models
+- `anthropic`: Anthropic Chat Models
+- `google`: Google Chat Models
 
 ### Examples
 
-- `openai/gpt-3.5-turbo`: Classic ChatGPT
-- `gguf/deepseek-llm-7b-chat`: DeepSeek LLM 7B Chat
-- `gguf/OpenHermes-2.5-7B`: OpenHermes 2.5
-- `TheBloke/deepseek-llm-7B-chat-GGUF:Q3_K_M`: (eg thebloke huggingface identifier)
-- `local/neural-chat-7B-v3-1`: Neural Chat 7B (local as alias for gguf)
+- `openai/gpt-3.5-turbo`: ChatGPT Classic
+- `openai/gpt-4-1106-preview`: GPT-4-Turbo
+- `ollama/openchat`: OpenChat3.5-1210
+- `ollama/openhermes2.5-mistral`: OpenHermes 2.5
+- `llamacpp/openchat-3.5-1210`: OpenChat3.5-1210
+- `TheBloke/Nous-Hermes-2-SOLAR-10.7B-GGUF`: alias for `llamacpp/...`
+- `TheBloke/openchat-3.5-0106-GGUF:Q3_K_L`: with Q label
 
 ### additional notes
 
-Checkout the file `src/funcchain/utils/model_defaults.py` for the code that parses the string identifier.
+Check out the file `src/funcchain/model/defaults.py` for the code that parses the string identifier.
 Feel free to create a PR to add more models to the defaults. Or tell me how wrong I am and create a better system.
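
Applying the `<provider>/<model_name>:<optional_label>` schema from this file, a minimal sketch of selecting models via string identifiers; the pairing of model name and quantization label is illustrative, assembled from the example lists above rather than copied from the repo:

```python
from funcchain import settings

# hosted model, identifier taken from the examples above
settings.llm = "openai/gpt-4-1106-preview"

# local model with an optional quantization label, following the
# <provider>/<model_name>:<optional_label> schema (illustrative combination)
settings.llm = "llamacpp/openchat-3.5-1210:Q3_K_L"
```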

README.md (+28 -21)

@@ -8,13 +8,14 @@
 [![Twitter Follow](https://img.shields.io/twitter/follow/shroominic?style=social)](https://x.com/shroominic)
 
 ```bash
-> pip install "funcchain[all]"
+pip install funcchain
 ```
 
 ## Introduction
 
 `funcchain` is the *most pythonic* way of writing cognitive systems. Leveraging pydantic models as output schemas combined with langchain in the backend allows for a seamless integration of llms into your apps.
-It works perfect with OpenAI Functions or LlamaCpp grammars (json-schema-mode).
+It works perfectly with OpenAI Functions or LlamaCpp grammars (json-schema-mode) for efficient structured output.
+In the backend it compiles the funcchain syntax into langchain runnables, so you can easily invoke, stream or batch process your pipelines.
 
 [![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/ricklamers/funcchain-demo)
 
@@ -94,7 +95,7 @@ match lst:
 ## Vision Models
 
 ```python
-from PIL import Image
+from funcchain import Image
 from pydantic import BaseModel, Field
 from funcchain import chain, settings
 
@@ -132,7 +133,7 @@ from pydantic import BaseModel, Field
 from funcchain import chain, settings
 
 # auto-download the model from huggingface
-settings.llm = "gguf/openhermes-2.5-mistral-7b"
+settings.llm = "ollama/openchat"
 
 class SentimentAnalysis(BaseModel):
     analysis: str
@@ -153,32 +154,38 @@ print(poem.analysis)
 
 ## Features
 
-- minimalistic and easy to use
-- easy swap between openai and local models
-- write prompts as python functions
-- pydantic models for output schemas
-- langchain core in the backend
-- fstrings or jinja templates for prompts
-- fully utilises OpenAI Functions or LlamaCpp Grammars
+- pythonic
+- easy swap between openai or local models
+- dynamic output types (pydantic models, or primitives)
+- vision llm support
+- langchain_core as backend
+- jinja templating for prompts
+- reliable structured output
+- auto retry parsing
 - langsmith support
-- async and pythonic
-- auto gguf model download from huggingface
-- streaming support
+- sync, async, streaming, parallel, fallbacks
+- gguf download from huggingface
+- type hints for all functions and mypy support
+- chat router component
+- composable with langchain LCEL
+- easy error handling
+- enums and literal support
+- custom parsing types
 
 ## Documentation
 
-Highly recommend to try out the examples in the `./examples` folder.
+[Check out the docs here](https://shroominic.github.io/funcchain/) 👈
 
-Coming soon... feel free to add helpful .md files :)
+We also highly recommend trying out and running the examples in the `./examples` folder.
 
 ## Contribution
 
-You want to contribute? That's great! Please run the dev setup to get started:
+You want to contribute? Thanks, that's great!
+For more information check out the [Contributing Guide](docs/contributing/dev-setup.md).
+Please run the dev setup to get started:
 
 ```bash
-> git clone https://github.com/shroominic/funcchain.git && cd funcchain
+git clone https://github.com/shroominic/funcchain.git && cd funcchain
 
-> ./dev_setup.sh
+./dev_setup.sh
 ```
-
-Thanks!
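
A minimal sketch of the pattern the updated Introduction describes: the prompt lives in the docstring, a pydantic model is the output schema, and `chain()` resolves the call. The `Recipe` model and the example dish are illustrative and not taken from the repo:

```python
from funcchain import chain, settings
from pydantic import BaseModel, Field

settings.llm = "openai/gpt-3.5-turbo"  # any identifier from MODELS.md should work here

class Recipe(BaseModel):
    # hypothetical output schema for illustration
    ingredients: list[str] = Field(description="ingredients with quantities")
    steps: list[str] = Field(description="preparation steps in order")

def generate_recipe(dish: str) -> Recipe:
    """
    Generate a recipe for the given dish.
    """
    return chain()

recipe = generate_recipe("spaghetti carbonara")
print(recipe.ingredients)
```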

dev-setup.sh (+33 -7)

@@ -3,16 +3,42 @@
 # check if rye is installed
 if ! command -v rye &> /dev/null
 then
-    echo "rye could not be found: installing now ..."
-    curl -sSf https://rye-up.com/get | bash
-    echo "Check the rye docs for more info: https://rye-up.com/"
+    echo "rye could not be found"
+    echo "Would you like to install via rye or pip? Enter 'rye' or 'pip':"
+    read install_method
+    clear
+
+    if [ "$install_method" = "rye" ]
+    then
+        echo "Installing via rye now ..."
+        curl -sSf https://rye-up.com/get | bash
+        echo "Check the rye docs for more info: https://rye-up.com/"
+
+    elif [ "$install_method" = "pip" ]
+    then
+        echo "Installing via pip now ..."
+        python3 -m venv .venv
+        source .venv/bin/activate
+        pip install -r requirements.lock
+
+    else
+        echo "Invalid option. Please run the script again and enter 'rye' or 'pip'."
+        exit 1
+    fi
+
+    clear
 fi
 
-echo "SYNC: setup .venv"
-rye sync
+if [ "$install_method" = "rye" ]
+then
+    echo "SYNC: setup .venv"
+    rye sync
+
+    echo "ACTIVATE: activate .venv"
+    rye shell
 
-echo "ACTIVATE: activate .venv"
-rye shell
+    clear
+fi
 
 echo "SETUP: install pre-commit hooks"
 pre-commit install

docs/advanced/async.md (+39)

@@ -0,0 +1,39 @@
+# Async
+
+## Why and how to use async?
+
+Asynchronous programming is a way to easily parallelize processes in python.
+This is very useful when dealing with LLMs, because every request takes a long time and the python interpreter should do a lot of other things in the meantime instead of waiting for the request.
+
+Check out [this brilliant async tutorial](https://fastapi.tiangolo.com/async/) if you have never coded in an asynchronous way.
+
+## Async in FuncChain
+
+You can use async in funcchain by creating your functions using `achain()` instead of the normal `chain()`.
+It would then look like this:
+
+```python
+from funcchain import achain
+
+async def generate_poem(topic: str) -> str:
+    """
+    Generate a poem inspired by the given topic.
+    """
+    return await achain()
+```
+
+You can then `await` the async `generate_poem` function inside another async function or call it directly using `asyncio.run(generate_poem("birds"))`.
+
+## Async in LangChain
+
+When converting your funcchains into a langchain runnable you can use the native langchain way of async.
+This would be `.ainvoke(...)`, `.astream(...)` or `.abatch(...)`.
+
+## Async Streaming
+
+You can use langchain's async streaming interface, but also the `stream_to(...)` wrapper (explained [here](../concepts/streaming.md#strem_to-wrapper)) as an async context manager.
+
+```python
+async with stream_to(...):
+    await ...
+```
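
To make the parallelization point in this new doc concrete, here is a small sketch that reuses the `generate_poem` function from the snippet above and runs several requests concurrently with `asyncio.gather`; the topics are arbitrary:

```python
import asyncio

from funcchain import achain

async def generate_poem(topic: str) -> str:
    """
    Generate a poem inspired by the given topic.
    """
    return await achain()

async def main() -> None:
    # fire all requests concurrently instead of awaiting them one by one
    poems = await asyncio.gather(
        generate_poem("birds"),
        generate_poem("rivers"),
        generate_poem("mountains"),
    )
    for poem in poems:
        print(poem)

asyncio.run(main())
```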

docs/advanced/codebase-scaling.md (+9)

@@ -0,0 +1,9 @@
+# Codebase Scaling
+
+## Multi file projects
+
+### TODO
+
+## Structure
+
+### TODO

docs/advanced/custom-parser-types.md (+21)

@@ -0,0 +1,21 @@
+# Custom Parsers
+
+## Example
+
+### TODO
+
+## Grammars
+
+### TODO
+
+## Format Instructions
+
+### TODO
+
+## parse() Function
+
+### TODO
+
+## Write your own Parser
+
+### TODO

docs/advanced/customization.md (+17)

@@ -0,0 +1,17 @@
+# Customization
+
+## extra args inside chain
+
+### TODO
+
+## low level langchain
+
+### TODO
+
+## extra args inside @runnable
+
+### TODO
+
+## custom ll models
+
+### TODO

docs/advanced/runnables.md (+9)

@@ -0,0 +1,9 @@
+# runnables
+
+## LangChain Expression Language (LCEL)
+
+### TODO
+
+## Streaming, Parallel, Async and
+
+### TODO

docs/advanced/signature.md (+9)

@@ -0,0 +1,9 @@
+# Signature
+
+## Compilation
+
+### TODO
+
+## Schema
+
+### TODO
