
Commit 7974156

Merge pull request #1360 from guardrails-ai/v0.7.0-release

v0.7.0 Release

2 parents 8bd240e + b6c253e

File tree

13 files changed: +711, -1114 lines


.github/workflows/ci.yml

Lines changed: 4 additions & 4 deletions

```diff
@@ -20,7 +20,7 @@ jobs:
     runs-on: ubuntu-latest
     strategy:
       matrix:
-        python-version: ["3.9", "3.10", "3.11", "3.12", "3.13"]
+        python-version: ["3.10", "3.11", "3.12", "3.13"]
     steps:
       - uses: actions/checkout@v4
       - name: Set up Python ${{ matrix.python-version }}
@@ -48,7 +48,7 @@ jobs:
     runs-on: ubuntu-latest
     strategy:
       matrix:
-        python-version: ["3.9", "3.10", "3.11", "3.12", "3.13"]
+        python-version: ["3.10", "3.11", "3.12", "3.13"]
     steps:
       - uses: actions/checkout@v4
       - name: Set up Python ${{ matrix.python-version }}
@@ -75,7 +75,7 @@ jobs:
     runs-on: ubuntu-latest
     strategy:
       matrix:
-        python-version: ["3.9", "3.10", "3.11", "3.12", "3.13"]
+        python-version: ["3.10", "3.11", "3.12", "3.13"]
     steps:
       - uses: actions/checkout@v4
       - name: Set up Python ${{ matrix.python-version }}
@@ -102,7 +102,7 @@ jobs:
     runs-on: LargeBois
     strategy:
       matrix:
-        python-version: ["3.9", "3.10", "3.11", "3.12", "3.13"]
+        python-version: ["3.10", "3.11", "3.12", "3.13"]
     # TODO: fix errors so that we can run both `make dev` and `make full`
     # dependencies: ['dev', 'full']
     # dependencies: ["full"]
```
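Every matrix in this diff drops Python 3.9, so 3.10 becomes the minimum CI-tested interpreter. A minimal sketch of a runtime check mirroring that support window (the version bounds here are read off the diff, not from any official guardrails packaging metadata):

```python
import sys

# Minor versions exercised by the updated CI matrices in this diff.
SUPPORTED = [(3, 10), (3, 11), (3, 12), (3, 13)]

def is_tested_python(version_info=sys.version_info):
    """Return True if the interpreter matches a CI-tested minor version."""
    return tuple(version_info[:2]) in SUPPORTED

print(is_tested_python((3, 9, 0)))   # -> False; 3.9 was removed from the matrix
print(is_tested_python((3, 12, 4)))  # -> True
```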

.github/workflows/cli-compatibility.yml

Lines changed: 1 addition & 3 deletions

```diff
@@ -10,12 +10,10 @@ jobs:
     runs-on: ubuntu-latest
     strategy:
       matrix:
-        python-version: ["3.9", "3.10", "3.11", "3.12", "3.13"]
+        python-version: ["3.10", "3.11", "3.12", "3.13"]
         typer-version: ["0.16.0", "0.17.0", "0.18.0", "0.19.2"]
         click-version: ["8.1.0", "8.2.0"]
         exclude:
-          - python-version: "3.9"
-            click-version: "8.2.0"
           - typer-version: "0.16.0"
             click-version: "8.2.0"
           - typer-version: "0.16.0"
```
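Since Python 3.9 leaves the matrix, its dedicated exclude entry (3.9 with click 8.2.0) becomes dead weight and is removed too. A toy sketch of how such a matrix expands after `exclude` rules prune combinations (names mirror the workflow keys; this illustrates the semantics, it is not GitHub Actions' actual engine, and only the surviving typer/click exclude is modeled):

```python
from itertools import product

python_versions = ["3.10", "3.11", "3.12", "3.13"]
typer_versions = ["0.16.0", "0.17.0", "0.18.0", "0.19.2"]
click_versions = ["8.1.0", "8.2.0"]

# One exclude entry from the updated workflow.
excludes = [{"typer-version": "0.16.0", "click-version": "8.2.0"}]

def expand_matrix():
    combos = []
    for py, typer, click in product(python_versions, typer_versions, click_versions):
        combo = {"python-version": py, "typer-version": typer, "click-version": click}
        # A combination is excluded when every key/value of an exclude entry matches it.
        if any(all(combo[k] == v for k, v in ex.items()) for ex in excludes):
            continue
        combos.append(combo)
    return combos

jobs = expand_matrix()
print(len(jobs))  # 4 * 4 * 2 = 32 total, minus 4 excluded = 28
```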
Lines changed: 223 additions & 0 deletions (new notebook file)

```json
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "0f4d10ab",
   "metadata": {},
   "source": [
    "# LangChain\n",
    "\n",
    "## Overview\n",
    "\n",
    "This is a comprehensive guide on integrating Guardrails with [LangChain](https://github.com/langchain-ai/langchain), a framework for developing applications powered by large language models. By combining the validation capabilities of Guardrails with the flexible architecture of LangChain, you can create reliable and robust AI applications.\n",
    "\n",
    "### Key Features\n",
    "\n",
    "- **Easy Integration**: Guardrails can be seamlessly added to LangChain's LCEL syntax, allowing for quick implementation of validation checks.\n",
    "- **Flexible Validation**: Guardrails provides various validators that can be used to enforce structural, type, and quality constraints on LLM outputs.\n",
    "- **Corrective Actions**: When validation fails, Guardrails can take corrective measures, such as retrying LLM prompts or fixing outputs.\n",
    "- **Compatibility**: Works with different LLMs and can be used in various LangChain components like chains, agents, and retrieval strategies.\n",
    "\n",
    "## Prerequisites\n",
    "\n",
    "1. Ensure you have the following LangChain packages installed, along with Guardrails:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "382bd905",
   "metadata": {},
   "outputs": [],
   "source": [
    "! pip install guardrails-ai langchain langchain_openai"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3594ef6c",
   "metadata": {},
   "source": [
    "2. Install the necessary validators from the Guardrails Hub:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "05635d8e",
   "metadata": {},
   "outputs": [],
   "source": [
    "! guardrails hub install hub://guardrails/competitor_check --quiet\n",
    "! guardrails hub install hub://guardrails/toxic_language --quiet"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9ab6fdd9",
   "metadata": {},
   "source": [
    "- `CompetitorCheck`: Identifies and optionally removes mentions of specified competitor names.\n",
    "- `ToxicLanguage`: Detects and optionally removes toxic or inappropriate language from the output.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "325a69d3",
   "metadata": {},
   "source": [
    "## Basic Integration\n",
    "\n",
    "Here's a basic example of how to integrate Guardrails with a LangChain LCEL chain:\n",
    "\n",
    "1. Add the required imports and initialize the OpenAI model:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "ae149cdb",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/Users/calebcourier/Projects/support/langchain/.venv/lib/python3.12/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
      "  from .autonotebook import tqdm as notebook_tqdm\n"
     ]
    }
   ],
   "source": [
    "from langchain_openai import ChatOpenAI\n",
    "from langchain_core.output_parsers import StrOutputParser\n",
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "\n",
    "model = ChatOpenAI(model=\"gpt-4\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7701d9e4",
   "metadata": {},
   "source": [
    "2. Create a Guard object with two validators: CompetitorCheck and ToxicLanguage."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "67f810db",
   "metadata": {},
   "outputs": [],
   "source": [
    "from guardrails import Guard\n",
    "from guardrails.hub import CompetitorCheck, ToxicLanguage\n",
    "\n",
    "competitors_list = [\"delta\", \"american airlines\", \"united\"]\n",
    "guard = Guard().use_many(\n",
    "    CompetitorCheck(competitors=competitors_list, on_fail=\"fix\"),\n",
    "    ToxicLanguage(on_fail=\"filter\"),\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "aff173e9",
   "metadata": {},
   "source": [
    "3. Define the LCEL chain components and pipe the prompt, model, output parser, and the Guard together.\n",
    "The `guard.to_runnable()` method converts the Guardrails guard into a LangChain-compatible runnable object."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "7d6c4dc1",
   "metadata": {},
   "outputs": [],
   "source": [
    "prompt = ChatPromptTemplate.from_template(\"Answer this question {question}\")\n",
    "output_parser = StrOutputParser()\n",
    "\n",
    "chain = prompt | model | guard.to_runnable() | output_parser"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f293a6f7",
   "metadata": {},
   "source": [
    "4. Invoke the chain:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "7c037923",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/Users/calebcourier/Projects/support/langchain/.venv/lib/python3.12/site-packages/guardrails/validator_service/__init__.py:84: UserWarning: Could not obtain an event loop. Falling back to synchronous validation.\n",
      "  warnings.warn(\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1. Southwest Airlines\n",
      "2. Delta Air Lines\n",
      "3. American Airlines\n",
      "4. United Airlines\n",
      "5. JetBlue Airways\n"
     ]
    }
   ],
   "source": [
    "result = chain.invoke(\n",
    "    {\"question\": \"What are the top five airlines for domestic travel in the US?\"}\n",
    ")\n",
    "print(result)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2cd4c67b",
   "metadata": {},
   "source": [
    "Example output:\n",
    "    ```\n",
    "    1. Southwest Airlines\n",
    "    3. JetBlue Airways\n",
    "    ```\n",
    "\n",
    "In this example, the chain sends the question to the model and then applies Guardrails validators to the response. The CompetitorCheck validator specifically removes mentions of the specified competitors (Delta, American Airlines, United), resulting in a filtered list of non-competitor airlines."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": ".venv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.16"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
```
Lines changed: 120 additions & 0 deletions (new file)

import CodeOutputBlock from '../../code-output-block.jsx';

# LangChain

## Overview

This is a comprehensive guide on integrating Guardrails with [LangChain](https://github.com/langchain-ai/langchain), a framework for developing applications powered by large language models. By combining the validation capabilities of Guardrails with the flexible architecture of LangChain, you can create reliable and robust AI applications.

### Key Features

- **Easy Integration**: Guardrails can be seamlessly added to LangChain's LCEL syntax, allowing for quick implementation of validation checks.
- **Flexible Validation**: Guardrails provides various validators that can be used to enforce structural, type, and quality constraints on LLM outputs.
- **Corrective Actions**: When validation fails, Guardrails can take corrective measures, such as retrying LLM prompts or fixing outputs.
- **Compatibility**: Works with different LLMs and can be used in various LangChain components like chains, agents, and retrieval strategies.

## Prerequisites

1. Ensure you have the following LangChain packages installed, along with Guardrails:

<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! Instead, edit the notebook with the same location & name as this file. -->

```bash
pip install guardrails-ai langchain langchain_openai
```

2. Install the necessary validators from the Guardrails Hub:

```bash
guardrails hub install hub://guardrails/competitor_check --quiet
guardrails hub install hub://guardrails/toxic_language --quiet
```

- `CompetitorCheck`: Identifies and optionally removes mentions of specified competitor names.
- `ToxicLanguage`: Detects and optionally removes toxic or inappropriate language from the output.

## Basic Integration

Here's a basic example of how to integrate Guardrails with a LangChain LCEL chain:

1. Add the required imports and initialize the OpenAI model:

```python
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

model = ChatOpenAI(model="gpt-4")
```

<CodeOutputBlock lang="python">

```
/Users/calebcourier/Projects/support/langchain/.venv/lib/python3.12/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html
  from .autonotebook import tqdm as notebook_tqdm
```

</CodeOutputBlock>

2. Create a Guard object with two validators: CompetitorCheck and ToxicLanguage.

```python
from guardrails import Guard
from guardrails.hub import CompetitorCheck, ToxicLanguage

competitors_list = ["delta", "american airlines", "united"]
guard = Guard().use_many(
    CompetitorCheck(competitors=competitors_list, on_fail="fix"),
    ToxicLanguage(on_fail="filter"),
)
```

3. Define the LCEL chain components and pipe the prompt, model, output parser, and the Guard together.
The `guard.to_runnable()` method converts the Guardrails guard into a LangChain-compatible runnable object.

```python
prompt = ChatPromptTemplate.from_template("Answer this question {question}")
output_parser = StrOutputParser()

chain = prompt | model | guard.to_runnable() | output_parser
```

4. Invoke the chain:

```python
result = chain.invoke(
    {"question": "What are the top five airlines for domestic travel in the US?"}
)
print(result)
```

<CodeOutputBlock lang="python">

```
/Users/calebcourier/Projects/support/langchain/.venv/lib/python3.12/site-packages/guardrails/validator_service/__init__.py:84: UserWarning: Could not obtain an event loop. Falling back to synchronous validation.
  warnings.warn(

1. Southwest Airlines
2. Delta Air Lines
3. American Airlines
4. United Airlines
5. JetBlue Airways
```

</CodeOutputBlock>

Example output:
```
1. Southwest Airlines
3. JetBlue Airways
```

In this example, the chain sends the question to the model and then applies Guardrails validators to the response. The CompetitorCheck validator specifically removes mentions of the specified competitors (Delta, American Airlines, United), resulting in a filtered list of non-competitor airlines.
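The competitor-removal step can be approximated, very roughly, with plain string filtering. This toy stand-in drops any output line mentioning a listed competitor; it is an illustration of the `on_fail="fix"` behavior only, not Guardrails' actual CompetitorCheck implementation, whose matching is more sophisticated:

```python
competitors = ["delta", "american airlines", "united"]

def remove_competitor_lines(text, competitors):
    """Keep only lines that mention none of the competitor names (case-insensitive)."""
    kept = []
    for line in text.splitlines():
        if not any(name in line.lower() for name in competitors):
            kept.append(line)
    return "\n".join(kept)

# The raw model output from the example above, before validation.
raw = """1. Southwest Airlines
2. Delta Air Lines
3. American Airlines
4. United Airlines
5. JetBlue Airways"""

print(remove_competitor_lines(raw, competitors))
```

Note that this naive version keeps the original list numbering of the surviving lines; the real validator's fixed output may differ, as the renumbered example output above shows.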
