Commit 8f86280

Merge pull request #1036 from guardrails-ai/dtam/add_server_entry_to_readme
Update README.md
2 parents: d89b958 + c79ab39

1 file changed
README.md

Lines changed: 37 additions & 0 deletions
@@ -165,6 +165,43 @@

### Guardrails Server

Guardrails can be set up as a standalone Flask service with `guardrails start`, allowing you to interact with it via a REST API. This approach simplifies development and deployment of Guardrails-powered applications.

1. Install: `pip install "guardrails-ai"`
2. Configure: `guardrails configure`
3. Create a config: `guardrails create --validators=hub://guardrails/two_words --name=two-word-guard`
4. Start the dev server: `guardrails start --config=./config.py`
5. Interact with the dev server via the snippets below:
```
# with the guardrails client
import guardrails as gr

# route validation through the running Guardrails server
gr.settings.use_server = True
guard = gr.Guard(name='two-word-guard')
guard.validate('this is more than two words')

# or with the openai sdk
import os
import openai

# point the OpenAI client at the guard's OpenAI-compatible endpoint
openai.base_url = "http://localhost:8000/guards/two-word-guard/openai/v1/"
os.environ["OPENAI_API_KEY"] = "youropenaikey"

messages = [
    {
        "role": "user",
        "content": "tell me about an apple with 3 words exactly",
    },
]

completion = openai.chat.completions.create(
    model="gpt-4o-mini",
    messages=messages,
)
```
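Because the guard is exposed over an OpenAI-compatible REST endpoint, you can also call it with plain HTTP. The sketch below is illustrative and not part of the README diff: it assumes the dev server from step 4 is running locally, appends the standard `/chat/completions` path to the base URL shown above, and assumes the response follows the usual OpenAI schema; how the OpenAI key reaches the upstream model may depend on your server configuration.

```
# Illustrative only: raw REST call to the dev server's OpenAI-compatible endpoint.
import requests

resp = requests.post(
    "http://localhost:8000/guards/two-word-guard/openai/v1/chat/completions",
    headers={"Authorization": "Bearer youropenaikey"},  # if your setup expects the key here
    json={
        "model": "gpt-4o-mini",
        "messages": [
            {"role": "user", "content": "tell me about an apple with 3 words exactly"},
        ],
    },
    timeout=30,
)
resp.raise_for_status()
# Assuming an OpenAI-style response body.
print(resp.json()["choices"][0]["message"]["content"])
```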
For production deployments, we recommend using Docker with Gunicorn as the WSGI server for improved performance and scalability.
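As a rough, unofficial sketch of that kind of deployment (not part of this commit), the Dockerfile below packages the guard config from step 3 and runs the documented `guardrails start` command; for a real production image you would swap the final command for a Gunicorn invocation bound to the Guardrails server's WSGI entry point (not shown in this diff), and install from the Guardrails Hub any validators your config needs.

```
# Illustrative Dockerfile sketch only -- not an official Guardrails image.
FROM python:3.11-slim

WORKDIR /app

# Guardrails plus Gunicorn for production serving.
RUN pip install --no-cache-dir "guardrails-ai" gunicorn

# config.py generated locally with `guardrails create` (step 3 above).
# Validators from the Guardrails Hub may also need to be installed in the
# image, which requires hub credentials (see `guardrails configure`).
COPY config.py .

EXPOSE 8000

# Dev-style entry point; for production, replace this with a Gunicorn command
# pointed at the server's WSGI app, and make sure the server binds to an
# address reachable from outside the container.
CMD ["guardrails", "start", "--config=./config.py"]
```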
## FAQ
#### I'm running into issues with Guardrails. Where can I get help?
