The official Pulse SDK for Python.
- Low‑level CoreClient for direct API calls: embeddings, similarity, themes, clustering, sentiment, summaries, extractions
- High‑level Analyzer for orchestrating multi‑step workflows with caching
- Built-in processes: ThemeGeneration, ThemeAllocation, SentimentProcess, Cluster
- Result helpers: pandas DataFrame conversion, summaries, visualizations (bar charts, scatter, dendrogram)
- On‑disk and in‑memory caching via diskcache
- First-class interop with pandas, NumPy, and scikit‑learn
- Online docs: https://researchwiseai.github.io/pulse-py/
- In-repo docs: see docs/README.md for the index.
- Build the docs with MkDocs:
  - Install: `pip install mkdocs mkdocs-material`
  - Serve locally: `mkdocs serve` (http://127.0.0.1:8000)
  - Build the static site: `mkdocs build`
Install with all features (recommended):

```bash
pip install pulse-sdk[all]
```

Minimal installation (API access only):

```bash
pip install pulse-sdk[minimal]
```

Custom installation (choose your features):

```bash
# Data science workflow
pip install pulse-sdk[analysis,visualization,caching]

# Web service integration
pip install pulse-sdk[minimal,progress]

# Complete NLP pipeline
pip install pulse-sdk[analysis,nlp,progress]
```

Available feature sets:

- `minimal` - Core API access only (httpx, pydantic)
- `analysis` - Data science tools (numpy, pandas, scikit-learn)
- `visualization` - Plotting capabilities (matplotlib, seaborn)
- `nlp` - Text processing utilities (textblob)
- `caching` - Performance optimization (diskcache)
- `progress` - Progress bars (tqdm)
- `all` - Everything included
- `dev` - Development tools (testing, formatting, linting)
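If you are unsure which extras are present in a given environment, a quick availability check can help. This is a generic Python sketch using the dependency names from the list above, not a pulse-sdk API:

```python
import importlib.util

# Upstream dependencies behind each optional feature set (from the list above)
features = {
    "analysis": ["numpy", "pandas", "sklearn"],
    "visualization": ["matplotlib", "seaborn"],
    "nlp": ["textblob"],
    "caching": ["diskcache"],
    "progress": ["tqdm"],
}

for feature, modules in features.items():
    available = all(importlib.util.find_spec(m) is not None for m in modules)
    print(f"{feature}: {'available' if available else 'missing'}")
```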
Get the repository and install it in editable mode with developer dependencies:

```bash
git clone https://github.com/researchwiseai/pulse-py.git
cd pulse-py
python -m venv venv          # create a virtual environment (optional but recommended)
source venv/bin/activate     # on Windows use `venv\Scripts\activate`
pip install -e ".[dev]"      # install pulse-sdk plus dev tools (pytest, black, ruff, etc.)
pre-commit install           # set up formatting/linting on commit
```

📖 Need help choosing? See our complete installation guide for detailed explanations, troubleshooting, and version compatibility.
Once installed, you can quickly try out the core and DSL APIs.
```python
from pulse.core.client import CoreClient

# Basic usage
client = CoreClient()
emb = client.create_embeddings(["Hello world", "Goodbye"], fast=True)
print(emb.embeddings)
print("total usage:", emb.usage_total)

# Submit a long-running job asynchronously
job = client.create_embeddings(["foo"] * 300, fast=False, await_job_result=False)
result = job.wait()
```

Secure your requests by providing an OAuth2 auth object to CoreClient:
```python
from pulse.core.client import CoreClient
from pulse.auth import ClientCredentialsAuth, AuthorizationCodePKCEAuth

# Client Credentials flow
auth = ClientCredentialsAuth()  # automatically reads PULSE_CLIENT_ID and PULSE_CLIENT_SECRET
client = CoreClient(auth=auth)
resp = client.create_embeddings(["Hello world", "Goodbye"])  # request includes the Authorization header

# Authorization Code flow with PKCE: interactive login; requires a local webserver
auth = AuthorizationCodePKCEAuth()  # lets users log in with their RWAI account
client = CoreClient(auth=auth)
resp = client.create_embeddings(["Hello world", "Goodbye"])
```
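Since ClientCredentialsAuth reads its credentials from the environment, local experiments can set them in-process before constructing the auth object. A minimal sketch with placeholder values (prefer a real secrets store in production):

```python
import os

# Placeholder credentials; ClientCredentialsAuth picks these up automatically
os.environ.setdefault("PULSE_CLIENT_ID", "your_client_id")
os.environ.setdefault("PULSE_CLIENT_SECRET", "your_client_secret")
```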
All feature responses include usage information when available:

```python
resp = client.create_embeddings(["Hello world"], fast=True)
print(resp.usage_total)
for record in resp.usage.records:
    print(record.feature, record.units)
```

Generate a summary with the high-level starter:

```python
from pulse.starters import summarize

# Works with a list of strings or a file path
summary = summarize("reviews.txt", question="What do people think?")
print(summary.summary)
```

Or call the core client directly:
```python
from pulse.core.client import CoreClient

client = CoreClient()
resp = client.generate_summary(
    ["Great food, slow service"],
    "What do diners mention?",
    length="short",       # optional
    preset="five-point",  # optional
    fast=True,
)
print(resp.summary)
```

Cluster texts with the high-level starter:
```python
from pulse.starters import cluster_analysis

# Cluster comments from a CSV file into two groups
clusters = cluster_analysis("reviews.csv", k=2)
print(clusters.clusters)
```

Or cluster with the core client:
```python
from pulse.core.client import CoreClient

client = CoreClient()
resp = client.cluster_texts(
    ["Good", "Bad", "Okay"],
    k=2,
    algorithm="skmeans",  # optional
    fast=True,
)
print(resp.clusters)
```

Extract elements from texts:
```python
client = CoreClient()
resp = client.extract_elements(
    texts=["The food was great and the service was slow."],
    categories=["food", "service"],
    dictionary={"food": ["food"], "service": ["service"]},  # optional
    use_ner=True,   # optional
    use_llm=False,  # optional
    fast=True,
)
print(resp.columns)
print(resp.matrix)
```

Poll a long-running job manually:
```python
import time

client = CoreClient()
job = client.analyze_sentiment(["hello"], fast=False, await_job_result=False)
while True:
    status = client.get_job_status(job.id)
    if status.status == "completed":
        result = client.client.get(status.result_url).json()
        break
    time.sleep(1)
print(result)
```

`Job.result()` is an alias for `wait()` if you prefer a blocking call.
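For example, these two calls are interchangeable:

```python
result = job.wait()    # blocks until the job finishes and returns its result
result = job.result()  # alias for wait()
```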
Orchestrate multi-step analyses with the high-level Analyzer:

```python
from pulse.analysis.analyzer import Analyzer
from pulse.analysis.processes import ThemeGeneration, SentimentProcess

texts = ["I love pizza", "I hate rain"]
processes = [ThemeGeneration(min_themes=2), SentimentProcess()]

with Analyzer(dataset=texts, processes=processes, cache_dir=".pulse_cache") as az:
    results = az.run()

print(results.theme_generation.to_dataframe())
print(results.sentiment.summary())
```

Or compose the same pipeline with the DSL:
```python
from pulse.dsl import Workflow

# Example dataset
texts = ["I love pizza", "I hate rain"]

# Define lifecycle callbacks
def on_run_start():
    print("Workflow starting")

def on_process_start(process_id):
    print(f"Starting process: {process_id}")

def on_process_end(process_id, result):
    print(f"Finished process: {process_id}, result: {result}")

def on_run_end():
    print("Workflow finished")

# Build and run the workflow
wf = (
    Workflow()
    .source("docs", texts)
    .theme_generation(source="docs", min_themes=2)
    .sentiment(source="docs")
    .monitor(
        on_run_start=on_run_start,
        on_process_start=on_process_start,
        on_process_end=on_process_end,
        on_run_end=on_run_end,
    )
)
results = wf.run()

# Access results
print(results.theme_generation.themes)
print(results.sentiment.sentiments)
```

Several calls accept optional tuning parameters (a usage sketch follows this list):

- `context` – provide additional context or focus for `generate_themes`.
- `version` – lock API calls (e.g., `analyze_sentiment`, `generate_themes`) to a specific model version.
- `algorithm` – choose the clustering algorithm in `cluster_texts` / `cluster_analysis`.
- `length` and `preset` – control output style in `generate_summary`.
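A minimal sketch of passing these options. The `generate_summary` call mirrors the example above; the positional arguments and version string in the `generate_themes` call are illustrative assumptions, not confirmed signatures:

```python
from pulse.core.client import CoreClient

client = CoreClient()

# Hypothetical values for illustration only
themes = client.generate_themes(
    ["I love pizza", "I hate rain"],
    context="casual remarks about food and weather",  # steer theme generation
    version="2024-06-01",                             # pin a model version
)
summary = client.generate_summary(
    ["Great food, slow service"],
    "What do diners mention?",
    length="short",
    preset="five-point",
)
```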
You can find Jupyter notebooks demonstrating both the high-level and DSL APIs under the examples/ directory:

```bash
jupyter notebook examples/high_level_api.ipynb
jupyter notebook examples/dsl_api.ipynb
```

For authenticated access and test recording/playback, configure the following environment variables:
- `PULSE_CLIENT_ID`: your OAuth2 client ID (e.g., Auth0 client ID).
- `PULSE_CLIENT_SECRET`: your OAuth2 client secret.
- `PULSE_TOKEN_URL` (optional): OAuth2 token endpoint URL. Defaults to `https://{AUTH_DOMAIN}/oauth/token`.
- `PULSE_AUDIENCE` (optional): API audience URL. Defaults to env-based config (see below).
- `PULSE_BASE_URL` (optional): API base URL. Defaults to env-based config (see below).
- `PULSE_AUTH_DOMAIN` (optional): Auth0 domain. Defaults to `research-wise-ai-eu.eu.auth0.com`.
Default configuration uses production endpoints:

- `PULSE_BASE_URL=https://pulse.researchwiseai.com/v1`
- `PULSE_AUDIENCE=https://core.researchwiseai.com/pulse/v1`
- `PULSE_AUTH_DOMAIN=research-wise-ai-eu.eu.auth0.com`
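These defaults behave like ordinary environment-variable fallbacks. A minimal sketch of that pattern, using the default values listed above (the SDK's actual config loading may differ):

```python
import os

# Fall back to the production endpoints listed above when no override is set
base_url = os.getenv("PULSE_BASE_URL", "https://pulse.researchwiseai.com/v1")
audience = os.getenv("PULSE_AUDIENCE", "https://core.researchwiseai.com/pulse/v1")
auth_domain = os.getenv("PULSE_AUTH_DOMAIN", "research-wise-ai-eu.eu.auth0.com")
token_url = os.getenv("PULSE_TOKEN_URL", f"https://{auth_domain}/oauth/token")
```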
In local development, you can export these variables:
```bash
export PULSE_CLIENT_ID="your_client_id"
export PULSE_CLIENT_SECRET="your_client_secret"

# Optional: override default endpoints
export PULSE_BASE_URL="https://your-custom-endpoint.com/v1"
```

In CI (e.g., GitHub Actions), add these values as repository secrets and reference them in your workflow:
```yaml
env:
  PULSE_CLIENT_ID: ${{ secrets.PULSE_CLIENT_ID }}
  PULSE_CLIENT_SECRET: ${{ secrets.PULSE_CLIENT_SECRET }}
```

Use Python 3.8+ and a virtual environment.
- Create and activate a virtual environment:

```bash
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate
```

- Install dependencies (SDK + dev tools):

```bash
pip install -e ".[dev]"
```

- Install pre-commit hooks:

```bash
pre-commit install
pre-commit install --hook-type commit-msg
# optional: run once on all files
pre-commit run --all-files
```

- Run tests:

```bash
make test
# or
pytest
```

- Re-record HTTP cassettes when needed:

```bash
make vcr-record
```

- Formatting and linting:

```bash
black .
nbqa black .
ruff check pulse tests
```

- Security scanning:

```bash
# Run comprehensive security scans
./scripts/security-scan.sh

# Or run individual tools
bandit -r pulse --exclude pulse/core/.ipynb_checkpoints --skip B101,B110,B105,B311,B403,B601
pip-audit --format=columns
```

Note: for onboarding, see First-Time Setup above.
- Use Python 3.8+.
- Create and activate a virtual environment, then install dev dependencies:

```bash
python -m venv .venv
source .venv/bin/activate  # Windows: .venv\Scripts\activate
pip install -e ".[dev]"
```

- Install pre-commit hooks (auto-runs formatters/linters on commit):

```bash
pre-commit install
pre-commit install --hook-type commit-msg
# optional: run hooks on all files once
pre-commit run --all-files
```
This project uses Conventional Commits for automated changelog generation. Please format your commit messages as:
```text
<type>[optional scope]: <description>

[optional body]

[optional footer(s)]
```

Types: `feat`, `fix`, `docs`, `style`, `refactor`, `perf`, `test`, `build`, `ci`, `chore`, `revert`
Examples:
- `feat: add sentiment analysis caching`
- `fix: handle network timeout in auth flow`
- `docs: update quick start guide`
- `feat!: change API response format` (breaking change)
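A complete message using the optional scope, body, and footer might look like this (scope, wording, and issue number are illustrative):

```text
feat(analysis): cache sentiment results on disk

Store SentimentProcess results in the cache directory so repeated runs
over the same dataset skip redundant API calls.

Closes #123
```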
See scripts/conventional-commits-guide.md for detailed guidance.
- Format Python: `black .` (configured to line length 88)
- Format notebooks: `nbqa black .`
- Lint: `ruff check pulse tests`
- Note: these commands are also enforced by pre-commit.
- Run tests: `make test` (or directly `pytest`)
- Many tests require OAuth credentials. Set `PULSE_CLIENT_ID` and `PULSE_CLIENT_SECRET`; optionally `PULSE_TOKEN_URL` and `PULSE_AUDIENCE`.
- CI runs pytest with: `pytest -q --disable-warnings --maxfail=1 --vcr-record=none`
- Re-record all cassettes from scratch: `make vcr-record`
Build distribution artifacts with `python -m build`.

- Keep changes backward compatible with existing models and APIs.
- Avoid committing large datasets or generated notebook outputs.
Feel free to open issues or submit pull requests at the GitHub repo.
This project is licensed under the MIT License. See LICENSE for details.