A programming language where uncertainty is a first-class citizen.
Documentation • Quick Start • Packages • Examples • Contributing
If you're using prism-uncertainty, please migrate to @prism-lang/core:
npm uninstall prism-uncertainty
npm install @prism-lang/core
Install using your preferred package manager:
# npm
npm install @prism-lang/core
npm install @prism-lang/confidence # optional
# yarn
yarn add @prism-lang/core
yarn add @prism-lang/confidence # optional
# pnpm
pnpm add @prism-lang/core
pnpm add @prism-lang/confidence # optional
# Install CLI globally
npm install -g @prism-lang/cli # or yarn/pnpm
Get syntax highlighting and language support for VS Code:
# Download and install the extension
curl -L https://github.com/HaruHunab1320/Prism-TS/releases/download/v0.1.0/prism-lang-0.1.0.vsix -o prism-lang.vsix
code --install-extension prism-lang.vsix
rm prism-lang.vsix
Features:
- Full syntax highlighting for all Prism features
- Semantic colors for confidence operators
- Light and dark themes optimized for Prism
- Auto-indentation and bracket matching
Create a file hello.prism:
// hello.prism
name = "World"
greeting = llm("Create a friendly greeting for ${name}")
console.log(greeting)
// Make decisions based on confidence
response = llm("Should we proceed?") ~> 0.75
uncertain if (response) {
high { console.log("✅ Proceeding with confidence!") }
medium { console.log("⚠️ Proceeding with caution...") }
low { console.log("❌ Too uncertain, aborting.") }
}
Run it:
# Execute a Prism file
prism run hello.prism
# Or use the REPL for interactive development
prism
# Evaluate expressions directly
prism eval "2 + 2 ~> 0.99"
To use Prism as a library, import the parser and runtime from @prism-lang/core:
import { parse, createRuntime } from '@prism-lang/core';
const code = `
// AI responses with confidence
analysis = llm("Is this secure?") ~> 0.85
// Confidence-aware decisions
uncertain if (analysis) {
high { deploy() }
medium { review() }
low { abort() }
}
`;
const ast = parse(code);
const runtime = createRuntime();
const result = await runtime.execute(ast);
Prism is organized as a monorepo with focused, modular packages:
| Package | Description |
|---|---|
| @prism-lang/core | Core language implementation (parser, runtime, types) |
| @prism-lang/confidence | Confidence extraction from LLMs and other sources |
| @prism-lang/llm | LLM provider integrations (Claude, Gemini, OpenAI) |
| @prism-lang/cli | Command-line interface |
| @prism-lang/repl | Interactive REPL |
Every AI application deals with uncertainty, but traditional languages pretend it doesn't exist. Prism makes uncertainty explicit and manageable.
// Traditional approach: Uncertainty is hidden
result = llm_call()
if (result) { /* hope for the best */ }
// Prism: Uncertainty is explicit
result = llm_call() ~> 0.7
uncertain if (result) {
high { proceed_with_confidence() }
medium { add_human_review() }
low { need_more_data() }
}
// Ensemble multiple models with confidence
claude_says = llm("Analyze risk", model: "claude") ~> 0.9
gpt_says = llm("Analyze risk", model: "gpt4") ~> 0.8
gemini_says = llm("Analyze risk", model: "gemini") ~> 0.7
// Automatically use highest confidence result
best_analysis = claude_says ~||> gpt_says ~||> gemini_says
// Confidence-aware null coalescing
decision = best_analysis ~?? fallback_analysis ~?? "manual_review"
With @prism-lang/confidence:
import { confidence } from '@prism-lang/confidence';
// Extract confidence from any LLM response
const response = await llm("Is this safe?");
const conf = await confidence.extract(response);
// Multiple strategies available
const ensemble = await confidence.fromConsistency(
() => llm("Analyze this"),
{ samples: 5 }
);
// Domain-specific calibration
const calibrated = await confidence.calibrators.security
  .calibrate(conf, { type: 'sql_injection' });
Prism's confidence operators (combined in the sketch below):
- ~> - Assign confidence
- <~ - Extract confidence
- ~*, ~/, ~+, ~- - Confidence-preserving arithmetic
- ~==, ~!=, ~>, ~< - Confidence comparisons
- ~&&, ~|| - Confidence logical operations
- ~?? - Confidence null coalescing
- ~||> - Parallel confidence (ensemble)
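A minimal sketch of how these operators compose (variable names and prompts are illustrative; semantics follow the list above):
// Sketch: combining the operators above
estimate = llm("Estimate next month's revenue") ~> 0.8
adjusted = estimate ~* 1.1                      // confidence-preserving arithmetic
score = <~ adjusted                             // extract the resulting confidence
answer = adjusted ~?? "needs manual review"     // confidence-aware fallback
console.log("Confidence: " + score)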
// Uncertain conditionals
uncertain if (measurement) {
high { /* >70% confidence */ }
medium { /* 30-70% confidence */ }
low { /* <30% confidence */ }
}
// Uncertain loops
uncertain while (condition) {
confident { /* >70% */ }
attempt { /* 30-70% */ }
abort { /* <30% */ }
}
- First-class functions and lambdas
- Pattern matching with uncertainty
- Async/await with confidence propagation
- Destructuring with confidence preservation
- Type checking with typeof and instanceof (sketched below)
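A minimal sketch of a few of these features, assuming a JavaScript-like syntax for lambdas and typeof (illustrative only; see the Language Guide for the exact forms):
// Hypothetical sketch - lambda and typeof syntax assumed, not taken from the docs
classify = (text) => llm("Classify the sentiment: " + text) ~> 0.8
result = classify("The new release looks great")
if (typeof result == "string") {
console.log(result + " (confidence: " + (<~ result) + ")")
}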
Note: We use pnpm and Turborepo for development. You'll need pnpm installed to contribute.
# Clone the repository
git clone https://github.com/cjpais/prism.git
cd prism
# Install pnpm if you don't have it
npm install -g pnpm
# Install dependencies
pnpm install
# Build all packages
pnpm build
# Run tests
pnpm test
# Start development mode
pnpm dev
We use changesets to manage versioning and publishing. This ensures all packages stay in sync and peer dependencies are correctly managed.
1. Make your changes and commit them.
2. Create a changeset to describe your changes:
   pnpm changeset # or pnpm release:create
   - Select which packages changed
   - Choose the bump type (patch/minor/major)
   - Write a description for the changelog
3. Check what will be released:
   pnpm release:check
4. Version the packages (updates package.json files and changelogs):
   pnpm release:version
   This automatically commits the version changes.
5. Publish to npm:
   pnpm release:publish
   This builds all packages, publishes them, and pushes git tags.
- Never use pnpm publish directly - it won't handle workspace protocols correctly
- All @prism-lang/* packages use fixed versioning - they move together
- Changesets automatically handles peer dependency version updates
- The workspace:* protocol is used for local development and automatically replaced during publishing
Users: Install our packages with any package manager (npm, yarn, pnpm)
npm install @prism-lang/core # Works with npm, yarn, or pnpm!
Contributors: Development requires pnpm for workspace management
pnpm install # Must use pnpm for development
prism/
├── packages/
│   ├── prism-core/          # Core language implementation
│   ├── prism-confidence/    # Confidence extraction library
│   └── prism-llm/           # LLM provider integrations
├── apps/
│   ├── cli/                 # Command-line interface
│   └── repl/                # Interactive REPL
├── examples/                # Example Prism programs
├── docs/                    # Documentation
├── pnpm-workspace.yaml      # pnpm workspace configuration
└── turbo.json               # Turborepo configuration
- Getting Started - Quick start guide
- Language Guide - Complete language reference
- API Reference - All functions and operators
- Confidence Extraction - Using @prism-lang/confidence
- Examples - Real-world usage patterns
code = read_file("user_submission.py")
safety = llm("Analyze for vulnerabilities: " + code)
uncertain if (safety) {
high {
deploy_to_production()
log("Deployed with confidence: " + (<~ safety))
}
medium {
results = run_sandboxed_tests(code)
if (results.pass) { deploy_to_staging() }
}
low {
send_to_security_team(code, safety)
}
}
question = "Will it rain tomorrow?"
// Get predictions from multiple sources
weather_api = fetch_weather_api() ~> 0.8
model1 = llm(question, model: "claude") ~> 0.9
model2 = llm(question, model: "gemini") ~> 0.85
local_sensors = analyze_pressure() ~> 0.7
// Combine predictions with confidence weighting
consensus = (weather_api ~+ model1 ~+ model2 ~+ local_sensors) ~/ 4
uncertain if (consensus) {
high { "Definitely bring an umbrella! ☔" }
medium { "Maybe pack a raincoat 🧥" }
low { "Enjoy the sunshine! ☀️" }
}
We welcome contributions! See our Contributing Guide for details.
- Language features and operators
- Confidence extraction strategies
- LLM provider integrations
- Documentation and examples
- Testing and benchmarks
MIT - See LICENSE for details.
Built with ❤️ for the uncertain future of programming
Report Bug • Request Feature • Join Discussion