🌟 Prism: Programming with Confidence in an Uncertain World

npm version TypeScript License: MIT

A programming language where uncertainty is a first-class citizen.

Documentation • Quick Start • Packages • Examples • Contributing


📦 Important: Package Migration

If you're using prism-uncertainty, please migrate to @prism-lang/core

npm uninstall prism-uncertainty
npm install @prism-lang/core

🚀 Quick Start

Install using your preferred package manager:

# npm
npm install @prism-lang/core
npm install @prism-lang/confidence  # optional

# yarn
yarn add @prism-lang/core
yarn add @prism-lang/confidence     # optional

# pnpm
pnpm add @prism-lang/core
pnpm add @prism-lang/confidence     # optional

# Install CLI globally
npm install -g @prism-lang/cli      # or yarn/pnpm

🎨 VS Code Extension

Get syntax highlighting and language support for VS Code:

# Download and install the extension
curl -L https://github.com/HaruHunab1320/Prism-TS/releases/download/v0.1.0/prism-lang-0.1.0.vsix -o prism-lang.vsix
code --install-extension prism-lang.vsix
rm prism-lang.vsix

Features:

  • ✨ Full syntax highlighting for all Prism features
  • 🎨 Semantic colors for confidence operators
  • 🌈 Light and dark themes optimized for Prism
  • 📝 Auto-indentation and bracket matching

Your First Prism Program

Using the CLI (Recommended)

Create a file hello.prism:

// hello.prism
name = "World"
greeting = llm("Create a friendly greeting for ${name}")

console.log(greeting)

// Make decisions based on confidence
response = llm("Should we proceed?") ~> 0.75
uncertain if (response) {
  high { console.log("✅ Proceeding with confidence!") }
  medium { console.log("⚠️ Proceeding with caution...") }
  low { console.log("❌ Too uncertain, aborting.") }
}

Run it:

# Execute a Prism file
prism run hello.prism

# Or use the REPL for interactive development
prism

# Evaluate expressions directly
prism eval "2 + 2 ~> 0.99"

Using as a TypeScript Library

import { parse, createRuntime } from '@prism-lang/core';

const code = `
  // AI responses with confidence
  analysis = llm("Is this secure?") ~> 0.85
  
  // Confidence-aware decisions
  uncertain if (analysis) {
    high { deploy() }
    medium { review() }
    low { abort() }
  }
`;

const ast = parse(code);
const runtime = createRuntime();
const result = await runtime.execute(ast);

📚 Packages

Prism is organized as a monorepo with focused, modular packages:

Package Description Version
@prism-lang/core Core language implementation (parser, runtime, types) npm
@prism-lang/confidence Confidence extraction from LLMs and other sources npm
@prism-lang/llm LLM provider integrations (Claude, Gemini, OpenAI) npm
@prism-lang/cli Command-line interface npm
@prism-lang/repl Interactive REPL npm

✨ Why Prism?

Every AI application deals with uncertainty, but traditional languages pretend it doesn't exist. Prism makes uncertainty explicit and manageable.

🎯 Uncertainty as a First-Class Citizen

// Traditional approach: Uncertainty is hidden
result = llm_call()
if (result) { /* hope for the best */ }

// Prism: Uncertainty is explicit
result = llm_call() ~> 0.7
uncertain if (result) {
  high { proceed_with_confidence() }
  medium { add_human_review() }
  low { need_more_data() }
}

🧠 Built for the AI Era

// Ensemble multiple models with confidence
claude_says = llm("Analyze risk", model: "claude") ~> 0.9
gpt_says = llm("Analyze risk", model: "gpt4") ~> 0.8
gemini_says = llm("Analyze risk", model: "gemini") ~> 0.7

// Automatically use highest confidence result
best_analysis = claude_says ~||> gpt_says ~||> gemini_says

// Confidence-aware null coalescing
decision = best_analysis ~?? fallback_analysis ~?? "manual_review"
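The ensemble and coalescing operators above can be modeled in ordinary TypeScript. The sketch below is illustrative only: the `Tagged` shape, the function names, and the 0.5 fallback threshold are assumptions, not the actual Prism runtime semantics.

```typescript
// Illustrative model of a confidence-tagged value (hypothetical, not the real runtime type).
interface Tagged<T> { value: T; confidence: number }

// ~||> : pick whichever operand carries the higher confidence.
function ensemble<T>(a: Tagged<T>, b: Tagged<T>): Tagged<T> {
  return a.confidence >= b.confidence ? a : b;
}

// ~?? : keep the left side unless its confidence falls below a threshold
// (0.5 is an assumed cutoff for illustration).
function coalesce<T>(a: Tagged<T>, fallback: Tagged<T>, threshold = 0.5): Tagged<T> {
  return a.confidence >= threshold ? a : fallback;
}

const claude = { value: "low risk", confidence: 0.9 };
const gpt4 = { value: "medium risk", confidence: 0.8 };

const best = ensemble(claude, gpt4);
console.log(best.value, best.confidence); // prints: low risk 0.9
```

Chaining `ensemble` left to right reproduces the `a ~||> b ~||> c` pattern: the highest-confidence result survives.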

📊 Confidence Extraction Made Easy

With @prism-lang/confidence:

import { confidence } from '@prism-lang/confidence';

// Extract confidence from any LLM response
const response = await llm("Is this safe?");
const conf = await confidence.extract(response);

// Multiple strategies available
const ensemble = await confidence.fromConsistency(
  () => llm("Analyze this"),
  { samples: 5 }
);

// Domain-specific calibration
const calibrated = await confidence.calibrators.security
  .calibrate(conf, { type: 'sql_injection' });
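Consistency-based extraction amounts to sampling the same prompt several times and treating agreement as confidence. Here is a minimal self-contained sketch of that idea; the signature mirrors the snippet above but is illustrative, so consult the actual @prism-lang/confidence API before relying on it.

```typescript
// Sketch of consistency-based confidence: ask the same question repeatedly
// and use the majority answer's share of the samples as the confidence.
// `sample` stands in for an LLM call.
async function fromConsistency(
  sample: () => Promise<string>,
  samples = 5
): Promise<{ value: string; confidence: number }> {
  const counts = new Map<string, number>();
  for (let i = 0; i < samples; i++) {
    const answer = await sample();
    counts.set(answer, (counts.get(answer) ?? 0) + 1);
  }
  // Most frequent answer wins; its share of the samples is the confidence.
  let best = "";
  let bestCount = 0;
  counts.forEach((count, answer) => {
    if (count > bestCount) { best = answer; bestCount = count; }
  });
  return { value: best, confidence: bestCount / samples };
}
```

With 5 samples answering "yes, yes, no, yes, yes", this yields `{ value: "yes", confidence: 0.8 }`.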

🔧 Language Features

Confidence Operators

  • ~> - Assign confidence
  • <~ - Extract confidence
  • ~*, ~/, ~+, ~- - Confidence-preserving arithmetic
  • ~==, ~!=, ~>, ~< - Confidence comparisons
  • ~&&, ~|| - Confidence logical operations
  • ~?? - Confidence null coalescing
  • ~||> - Parallel confidence (ensemble)
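One way to picture confidence-preserving arithmetic: the value combines as usual, while the confidences combine multiplicatively (treating the operands' uncertainties as independent). That propagation rule is an assumption made for illustration; Prism's actual semantics may differ.

```typescript
// Hypothetical model of a confidence-carrying number.
interface CValue { value: number; confidence: number }

// ~+ : ordinary addition of values; confidences multiply (assumed rule).
function cAdd(a: CValue, b: CValue): CValue {
  return { value: a.value + b.value, confidence: a.confidence * b.confidence };
}

// ~* : ordinary multiplication of values; same confidence rule.
function cMul(a: CValue, b: CValue): CValue {
  return { value: a.value * b.value, confidence: a.confidence * b.confidence };
}

const x = { value: 2, confidence: 0.5 };
const y = { value: 3, confidence: 0.5 };
console.log(cMul(x, y)); // { value: 6, confidence: 0.25 }
```

Note how confidence only ever shrinks under this rule: chaining many uncertain operations produces an appropriately uncertain result.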

Control Flow

// Uncertain conditionals
uncertain if (measurement) {
  high { /* >70% confidence */ }
  medium { /* 30-70% confidence */ }
  low { /* <30% confidence */ }
}

// Uncertain loops
uncertain while (condition) {
  confident { /* >70% */ }
  attempt { /* 30-70% */ }
  abort { /* <30% */ }
}

Modern Language Features

  • First-class functions and lambdas
  • Pattern matching with uncertainty
  • Async/await with confidence propagation
  • Destructuring with confidence preservation
  • Type checking with typeof and instanceof

πŸ› οΈ Development

Note: We use pnpm and Turborepo for development. You'll need pnpm installed to contribute.

# Clone the repository
git clone https://github.com/HaruHunab1320/Prism-TS.git
cd Prism-TS

# Install pnpm if you don't have it
npm install -g pnpm

# Install dependencies
pnpm install

# Build all packages
pnpm build

# Run tests
pnpm test

# Start development mode
pnpm dev

📦 Publishing Packages

We use changesets to manage versioning and publishing. This ensures all packages stay in sync and peer dependencies are correctly managed.

Release Workflow

  1. Make your changes and commit them

  2. Create a changeset to describe your changes:

    pnpm changeset
    # or
    pnpm release:create
    • Select which packages changed
    • Choose the bump type (patch/minor/major)
    • Write a description for the changelog
  3. Check what will be released:

    pnpm release:check
  4. Version the packages (updates package.json files and changelogs):

    pnpm release:version

    This automatically commits the version changes.

  5. Publish to npm:

    pnpm release:publish

    This builds all packages, publishes them, and pushes git tags.

Important Notes

  • Never use pnpm publish directly - it won't handle workspace protocols correctly
  • All @prism-lang/* packages use fixed versioning - they move together
  • Changesets automatically handles peer dependency version updates
  • The workspace:* protocol is used for local development and automatically replaced during publishing

For Users vs Contributors

Users: Install our packages with any package manager (npm, yarn, pnpm)

npm install @prism-lang/core    # Works with npm, yarn, or pnpm!

Contributors: Development requires pnpm for workspace management

pnpm install              # Must use pnpm for development

Repository Structure

prism/
├── packages/
│   ├── prism-core/        # Core language implementation
│   ├── prism-confidence/  # Confidence extraction library
│   └── prism-llm/         # LLM provider integrations
├── apps/
│   ├── cli/               # Command-line interface
│   └── repl/              # Interactive REPL
├── examples/              # Example Prism programs
├── docs/                  # Documentation
├── pnpm-workspace.yaml    # pnpm workspace configuration
└── turbo.json             # Turborepo configuration

📖 Documentation

📚 Full Documentation

🌟 Examples

AI Safety Analysis

code = read_file("user_submission.py")
safety = llm("Analyze for vulnerabilities: " + code)

uncertain if (safety) {
  high { 
    deploy_to_production()
    log("Deployed with confidence: " + (<~ safety))
  }
  medium {
    results = run_sandboxed_tests(code)
    if (results.pass) { deploy_to_staging() }
  }
  low {
    send_to_security_team(code, safety)
  }
}

Multi-Model Consensus

question = "Will it rain tomorrow?"

// Get predictions from multiple sources
weather_api = fetch_weather_api() ~> 0.8
model1 = llm(question, model: "claude") ~> 0.9  
model2 = llm(question, model: "gemini") ~> 0.85
local_sensors = analyze_pressure() ~> 0.7

// Combine predictions with confidence weighting
consensus = (weather_api ~+ model1 ~+ model2 ~+ local_sensors) ~/ 4

uncertain if (consensus) {
  high { "Definitely bring an umbrella! ☔" }
  medium { "Maybe pack a raincoat 🧥" }
  low { "Enjoy the sunshine! ☀️" }
}
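For comparison, the consensus step can also weight each source's prediction by its confidence rather than averaging evenly. The TypeScript below is an illustrative scheme, not what the Prism `~+`/`~/` example above computes (that one is a plain average).

```typescript
// Hypothetical prediction shape for this sketch.
interface Prediction { rainProbability: number; confidence: number }

// Confidence-weighted consensus: each source counts in proportion to its
// confidence; the combined confidence is the average of the inputs'.
function consensus(sources: Prediction[]): Prediction {
  const totalWeight = sources.reduce((s, p) => s + p.confidence, 0);
  const weighted = sources.reduce((s, p) => s + p.rainProbability * p.confidence, 0);
  return {
    rainProbability: weighted / totalWeight,
    confidence: totalWeight / sources.length,
  };
}

const result = consensus([
  { rainProbability: 0.5, confidence: 0.5 },
  { rainProbability: 1.0, confidence: 0.5 },
]);
console.log(result); // { rainProbability: 0.75, confidence: 0.5 }
```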

🤝 Contributing

We welcome contributions! See our Contributing Guide for details.

Key areas for contribution:

  • Language features and operators
  • Confidence extraction strategies
  • LLM provider integrations
  • Documentation and examples
  • Testing and benchmarks

📄 License

MIT - See LICENSE for details.


Built with ❤️ for the uncertain future of programming

Report Bug • Request Feature • Join Discussion
