
pls

Just say what you want. In your terminal.



$ pls "compress all PNGs in this folder"

╭───────────────────────────────────────────────────────────╮
│ find . -name "*.png" -exec pngquant --quality=65-80 {} \; │
╰───────────────────────────────────────────────────────────╯

 Run it? (Y/n)

✓ Done.

Install

pip install pls-sh

That's it. The command is pls.

Usage

pls "find files bigger than 100MB"
pls "kill whatever is using port 3000"
pls "convert video.mp4 to gif"
pls "show disk usage sorted by size"
pls "rename all .jpeg files to .jpg"

Works offline by default with Ollama. No API key, no internet, no telemetry.

Flags

pls "do something" --explain        # also explains what the command does
pls "do something" --yes            # skip confirmation, just run it
pls "do something" --dry-run        # show command but don't run it
pls "do something" --provider openai
pls "do something" --model gpt-4o
pls --last                          # show the last generated command
echo "do something" | pls           # pipe from stdin

Safety

pls flags dangerous commands before running them. Stuff like rm -rf, chmod 777, dd, piping random scripts into bash — all gets highlighted in red with a warning. Dangerous commands flip the confirmation to opt-in (y/N instead of Y/n).
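The danger check can be pictured as matching the generated command against a small set of risky patterns. This is a hedged sketch of the idea, not pls's actual rule set; the patterns below are illustrative:

```python
import re

# Illustrative patterns only; the real pls rule set may differ.
DANGEROUS_PATTERNS = [
    r"\brm\s+-[a-zA-Z]*[rf][a-zA-Z]*\b",   # rm -rf and variants
    r"\bchmod\s+777\b",                    # world-writable permissions
    r"\bdd\b",                             # raw disk writes
    r"curl[^|]*\|\s*(ba)?sh",              # piping a downloaded script into a shell
    r"wget[^|]*\|\s*(ba)?sh",
]

def is_dangerous(command: str) -> bool:
    """Return True if the command matches any known-risky pattern."""
    return any(re.search(p, command) for p in DANGEROUS_PATTERNS)
```

A command that trips any pattern gets the red warning and the opt-in y/N prompt.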

Press e at the confirmation prompt to edit the command before running it.

Providers

Ollama is the default. Runs locally, no account needed.

# make sure ollama is running
ollama serve
ollama pull qwen3.5:2b

pls "list all docker containers"
# just works

LM Studio, llama.cpp, or any OpenAI-compatible server:

# LM Studio (runs on port 1234 by default)
pls config set default provider lmstudio

# any OpenAI-compatible endpoint (llama.cpp, vLLM, OpenRouter, etc.)
pls config set custom url http://localhost:8080
pls config set custom model my-model
pls config set custom api_key sk-...          # optional
pls config set default provider custom
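All of these servers speak the standard OpenAI chat-completions format, so the request pls would need to send looks roughly like the payload below. This is a hand-built sketch of that wire format, not pls's internals; the model name and prompt wording are assumptions:

```python
import json

# The chat-completions payload an OpenAI-compatible server expects.
# Model name and prompt wording here are illustrative, not pls's own.
payload = {
    "model": "my-model",
    "messages": [
        {"role": "system", "content": "Reply with a single shell command only."},
        {"role": "user", "content": "list all docker containers"},
    ],
    "temperature": 0,
}

body = json.dumps(payload)
# POST this body to <url>/v1/chat/completions, with an
# Authorization: Bearer <api_key> header if one is configured.
```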

OpenAI and Anthropic if you want cloud models:

# either set env vars
export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-ant-...

# or save them in config
pls config set openai api_key sk-...
pls config set anthropic api_key sk-ant-...

# then use them
pls "do something" --provider openai
pls "do something" --provider anthropic

# or set a default
pls config set default provider anthropic

Config

Config lives in ~/.config/pls/config.toml.

pls config show          # see current config
pls config set ...       # change a value
pls config reset         # back to defaults
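Given the `pls config set <section> <key> <value>` commands above, the file plausibly has one TOML table per provider. The exact layout below is an assumption inferred from those commands, not taken from pls's source:

```toml
[default]
provider = "custom"

[custom]
url = "http://localhost:8080"
model = "my-model"
api_key = "sk-..."

[openai]
api_key = "sk-..."
```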

How it works

  1. You type what you want in plain English (or any language, really)
  2. pls grabs context — your OS, shell, what's in the current directory
  3. Sends that + your request to the LLM
  4. Shows you the command, asks for confirmation
  5. Runs it
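Steps 1–3 amount to bundling a little local context with your request before it goes to the model. A minimal sketch of that idea (not pls's actual code; the prompt wording and context fields are assumptions):

```python
import os
import platform

def build_prompt(request: str) -> str:
    """Bundle OS, shell, and current-directory context with the user's request."""
    context = {
        "os": platform.system(),                      # e.g. "Linux" or "Darwin"
        "shell": os.environ.get("SHELL", "unknown"),  # e.g. "/bin/zsh"
        "cwd_files": sorted(os.listdir("."))[:20],    # a peek at the directory
    }
    return (
        f"You are a shell assistant on {context['os']} using {context['shell']}.\n"
        f"Files here: {', '.join(context['cwd_files'])}\n"
        f"Task: {request}\n"
        "Reply with a single shell command."
    )
```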

No history stored, no data sent anywhere (unless you use OpenAI/Anthropic).

License

MIT
