
Commit 437ff82

docs: Update README to improve clarity on usage examples, installation instructions, and prerequisites for llmscript
1 parent bd47c4f · commit 437ff82

1 file changed: README.md (+31 −19 lines)
````diff
@@ -14,7 +14,7 @@ You can configure it to use any LLM, such as [Ollama](https://ollama.com/) or [C
 
 ## Example
 
-```sh
+```
 #!/usr/bin/env llmscript
 
 Create an output directory, `output`.
````
````diff
@@ -25,7 +25,7 @@ For every PNG file in `input`:
 
 Running it with a directory of PNG files would look like this:
 
-```shell
+```
 $ ./convert-pngs
 Creating output directory
 Converting input/1.png
````
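The sample output above comes from a generated script the diff never shows. As a rough illustration only — not llmscript's actual output — here is one plausible shape of such a feature script; the `cp` conversion placeholder and the failure-tolerant `pngcrush` call are assumptions:

```shell
#!/bin/bash
# Illustrative sketch of a feature script llmscript might generate for the
# convert-pngs example. Directory names come from the README; the copy step
# stands in for whatever conversion the LLM actually emits.
set -u

echo "Creating output directory"
mkdir -p output

for f in input/*.png; do
  [ -e "$f" ] || continue                       # no PNGs present
  out="output/$(basename "$f")"
  echo "Converting $f"
  cp "$f" "$out"
  if command -v pngcrush >/dev/null 2>&1; then
    echo "Running pngcrush on $out"
    pngcrush -ow "$out" >/dev/null 2>&1 || true  # tolerate failures on sample data
  fi
done

echo "Done!"
```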
````diff
@@ -37,38 +37,50 @@ Running pngcrush on output/3.png
 Done!
 ```
 
-<details>
-<summary>Show intermediate steps</summary>
+## Prerequisites
+
+- [Go](https://go.dev/) (1.22 or later)
+- One of:
+  - [Ollama](https://ollama.com/) running locally
+  - A [Claude](https://www.anthropic.com/claude) API key
+  - An [OpenAI](https://openai.com/) API key
 
-# TODO
+## Installation
+
+```
+go install github.com/statico/llmscript@latest
+```
 
-</details>
+By default, llmscript uses Ollama with the `llama3.2` model. To change this, run `llmscript --write-config`, which writes a config file to `~/.config/llmscript/config.yaml` that you can then edit. You can also use command-line args (see below).
````
````diff
 
 ## How it works
 
 Given a script description written in natural language, llmscript works by:
 
-1. Generating two scripts:
-   - The feature script that implements the functionality
-   - A test script that verifies the feature script works correctly
-2. Making both scripts executable
-3. Running the test script, which will:
-   - Set up any necessary test environment
-   - Run the feature script
-   - Verify the output and state
-4. If the test fails, the LLM will fix both scripts and try again
-5. Once tests pass, the scripts are cached for future use (in `~/.config/llmscript/cache`)
+1. Generating a feature script that implements the functionality
+2. Generating a test script that verifies the feature script works correctly
+3. Running the test script, fixing the feature script if it fails, and going back to step 1 if the tests fail too many times
+4. Caching the scripts for future use
+5. Running the feature script with any additional arguments you provide
````
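The five added steps can be sketched in shell. This is a hand-written illustration of the loop, not the tool's actual Go implementation; the `generate_*` functions are hypothetical stand-ins for LLM calls that emit the README's hello-world example as canned output:

```shell
#!/bin/bash
# Sketch of the generate/test/fix loop described in the new "How it works" list.
MAX_ATTEMPTS=3

generate_feature_script() {      # stand-in for an LLM call
  printf '#!/bin/bash\necho "Hello, world!"\n' > script.sh
  chmod +x script.sh
}

generate_test_script() {         # stand-in for an LLM call
  printf '#!/bin/bash\n[ "$(./script.sh)" = "Hello, world!" ] || exit 1\n' > test.sh
  chmod +x test.sh
}

generate_feature_script          # step 1
generate_test_script             # step 2
for attempt in $(seq 1 "$MAX_ATTEMPTS"); do
  if ./test.sh; then             # step 3: run the test script
    echo "Tests passed"         # step 4 would cache both scripts here
    break
  fi
  generate_feature_script        # test failed: ask the LLM for a fix and retry
done
./script.sh "$@"                 # step 5: run with the user's arguments
```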
````diff
 
-For example, if you write "Print hello world", llmscript might generate:
+For example, given a simple hello world script:
+
+```
+#!/usr/bin/env llmscript
+
+Print hello world
+```
+
+llmscript might generate the following feature script:
 
 ```bash
-# Feature script (script.sh)
 #!/bin/bash
 echo "Hello, world!"
 ```
 
+...and the following test script to test it:
+
 ```bash
-# Test script (test.sh)
 #!/bin/bash
 [ "$(./script.sh)" = "Hello, world!" ] || exit 1
 ```
````
