README.md (+31 −19)
@@ -14,7 +14,7 @@ You can configure it to use any LLM, such as [Ollama](https://ollama.com/) or [C
 
 ## Example
 
-```sh
+```
 #!/usr/bin/env llmscript
 
 Create an output directory, `output`.
@@ -25,7 +25,7 @@ For every PNG file in `input`:
 
 Running it with a directory of PNG files would look like this:
 
-```shell
+```
 $ ./convert-pngs
 Creating output directory
 Converting input/1.png
@@ -37,38 +37,50 @@ Running pngcrush on output/3.png
 Done!
 ```
 
-<details>
-<summary>Show intermediate steps</summary>
+## Prerequisites
+
+- [Go](https://go.dev/) (1.22 or later)
+- One of:
+  - [Ollama](https://ollama.com/) running locally
+  - A [Claude](https://www.anthropic.com/claude) API key
+  - An [OpenAI](https://openai.com/) API key
 
-# TODO
+## Installation
+
+```
+go install github.com/statico/llmscript@latest
+```
 
-</details>
+By default, llmscript will use Ollama with the `llama3.2` model. You can configure this by running `llmscript --write-config` to create a config file at `~/.config/llmscript/config.yaml`, which you can then edit. You can also use command-line arguments (see below).
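For example, the configuration workflow described above, using only the flag and path mentioned in that paragraph, looks like:

```sh
# Generate the default config, then edit it to point at your preferred LLM.
llmscript --write-config
${EDITOR:-vi} ~/.config/llmscript/config.yaml
```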
 
 ## How it works
 
 Given a script description written in natural language, llmscript works by:
 
-1. Generating two scripts:
-   - The feature script that implements the functionality
-   - A test script that verifies the feature script works correctly
-2. Making both scripts executable
-3. Running the test script, which will:
-   - Set up any necessary test environment
-   - Run the feature script
-   - Verify the output and state
-4. If the test fails, the LLM will fix both scripts and try again
-5. Once tests pass, the scripts are cached for future use (in `~/.config/llmscript/cache`)
+1. Generating a feature script that implements the functionality
+2. Generating a test script that verifies the feature script works correctly
+3. Running the test script to verify the feature script works correctly, fixing the feature script if necessary, and possibly going back to step 1 if the test script fails too many times
+4. Caching the scripts for future use
+5. Running the feature script with any additional arguments you provide
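A rough shell-style sketch of the loop described in the list above; the helper commands (`generate_feature`, `generate_tests`, `fix_feature`) are hypothetical placeholders for LLM calls, and llmscript's real implementation is not shown in this diff:

```sh
#!/bin/sh
# Illustrative sketch only; not llmscript's real code.
# $description holds the natural-language script body.
generate_feature "$description" > feature.sh   # 1. write the feature script
generate_tests "$description" > test.sh        # 2. write the test script
while ! sh test.sh; do                          # 3. run tests, fixing on failure
  fix_feature feature.sh test.sh                #    (after too many failures, go back to step 1)
done
cp feature.sh ~/.config/llmscript/cache/        # 4. cache for future runs
sh feature.sh "$@"                              # 5. run with your arguments
```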
 
-For example, if you write "Print hello world", llmscript might generate:
+For example, given a simple hello world script:
+
+```
+#!/usr/bin/env llmscript
+
+Print hello world
+```
+
+llmscript might generate the following feature script:
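As an illustration only, a generated feature script for this prompt could plausibly be as simple as the following (an assumption, not the actual output from the README, which is truncated above):

```sh
#!/bin/sh
# Hypothetical example of a generated feature script; the real output may differ.
echo "Hello, world"
```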