Cleanup and fixes after botched merge
francip committed Apr 8, 2023
1 parent 3118412 commit 9f75921
Showing 6 changed files with 27 additions and 220 deletions.
14 changes: 14 additions & 0 deletions README.md
@@ -12,6 +12,8 @@ This README will cover the following:

* [How to use the script](#how-to-use)

* [Supported Models](#supported-models)

* [Warning about running the script continuously](#continous-script-warning)
# How It Works<a name="how-it-works"></a>
The script works by running an infinite loop that does the following steps:
@@ -42,6 +44,18 @@ To use the script, you will need to follow these steps:

All optional values above can also be specified on the command line.

# Supported Models<a name="supported-models"></a>

This script works with all OpenAI models, as well as Llama through llama.cpp. The default model is **gpt-3.5-turbo**. To use a different model, specify it through `OPENAI_API_MODEL` or on the command line.

## Llama

Download the latest version of [Llama.cpp](https://github.com/ggerganov/llama.cpp) and follow the instructions to build it. You will also need the Llama model weights.

- **Under no circumstances share IPFS, magnet links, or any other links to model downloads anywhere in this repository, including in issues, discussions, or pull requests. They will be immediately deleted.**

After that, link `llama/main` to your llama.cpp `main` executable and `models` to the folder containing the Llama model weights. Then run the script with `OPENAI_API_MODEL=llama` or the `-l` argument.
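The linking step above can be sanity-checked before launch. The helper below is a hypothetical illustration (not part of the repository) that reports which of the expected paths are missing, assuming the `llama/main` and `models` layout described above:

```python
import os

def missing_llama_paths(base: str = ".") -> list:
    """Return which of the paths described above are missing.

    Hypothetical helper: assumes `llama/main` links to the llama.cpp
    binary and `models` points at the Llama weights folder.
    """
    required = [os.path.join(base, "llama", "main"),
                os.path.join(base, "models")]
    return [path for path in required if not os.path.exists(path)]
```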

# Warning<a name="continous-script-warning"></a>
This script is designed to be run continuously as part of a task management system. Running it continuously can result in high API usage, so please use it responsibly. The script also requires the OpenAI and Pinecone APIs to be configured correctly, so make sure both are set up before running it.
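As a minimal illustration of that setup check, a sketch like the following could verify the key variables before starting. The names `OPENAI_API_KEY` and `PINECONE_API_KEY` are assumptions about the configuration; adjust them to match your `.env`:

```python
import os

def missing_api_keys(env=None):
    """Return the names of required API-key variables that are unset.

    Assumes the script reads OPENAI_API_KEY and PINECONE_API_KEY;
    adjust the list if your configuration uses different names.
    """
    env = os.environ if env is None else env
    required = ["OPENAI_API_KEY", "PINECONE_API_KEY"]
    return [name for name in required if not env.get(name)]
```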

13 changes: 7 additions & 6 deletions babyagi.py
@@ -1,5 +1,6 @@
 #!/usr/bin/env python3
 import os
+import subprocess
 import time
 from collections import deque
 from typing import Dict, List
@@ -118,7 +119,12 @@ def openai_call(
 ):
     while True:
         try:
-            if not model.startswith("gpt-"):
+            if model.startswith("llama"):
+                # Spawn a subprocess to run llama.cpp
+                cmd = ["llama/main", "-p", prompt]
+                result = subprocess.run(cmd, stderr=subprocess.DEVNULL, stdout=subprocess.PIPE, text=True)
+                return result.stdout.strip()
+            elif not model.startswith("gpt-"):
                 # Use completion API
                 response = openai.Completion.create(
                     engine=model,
@@ -130,11 +136,6 @@ def openai_call(
                     presence_penalty=0,
                 )
                 return response.choices[0].text.strip()
-            elif model.startswith("llama"):
-                # Spawn a subprocess to run llama.cpp
-                cmd = cmd = ["llama/main", "-p", prompt]
-                result = subprocess.run(cmd, shell=True, stderr=subprocess.DEVNULL, stdout=subprocess.PIPE, text=True)
-                return result.stdout.strip()
             else:
                 # Use chat completion API
                 messages = [{"role": "user", "content": prompt}]
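The point of this reordering is that the `llama` branch must be tested before the catch-all `not model.startswith("gpt-")` branch, which would otherwise swallow it. A standalone sketch of the resulting dispatch order (names here are illustrative, not from the repository):

```python
def route_model(model: str) -> str:
    """Mirror the branch order after this commit: llama is checked first,
    so it is no longer shadowed by the generic non-gpt completion branch."""
    if model.startswith("llama"):
        return "llama.cpp subprocess"
    elif not model.startswith("gpt-"):
        return "completion API"
    else:
        return "chat completion API"
```

Before the fix, a model named `llama` would have fallen into the completion branch, since `"llama"` does not start with `"gpt-"`.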
8 changes: 6 additions & 2 deletions extensions/argparseext.py
@@ -43,9 +43,13 @@ def parse_arguments():
     initial task description. must be quoted.
     if not specified, get initial_task from environment.
     ''', default=os.getenv("INITIAL_TASK", os.getenv("FIRST_TASK", "")))
-    parser.add_argument('-4', '--gpt-4', dest='openai_api_model', action='store_const', const="gpt-4", help='''
-    use GPT-4 instead of the default GPT-3 model.
+    group = parser.add_mutually_exclusive_group()
+    group.add_argument('-4', '--gpt-4', dest='openai_api_model', action='store_const', const="gpt-4", help='''
+    use GPT-4 instead of the default model.
     ''', default=os.getenv("OPENAI_API_MODEL", "gpt-3.5-turbo"))
+    group.add_argument('-l', '--llama', dest='openai_api_model', action='store_const', const="llama", help='''
+    use LLaMa instead of the default model. Requires llama.cpp.
+    ''')
     # This will parse -e again, which we want, because we need
     # to load those in the main file later as well
     parser.add_argument('-e', '--env', nargs='+', help='''
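The mutually exclusive group added above means `-4` and `-l` cannot be combined. A self-contained sketch of the same pattern (with the default hard-coded here rather than read from the environment):

```python
import argparse

parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group()
# The default on the first action covers the shared dest.
group.add_argument("-4", "--gpt-4", dest="openai_api_model",
                   action="store_const", const="gpt-4",
                   default="gpt-3.5-turbo")
group.add_argument("-l", "--llama", dest="openai_api_model",
                   action="store_const", const="llama")

args = parser.parse_args(["-l"])
print(args.openai_api_model)  # llama
```

Passing both `-4` and `-l` now exits with a usage error instead of silently letting the last flag win.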
102 changes: 0 additions & 102 deletions extensions/argsparser.py

This file was deleted.

41 changes: 0 additions & 41 deletions extensions/ray_objectives.py

This file was deleted.

69 changes: 0 additions & 69 deletions extensions/ray_tasks.py

This file was deleted.
