This repository has been archived by the owner on May 12, 2023. It is now read-only.

Commit

updated readme
Richard Guo authored and Richard Guo committed May 11, 2023
1 parent 463037e commit 8d8470d
Showing 2 changed files with 89 additions and 76 deletions.
83 changes: 7 additions & 76 deletions README.md
@@ -1,82 +1,13 @@
# Pygpt4all

We've moved the Python bindings into the main gpt4all repo.

Future development, issues, and the like will be handled in the main repo.

This repo will be archived and set to read-only.

The main repo is here: https://github.com/nomic-ai/gpt4all

The Python bindings subdirectory is here: https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings/python

Thank you!
82 changes: 82 additions & 0 deletions old-README.md
@@ -0,0 +1,82 @@
# PyGPT4All

Official Python CPU inference for [GPT4All](https://github.com/nomic-ai/gpt4all) language models based on [llama.cpp](https://github.com/ggerganov/llama.cpp) and [ggml](https://github.com/ggerganov/ggml)

[![License: MIT](https://img.shields.io/badge/license-MIT-blue.svg)](https://opensource.org/licenses/MIT)
[![PyPi version](https://badgen.net/pypi/v/pygpt4all)](https://pypi.org/project/pygpt4all/)

<!-- TOC -->

- [Installation](#installation)
- [Tutorial](#tutorial)
- [Model instantiation](#model-instantiation)
- [Simple generation](#simple-generation)
- [Interactive Dialogue](#interactive-dialogue)
- [API reference](#api-reference)
- [License](#license)
<!-- TOC -->

# Installation

```bash
pip install pygpt4all
```

# Tutorial

You will first need to download the model weights. You can find and download all the supported models from [here](https://github.com/nomic-ai/gpt4all-chat#manual-download-of-models).

### Model instantiation

Once the weights are downloaded, you can instantiate the models as follows:

- GPT4All model

```python
from pygpt4all import GPT4All

model = GPT4All('path/to/ggml-gpt4all-l13b-snoozy.bin')
```

- GPT4All-J model

```python
from pygpt4all import GPT4All_J

model = GPT4All_J('path/to/ggml-gpt4all-j-v1.3-groovy.bin')
```
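Both classes take a plain filesystem path to the weights. A small guard can give a clearer error when the file is missing than a failed load would; this is a hypothetical helper, not part of the library:

```python
import os

def checked_model_path(path):
    """Return the path unchanged if the weight file exists,
    otherwise raise a clear, actionable error."""
    if not os.path.isfile(path):
        raise FileNotFoundError(
            f"Model weights not found at {path!r}; download them first.")
    return path

# Usage with either class, e.g.:
# model = GPT4All(checked_model_path('path/to/ggml-gpt4all-l13b-snoozy.bin'))
```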

### Simple generation

The `generate` function yields new tokens generated from the `prompt` given as input:

```python
for token in model.generate("Tell me a joke ?\n"):
    print(token, end='', flush=True)
```
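If you also want the full response as a string, a small helper (hypothetical, not part of the library; it works with any iterable of tokens such as the generator returned by `generate`) can collect the tokens while printing them:

```python
def stream_print(tokens):
    """Print tokens as they arrive and return the full generated text."""
    parts = []
    for token in tokens:
        print(token, end='', flush=True)
        parts.append(token)
    print()
    return ''.join(parts)

# With a real model: answer = stream_print(model.generate("Tell me a joke ?\n"))
```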

### Interactive Dialogue

You can set up an interactive dialogue by simply keeping the `model` variable alive:

```python
while True:
    try:
        prompt = input("You: ")
        if prompt == '':
            continue
        print("AI:", end='')
        for token in model.generate(prompt):
            print(token, end='', flush=True)
        print()
    except KeyboardInterrupt:
        break
```
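To give the model the conversation so far rather than each prompt in isolation, one option is to accumulate a history and rebuild the prompt each turn. This is a sketch, not a library feature, and the `You:`/`AI:` prompt format is an assumption:

```python
def chat_turn(generate, history, user_input):
    """Run one dialogue turn: record the user's line, build a prompt
    from the running history, generate a reply, and record it.

    `generate` is any callable that yields tokens for a prompt string
    (e.g. model.generate); `history` is a list of prior dialogue lines."""
    history.append(f"You: {user_input}")
    prompt = "\n".join(history) + "\nAI:"
    reply = ''.join(generate(prompt))
    history.append(f"AI:{reply}")
    return reply

# With a real model, inside the loop above:
# reply = chat_turn(model.generate, history, prompt)
```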

# API reference

You can check the [API reference documentation](https://nomic-ai.github.io/pygpt4all/) for more details.

# License

This project is licensed under the MIT [License](./LICENSE).
