This project is a Streamlit-based web application that implements a personal gym chatbot. It uses the Ollama API for local AI processing and provides a user-friendly interface for interacting with different language models.
## Features

- Dark-themed user interface with custom CSS styling
- Configurable model selection (llama2, mistral, codellama)
- Adjustable temperature setting for AI responses
- Chat history persistence using Streamlit session state
- Real-time chat interface with user and assistant messages
- Clear chat functionality
- Error handling for API requests
## Requirements

- Python 3.8+
- Streamlit
- Requests library
- Ollama server running locally on port 11434
## Installation

- Install required Python packages:

  ```bash
  pip install streamlit requests
  ```

- Ensure Ollama is installed and running locally:
  - Follow Ollama's official documentation for installation
  - Start the Ollama server:

    ```bash
    ollama serve
    ```
## Project Structure

```
project_directory/
│
├── app.py       # Main application code
└── README.md    # This documentation file
```
## Usage

- Run the Streamlit app:

  ```bash
  streamlit run app.py
  ```

- Access the application through your web browser (typically at http://localhost:8501)
- Configure settings in the sidebar:
  - Select desired model
  - Adjust temperature slider
  - View application information
- Interact with the chatbot:
  - Enter messages in the text input field
  - Click "Send" to get AI responses
  - Use "Clear Chat" to reset the conversation
## Dependencies

- `streamlit`: For creating the web interface
- `requests`: For making API calls to Ollama
- `json`: For handling JSON data
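These presumably correspond to a small import block at the top of `app.py` along these lines:

```python
import json  # standard library; no installation needed

import requests
import streamlit as st
```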
## Code Overview

### Page Configuration

```python
st.set_page_config(
    page_title="AI Chatbot",
    page_icon="🤖",
    layout="wide"
)
```

Sets up the Streamlit page with a title, icon, and wide layout.
### Styling

Custom CSS is applied using `st.markdown` with `unsafe_allow_html=True` to create a dark theme and style various UI components.
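As a minimal sketch of the technique (the actual selectors and colors used in `app.py` may differ):

```python
import streamlit as st

# Inject a dark background and light text; unsafe_allow_html lets
# Streamlit render the raw <style> tag instead of escaping it.
st.markdown(
    """
    <style>
    .stApp {
        background-color: #1e1e1e;
        color: #fafafa;
    }
    </style>
    """,
    unsafe_allow_html=True,
)
```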
### Session State

```python
if 'messages' not in st.session_state:
    st.session_state.messages = []
```

Initializes chat history storage using Streamlit's session state.
### Sidebar Controls

```python
with st.sidebar:
    st.title("⚙️ Configuration")
    model = st.selectbox(...)
    temperature = st.slider(...)
```

Creates a sidebar for model selection and temperature adjustment.
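The elided widget arguments might look roughly like this, continuing the excerpt above; the labels, ranges, and default values are assumptions rather than the exact code in `app.py`:

```python
# Restrict choices to the models listed in the Features section.
model = st.selectbox("Model", ["llama2", "mistral", "codellama"])
# Lower temperatures give more deterministic replies; higher, more varied.
temperature = st.slider("Temperature", min_value=0.0, max_value=1.0, value=0.7, step=0.1)
```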
### Chat Display

- Displays the chat history in a container
- Shows user and assistant messages with proper formatting
- Uses markdown for styling
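A sketch of what this rendering loop could look like, continuing the excerpts above (the `role`/`content` message keys are an assumption about how `app.py` stores history):

```python
chat_container = st.container()
with chat_container:
    for message in st.session_state.messages:
        # Label each entry by speaker and render the text as markdown.
        speaker = "You" if message["role"] == "user" else "Assistant"
        st.markdown(f"**{speaker}:** {message['content']}")
```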
### Ollama API Integration

```python
def query_ollama(prompt, model_name, temp):
    response = requests.post("http://localhost:11434/api/generate", ...)
```

Handles API calls to the local Ollama server for generating responses.
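Filled out against Ollama's `/api/generate` endpoint, the function might look roughly like this; the exact payload and return handling in `app.py` are assumptions. With `"stream": False`, Ollama returns the whole reply as a single JSON object whose generated text sits under the `response` key:

```python
import requests

def query_ollama(prompt, model_name, temp):
    # POST a single non-streaming generation request to the local server.
    response = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": model_name,
            "prompt": prompt,
            "stream": False,  # one JSON object instead of a token stream
            "options": {"temperature": temp},
        },
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]
```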
### User Input

- Text input for user messages
- Send button to trigger AI responses
- Clear chat button to reset the conversation
- Spinner during API processing
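A rough sketch of how these controls can be wired up (the widget labels and two-column layout are assumptions); handling of the Send click itself is shown in the next section:

```python
# Input row: a text field plus Send / Clear Chat buttons side by side.
user_input = st.text_input("Your message")

col1, col2 = st.columns(2)
with col1:
    send_clicked = st.button("Send")
with col2:
    if st.button("Clear Chat"):
        # Drop the stored history; the next re-run renders an empty chat.
        st.session_state.messages = []
```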
### Input Validation and Error Handling

- Checks for valid user input
- Handles API request errors
- Displays warning messages when appropriate
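For example, the Send branch from the previous sketch can be hardened roughly like this (the warning and error texts are assumptions):

```python
if send_clicked:
    if not user_input.strip():
        # Guard against empty submissions before calling the API.
        st.warning("Please enter a message first.")
    else:
        try:
            # The spinner gives feedback while the request is in flight.
            with st.spinner("Thinking..."):
                reply = query_ollama(user_input, model, temperature)
            st.session_state.messages.append({"role": "user", "content": user_input})
            st.session_state.messages.append({"role": "assistant", "content": reply})
        except requests.exceptions.RequestException as exc:
            # Covers connection failures, timeouts, and HTTP error statuses.
            st.error(f"Could not reach the Ollama server: {exc}")
```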
## Limitations

- Requires local Ollama server
- Limited to supported models (llama2, mistral, codellama)
- No persistent storage for chat history
- Basic error handling for API failures
## Future Improvements

- Add persistent storage for chat history
- Implement message streaming
- Add support for more models
- Enhance error handling
- Add conversation export functionality
## Troubleshooting

- Ensure Ollama server is running on port 11434
- Verify model availability in Ollama
- Check internet connection for package installation
- Monitor Streamlit logs for errors
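The first two points can be checked quickly against Ollama's `/api/tags` endpoint, which lists the locally installed models; this standalone snippet is a convenience sketch, not part of `app.py`:

```python
import requests

try:
    # /api/tags lists the models installed on the local Ollama server.
    resp = requests.get("http://localhost:11434/api/tags", timeout=5)
    resp.raise_for_status()
    names = [m["name"] for m in resp.json().get("models", [])]
    print("Ollama is up. Installed models:", names or "none")
except requests.exceptions.RequestException as exc:
    print("Ollama server not reachable on port 11434:", exc)
```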