A sophisticated chat bot application designed for fire safety analysis, capable of integrating with various local AI models. The application provides comprehensive support for fire safety documentation, inspections, and compliance checks.
- Support for multiple model backends:
  - Ollama (recommended)
  - LM Studio
  - Custom endpoints
- Beautiful and responsive UI
- Real-time chat interface
- Configurable model parameters
- Markdown support for rich text responses
- File upload support (up to 500GB)
- Available as both web and desktop application
- Advanced fire safety features:
  - Automatic protocol generation
  - Document analysis and classification
  - Visual inspection support
  - Compliance verification
  - Report generation
  - Case study development
- Node.js 18 or higher
- One of the following local model servers:
  - Ollama (recommended)
  - LM Studio
  - A custom, OpenAI-compatible endpoint

- Install Node.js:
  - Go to nodejs.org
  - Download and install the "LTS" (Long Term Support) version
  - Follow the installation wizard, accepting default settings
- Install Git:
  - Go to git-scm.com
  - Download and install Git for your operating system
  - Follow the installation wizard, accepting default settings
- Open a terminal:
  - On Windows: press `Windows + R`, type `cmd`, and press Enter
  - On Mac: press `Command + Space` to open Spotlight, type `terminal`, and press Enter
  - On Linux: press `Ctrl + Alt + T`
- Clone the repository: `git clone https://github.com/hellogreencow/firesafety.git`
- Navigate to the project folder: `cd firesafety`
- Install dependencies: `npm install`
- Start the development server: `npm run dev`
- Open your web browser and go to http://localhost:5555
To run the web version, run `npm run dev`. The web application will be available at:

- Development: http://localhost:5555
- Preview: http://localhost:5555

To run the desktop version in development mode, run `npm run electron:dev`.

To build the desktop application, run `npm run electron:build`. The built application will be available in the `dist-electron` directory.

Running `npm run electron:build -- --win` creates:

- Windows installer (.exe) in `dist-electron`
- Portable executable
- NSIS installer

Running `npm run electron:build -- --mac` creates:

- DMG installer
- App bundle (.app)
- Universal binary (Intel + Apple Silicon)

Running `npm run electron:build -- --linux` creates:

- AppImage
- Debian package (.deb)
- RPM package (.rpm)
- Install Ollama:
  - macOS: `brew install ollama`
  - Linux: `curl https://ollama.ai/install.sh | sh`
  - Windows:
    - Download from ollama.ai/download
    - Run the installer
    - Follow the setup wizard
- Start Ollama:
  - macOS/Linux: Ollama starts automatically after installation
  - Windows: Launch Ollama from the Start Menu
- Pull the Llava model: `ollama pull llava`
- In the Fire Safety AI Assistant:
  - Set Model Type to "Ollama"
  - The Endpoint will automatically be set to "http://localhost:11434"
  - Select "llava" from the model dropdown
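
If you want to verify the Ollama endpoint outside the app, the sketch below shows a minimal non-streaming request against Ollama's chat API using the endpoint and model configured above. The helper name is illustrative and not part of this project's code.

```typescript
// Minimal sketch of a non-streaming request to Ollama's chat API.
// Assumes Ollama is running locally and `ollama pull llava` has completed.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

async function askOllama(messages: ChatMessage[]): Promise<string> {
  const response = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llava",  // the model pulled above
      messages,        // full conversation history
      stream: false,   // return a single JSON object instead of a token stream
    }),
  });
  if (!response.ok) {
    throw new Error(`Ollama request failed with status ${response.status}`);
  }
  const data = await response.json();
  return data.message.content; // the assistant's reply text
}

// Example:
// const reply = await askOllama([{ role: "user", content: "Summarize common fire extinguisher classes." }]);
```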
- Download and install LM Studio from lmstudio.ai
- Launch LM Studio
- Download your desired model
- Start the local server (default port: 1234)
- In the Fire Safety AI Assistant:
- Set Model Type to "LM Studio"
- Set Endpoint to "http://localhost:1234/v1/chat/completions"
- Enter your model name if required
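
LM Studio's local server speaks the OpenAI chat-completions schema, so a request against the endpoint configured above looks like the sketch below. The helper name and the placeholder model name are illustrative, not part of this project.

```typescript
// Minimal sketch of a request to LM Studio's OpenAI-compatible endpoint.
async function askLmStudio(prompt: string): Promise<string> {
  const response = await fetch("http://localhost:1234/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "local-model", // replace with the model name shown in LM Studio, if required
      messages: [{ role: "user", content: prompt }],
      temperature: 0.7,     // illustrative value
    }),
  });
  if (!response.ok) {
    throw new Error(`LM Studio request failed with status ${response.status}`);
  }
  const data = await response.json();
  return data.choices[0].message.content; // OpenAI-style response shape
}
```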
For other local model setups:
- Ensure your model server is running and accessible
- In the Fire Safety AI Assistant:
- Set Model Type to "Custom"
- Set Endpoint to your model's API endpoint
- Configure any additional parameters as needed
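
Because Ollama's native API and OpenAI-compatible servers (LM Studio and many custom setups) expect different request bodies, a custom endpoint mainly needs to be paired with the right schema. The sketch below is a rough illustration of that routing; the types and function are hypothetical, not the app's actual implementation.

```typescript
// Hypothetical illustration of building a chat request per backend type.
type ModelType = "ollama" | "lmstudio" | "custom";

interface BackendConfig {
  modelType: ModelType;
  endpoint: string;   // e.g. "http://localhost:11434" or your custom URL
  modelName?: string; // required by some servers
}

function buildChatRequest(config: BackendConfig, prompt: string): { url: string; body: object } {
  const messages = [{ role: "user", content: prompt }];
  if (config.modelType === "ollama") {
    // Ollama's native chat schema
    return {
      url: `${config.endpoint}/api/chat`,
      body: { model: config.modelName ?? "llava", messages, stream: false },
    };
  }
  // LM Studio and most custom servers accept the OpenAI chat-completions schema;
  // for custom setups, point the endpoint at your server's chat route.
  return {
    url: config.endpoint,
    body: { model: config.modelName, messages },
  };
}
```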
- Model Type: Choose between Ollama, LM Studio, or Custom
- Endpoint: The URL where your model server is running
- Model Name: The name of the model to use (if applicable)
- Temperature: Controls response randomness (0.0 - 2.0)
- Max Tokens: Maximum length of generated responses
- Top P: Nucleus sampling threshold (0.0 - 1.0)
- Frequency Penalty: Reduces repetition (-2.0 to 2.0)
- Presence Penalty: Encourages new topics (-2.0 to 2.0)
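
For reference, these settings roughly correspond to a configuration object like the one below. This is an illustrative sketch; the application's actual interfaces live in src/types/index.ts and may differ.

```typescript
// Illustrative shape for the model settings listed above (not the app's actual types).
interface ModelSettings {
  modelType: "Ollama" | "LM Studio" | "Custom";
  endpoint: string;          // URL where your model server is running
  modelName?: string;        // e.g. "llava" for Ollama
  temperature: number;       // 0.0 - 2.0: higher values give more random responses
  maxTokens: number;         // maximum length of generated responses
  topP: number;              // 0.0 - 1.0: nucleus sampling threshold
  frequencyPenalty: number;  // -2.0 to 2.0: reduces repetition
  presencePenalty: number;   // -2.0 to 2.0: encourages new topics
}

// Example defaults (illustrative values only):
const defaults: ModelSettings = {
  modelType: "Ollama",
  endpoint: "http://localhost:11434",
  modelName: "llava",
  temperature: 0.7,
  maxTokens: 2048,
  topP: 1.0,
  frequencyPenalty: 0,
  presencePenalty: 0,
};
```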
The application supports file uploads for analysis:
- Drag and drop files into the chat
- Click the upload button to select files
- Maximum file size: 500GB per file
- Supports various file types for analysis
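
As a rough sketch of how the size limit could be enforced on the client (hypothetical helper, not the app's actual upload code):

```typescript
// Hypothetical client-side check against the 500GB per-file limit.
const MAX_FILE_SIZE_BYTES = 500 * 1024 ** 3; // 500GB

function validateUpload(file: File): string | null {
  if (file.size > MAX_FILE_SIZE_BYTES) {
    return `"${file.name}" exceeds the 500GB per-file limit.`;
  }
  return null; // null means the file is acceptable
}

// Example: checking files from a drag-and-drop event
function handleDrop(event: DragEvent): void {
  event.preventDefault();
  for (const file of Array.from(event.dataTransfer?.files ?? [])) {
    const error = validateUpload(file);
    if (error) {
      console.warn(error);
      continue;
    }
    // ...hand the file to the chat for analysis
  }
}
```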
Advanced fire safety features:

- Protocol generation:
  - Automatic maintenance protocol creation
  - Fire protection report generation
  - Escape route documentation
  - Inspection checklists
- Document analysis:
  - AI-powered document classification
  - Key information extraction
  - Compliance requirement identification
  - Summary generation
- Visual inspection:
  - Image analysis for safety violations
  - Deficiency documentation
  - Recommendation generation
  - Progress tracking
- Compliance verification:
  - Regulation verification
  - Standard compliance
  - Gap analysis
  - Adjustment recommendations
- Report generation:
  - Executive summaries
  - Detailed inspection reports
  - Case studies
  - Presentation materials
├── electron/
│   └── main.js                # Electron main process
├── src/
│   ├── components/
│   │   ├── Chat.tsx           # Main chat interface
│   │   └── ModelSelector.tsx  # Model configuration UI
│   ├── types/
│   │   ├── index.ts           # TypeScript interfaces
│   │   └── prompts.ts         # System prompts
│   ├── lib/
│   │   └── db.ts              # Database operations
│   ├── App.tsx                # Main application component
│   └── main.tsx               # Application entry point
└── package.json
This project is licensed under the MIT License - see the LICENSE file for details.