
Fire Safety AI Assistant

A chatbot application for fire safety analysis that integrates with local AI models. The application provides comprehensive support for fire safety documentation, inspections, and compliance checks.

Features

  • 🤖 Support for multiple model backends:
    • Ollama (recommended)
    • LM Studio
    • Custom endpoints
  • 🎨 Beautiful and responsive UI
  • ⚡ Real-time chat interface
  • 🛠️ Configurable model parameters
  • 📝 Markdown support for rich text responses
  • 📁 File upload support (up to 500GB)
  • 🖥️ Available as both web and desktop application
  • 🔍 Advanced fire safety features:
    • Automatic protocol generation
    • Document analysis and classification
    • Visual inspection support
    • Compliance verification
    • Report generation
    • Case study development

Prerequisites

  • Node.js 18 or higher
  • One of the following local model servers:
    • Ollama (recommended)
    • LM Studio
    • A custom endpoint of your choice

Installation

For Beginners (Step-by-Step Guide)

Step 1: Install Required Software

  1. Install Node.js:

    • Go to nodejs.org
    • Download and install the "LTS" (Long Term Support) version
    • Follow the installation wizard, accepting default settings
  2. Install Git:

    • Go to git-scm.com
    • Download and install Git for your operating system
    • Follow the installation wizard, accepting default settings

Step 2: Open Terminal/Command Prompt

  • On Windows:

    1. Press Windows + R on your keyboard
    2. Type cmd and press Enter
  • On Mac:

    1. Press Command + Space to open Spotlight
    2. Type terminal and press Enter
  • On Linux:

    1. Press Ctrl + Alt + T

Step 3: Clone and Set Up the Project

  1. Clone the repository:

    git clone https://github.com/hellogreencow/firesafety.git
  2. Navigate to the project folder:

    cd firesafety
  3. Install dependencies:

    npm install

Step 4: Start the Application

  1. Start the development server:

    npm run dev
  2. Open your web browser and go to:

    http://localhost:5555
    

For Experienced Users

  1. Clone the repository:

    git clone https://github.com/hellogreencow/firesafety.git
    cd firesafety
  2. Install dependencies:

    npm install

Running the Application

Web Version

To run the web version:

npm run dev

The web application will be available at:

    http://localhost:5555
Desktop Version

To run the desktop version in development mode:

npm run electron:dev

To build the desktop application:

npm run electron:build

The built application will be available in the dist-electron directory.

Building Desktop Apps

Windows

npm run electron:build -- --win

Creates:

  • Windows installer (.exe) in dist-electron
  • Portable executable
  • NSIS installer

macOS

npm run electron:build -- --mac

Creates:

  • DMG installer
  • App bundle (.app)
  • Universal binary (Intel + Apple Silicon)

Linux

npm run electron:build -- --linux

Creates:

  • AppImage
  • Debian package (.deb)
  • RPM package (.rpm)

Model Setup

Using Ollama (Recommended)

  1. Install Ollama:

    • macOS:
      brew install ollama
    • Linux:
      curl https://ollama.ai/install.sh | sh
    • Windows: download and run the installer from ollama.ai
  2. Start Ollama:

    • macOS/Linux: Ollama starts automatically after installation
    • Windows: Launch Ollama from the Start Menu
  3. Pull the Llava model:

    ollama pull llava
  4. In the Fire Safety AI Assistant:

    • Set Model Type to "Ollama"
    • Endpoint will automatically be set to "http://localhost:11434"
    • Select "llava" from the model dropdown

Using LM Studio

  1. Download and install LM Studio from lmstudio.ai
  2. Launch LM Studio
  3. Download your desired model
  4. Start the local server (default port: 1234)
  5. In the Fire Safety AI Assistant:

    • Set Model Type to "LM Studio"
    • Set Endpoint to "http://localhost:1234" (the default server port)
    • Select your downloaded model
Using Custom Endpoints

For other local model setups:

  1. Ensure your model server is running and accessible
  2. In the Fire Safety AI Assistant:
    • Set Model Type to "Custom"
    • Set Endpoint to your model's API endpoint
    • Configure any additional parameters as needed
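Many local model servers, LM Studio included, expose an OpenAI-compatible chat completions route, so a custom endpoint can often be driven with the standard payload shape. The sketch below assumes such an endpoint; `buildChatPayload`, `askCustomEndpoint`, and the model name `"local-model"` are illustrative placeholders.

```typescript
// Sketch of a request to an OpenAI-compatible local endpoint
// (e.g. LM Studio's default http://localhost:1234/v1/chat/completions).
// Field names follow the OpenAI chat completions format; helper names
// here are hypothetical.

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildChatPayload(
  model: string,
  messages: ChatMessage[],
  temperature = 0.7
) {
  return { model, messages, temperature };
}

async function askCustomEndpoint(
  endpoint: string,
  prompt: string
): Promise<string> {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(
      buildChatPayload("local-model", [{ role: "user", content: prompt }])
    ),
  });
  const data = await res.json();
  return data.choices[0].message.content; // OpenAI-style response shape
}
```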

Model Configuration

Basic Settings

  • Model Type: Choose between Ollama, LM Studio, or Custom
  • Endpoint: The URL where your model server is running
  • Model Name: The name of the model to use (if applicable)

Advanced Parameters

  • Temperature: Controls response randomness (0.0 - 2.0)
  • Max Tokens: Maximum length of generated responses
  • Top P: Nucleus sampling threshold (0.0 - 1.0)
  • Frequency Penalty: Reduces repetition (-2.0 to 2.0)
  • Presence Penalty: Encourages new topics (-2.0 to 2.0)
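The ranges above can be enforced before a request is sent by clamping each value; the sketch below shows one way to do that. `ModelParams` and `clampParams` are hypothetical names for illustration, not part of the application's actual API.

```typescript
// Illustrative validation of the advanced parameters, clamping each
// value to its documented range. Names here are hypothetical.

interface ModelParams {
  temperature: number;      // 0.0 to 2.0
  maxTokens: number;        // positive integer
  topP: number;             // 0.0 to 1.0
  frequencyPenalty: number; // -2.0 to 2.0
  presencePenalty: number;  // -2.0 to 2.0
}

const clamp = (v: number, lo: number, hi: number) =>
  Math.min(hi, Math.max(lo, v));

function clampParams(p: ModelParams): ModelParams {
  return {
    temperature: clamp(p.temperature, 0, 2),
    maxTokens: Math.max(1, Math.floor(p.maxTokens)),
    topP: clamp(p.topP, 0, 1),
    frequencyPenalty: clamp(p.frequencyPenalty, -2, 2),
    presencePenalty: clamp(p.presencePenalty, -2, 2),
  };
}
```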

File Upload Support

The application supports file uploads for analysis:

  • Drag and drop files into the chat
  • Click the upload button to select files
  • Maximum file size: 500GB per file
  • Supports various file types for analysis

Fire Safety Features

1. Protocol Generation

  • Automatic maintenance protocol creation
  • Fire protection report generation
  • Escape route documentation
  • Inspection checklists

2. Document Analysis

  • AI-powered document classification
  • Key information extraction
  • Compliance requirement identification
  • Summary generation

3. Visual Inspections

  • Image analysis for safety violations
  • Deficiency documentation
  • Recommendation generation
  • Progress tracking

4. Compliance Checks

  • Regulation verification
  • Standard compliance
  • Gap analysis
  • Adjustment recommendations

5. Report Generation

  • Executive summaries
  • Detailed inspection reports
  • Case studies
  • Presentation materials

Development

Project Structure

├── electron/
│   └── main.js               # Electron main process
├── src/
│   ├── components/
│   │   ├── Chat.tsx          # Main chat interface
│   │   └── ModelSelector.tsx # Model configuration UI
│   ├── types/
│   │   ├── index.ts          # TypeScript interfaces
│   │   └── prompts.ts        # System prompts
│   ├── lib/
│   │   └── db.ts             # Database operations
│   ├── App.tsx               # Main application component
│   └── main.tsx              # Application entry point
└── package.json

License

This project is licensed under the MIT License - see the LICENSE file for details.

About

AI-powered fire safety assistant
