⚡ Quick Shortcut AI LLM Assistant

A lightweight, ultra-fast Windows native application for AI-powered text assistance with global keyboard shortcuts.



🎯 What is This?

A modern Windows assistant that brings AI to your fingertips with one keyboard shortcut:

  1. Ctrl + Right-Click anywhere
  2. Choose an action (Summarize, Translate, Custom Prompt, etc.)
  3. Get AI response instantly with streaming

Supports: Ollama (local), OpenAI, Anthropic Claude, and more.
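Streaming from Ollama, for instance, uses its newline-delimited JSON protocol on the documented `/api/generate` endpoint. A minimal client-side sketch (function and model names are illustrative, not this project's API):

```python
import json

def iter_tokens(ndjson_lines):
    """Yield text fragments from Ollama-style streaming NDJSON chunks.

    Each line is a JSON object; the partial text is in the "response"
    field, and the final chunk sets "done": true.
    """
    for line in ndjson_lines:
        if not line.strip():
            continue
        chunk = json.loads(line)
        yield chunk.get("response", "")
        if chunk.get("done"):
            break

# Against a live server, the lines would come from something like:
#   import requests
#   r = requests.post("http://localhost:11434/api/generate",
#                     json={"model": "llama3", "prompt": "Hi", "stream": True},
#                     stream=True)
#   for token in iter_tokens(r.iter_lines(decode_unicode=True)):
#       ...  # append token to the response window
```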


✨ Key Features

| Feature | Status | Details |
|---------|--------|---------|
| 🔌 Multi-LLM Support | ✅ Phase 1 | Ollama, OpenAI, Anthropic (pluggable) |
| ⌨️ Global Shortcuts | 🚧 Phase 2 | Ctrl+Right-Click menu system |
| 💬 Streaming Chat | 🚧 Phase 2 | Real-time token streaming |
| 📸 Screenshot + Vision | 📋 Phase 3 | AI analyzes images natively |
| 💾 History | 📋 Phase 4 | SQLite-based conversation storage |
| 🎨 Modern UI | 🚧 Phase 2 | Dark/Light themes, smooth animations |
| ⚡ Ultra-Fast | 🎯 Target | < 2s startup, < 100ms menu latency |

🚀 Quick Start

Prerequisites

  • Windows 10 (1809+) or Windows 11
  • Python 3.10+
  • One LLM: Ollama (local) OR OpenAI/Anthropic API key

Installation (Development)

# 1. Clone repo
git clone <repo-url>
cd app-quick-shortcut-AI-LLM

# 2. Create virtual environment
python -m venv venv
.\venv\Scripts\activate

# 3. Install dependencies
pip install -r requirements.txt

# 4. Run tests to verify setup
pytest tests/ -v

First Run

# App entry point not yet runnable - for now, tests only
python -m pytest tests/ -v

📋 Project Status

✅ Phase 1: Foundation (COMPLETE)

  • LLM Provider abstraction (Ollama, OpenAI, Anthropic)
  • Configuration service (JSON persistence)
  • Health checks (startup validation)
  • 43 unit tests (87% coverage)
  • Ready for UI layer
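The provider abstraction plus factory can be sketched roughly like this (class, function, and dictionary names are illustrative, not the project's actual API):

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """Common interface every LLM backend implements."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class OllamaProvider(LLMProvider):
    def __init__(self, base_url: str = "http://localhost:11434"):
        self.base_url = base_url

    def complete(self, prompt: str) -> str:
        # A real implementation would POST to {base_url}/api/generate.
        raise NotImplementedError

# Registry mapping the config "type" key to an implementation.
_PROVIDERS: dict[str, type[LLMProvider]] = {"ollama": OllamaProvider}

def create_provider(config: dict) -> LLMProvider:
    """Factory: pick the implementation named by config["type"]."""
    cls = _PROVIDERS[config["type"]]
    return cls(base_url=config.get("base_url", "http://localhost:11434"))
```

New backends (OpenAI, Anthropic) then only need a subclass and a registry entry, which is what makes the design pluggable.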

✅ Phase 2: UI Core (COMPLETE - Unit Tests)

  • Global keyboard hooks (Ctrl+Right-Click) - unit tested
  • Floating context menu (6 actions) - unit tested
  • Response streaming window - unit tested
  • Keyboard shortcuts manager - unit tested
  • System tray icon - unit tested
  • Main app orchestration - unit tested
  • 213 unit tests (89% coverage)
  • Components signal-connected
  • ⚠️ No integration/visual tests yet (moved to Phase 3)

🚧 Phase 3: LLM Integration & Testing (IN PROGRESS - UAT PAUSED)

Status: Code complete (365 tests passing), but critical UI bugs are blocking UAT

  • Real LLM streaming (Task #1 - done)
  • Clipboard manager (Task #1 - done)
  • Markdown rendering (Task #3 - done)
  • Settings dialog (Task #4 - done, but closes app ❌)
  • Auto-paste (Task #5 - done)
  • Integration Tests (Task #6 - 365 tests passing ✅)
  • 🔴 Chat streaming (works but freezes UI ❌)
  • 🔴 Floating context menu (appears but unclickable ❌)
  • 🔴 Status icon (shows wrong state ❌)

Blocking Issues:

  1. Settings dialog closes entire app
  2. Status icon stays red with valid config
  3. Chat UI freezes during LLM response
  4. Menu items not clickable
  5. Windows system menu appears alongside app menu

Next: Fix 5 critical issues before continuing UAT
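For context on issue 3: a chat window that freezes during the response usually means the blocking stream is running on the GUI thread. A framework-agnostic sketch of the usual fix, using only the standard library (names are illustrative; in Qt the queue-draining would typically be a signal or timer slot instead):

```python
import queue
import threading

def stream_in_background(token_source, out: "queue.Queue") -> threading.Thread:
    """Run a blocking token stream off the GUI thread.

    The worker pushes each token into a thread-safe queue, then a None
    sentinel when finished; the GUI thread drains the queue on a timer
    tick so it never blocks on network I/O.
    """
    def worker():
        for token in token_source():   # blocking network iteration
            out.put(token)
        out.put(None)                  # sentinel: stream finished
    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return t

def drain(out: "queue.Queue") -> list:
    """What a UI timer callback would do each tick: take what's ready."""
    tokens = []
    while True:
        try:
            item = out.get_nowait()
        except queue.Empty:
            break
        if item is None:               # stream ended
            break
        tokens.append(item)
    return tokens
```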

📋 Phase 4: Polish (PLANNED)

  • SQLite history with search
  • Toast notifications
  • Dark/Light theme system (QSS)
  • Performance profiling and optimization

📋 Phase 5: Release (PLANNED)

  • Packaging with Nuitka (< 50MB exe, < 2s startup)
  • Final documentation & user guide
  • Release on GitHub

🧪 Testing

Run All Tests

# With coverage report
pytest tests/ -v --cov=src --cov-report=term-missing

# Summary
pytest tests/ -v

Expected Output (Phase 1 core modules; the full suite now has 365 tests)

✅ test_llm_provider.py     : 18 tests passing
✅ test_config_service.py   : 14 tests passing
✅ test_health_check.py     : 11 tests passing
======================== 43 passed in 0.56s ========================
Coverage: 87% ⭐

Test Coverage by Module

llm_provider.py    : 89% ⭐
config_service.py  : 93% ⭐
health_check.py    : 85% ⭐

For more details, see PHASE_1_REVIEW.md and VSCODE_TESTING_GUIDE.md.


πŸ“ Project Structure

app-quick-shortcut-ai-llm/
├── src/                           # Application code
│   ├── core/                      # Core logic
│   │   ├── llm_provider.py        # Abstract LLM interface + Factory
│   │   ├── ollama_provider.py     # Ollama implementation
│   │   ├── openai_provider.py     # OpenAI implementation
│   │   ├── anthropic_provider.py  # Anthropic implementation
│   │   └── config_service.py      # Configuration (Singleton)
│   ├── ui/                        # UI components (Phase 2+)
│   ├── services/                  # Services
│   │   └── health_check.py        # Startup validation
│   └── utils/                     # Utilities
├── tests/                         # Test suite
│   ├── test_core/                 # Core tests
│   └── test_services/             # Service tests
├── assets/                        # Icons, styles (Phase 2+)
├── docs/                          # Documentation
├── requirements.txt               # Python dependencies
├── pyproject.toml                 # Project metadata
├── pytest.ini                     # Pytest configuration
├── SPEC.md                        # Technical specification
├── TODO.md                        # Implementation roadmap
├── LICENSE                        # GPL-3.0
└── README.md                      # This file

βš™οΈ Configuration

Location

%APPDATA%\QuickShortcutAI\config.json

Example Config

{
  "providers": [
    {
      "id": "ollama-local",
      "type": "ollama",
      "base_url": "http://localhost:11434",
      "enabled": true
    }
  ],
  "appearance": {
    "theme": "dark",
    "font_family": "Segoe UI",
    "font_size": 11
  },
  "behavior": {
    "auto_paste_enabled": false,
    "auto_paste_delay_ms": 100
  }
}

See SPEC.md for complete configuration schema.


📚 Documentation

See SPEC.md, TODO.md, PHASE_1_REVIEW.md, and VSCODE_TESTING_GUIDE.md in the repository, plus the docs/ folder.

🔧 Development

Code Style

  • Formatter: black (auto-formatted)
  • Linter: ruff
  • Type Hints: Where relevant (not strict)
  • Docstrings: Google style

VS Code Setup

Recommended extensions:
- Python (Microsoft)
- Pylance (Microsoft)
- Python Test Explorer (Little Fox Team)

Open command palette (Ctrl+Shift+P):

"Python: Select Interpreter" β†’ Choose venv interpreter
"Test: Focus on Test Explorer View" β†’ See all tests

🎯 Performance Targets

| Metric | Target | Current |
|--------|--------|---------|
| App startup | < 2s | TBD (Phase 2) |
| Menu latency | < 100ms | TBD (Phase 2) |
| Streaming response | Smooth 60fps | TBD (Phase 2) |
| Memory (idle) | < 150MB | TBD (Phase 2) |
| Exe size | < 50MB | TBD (Phase 5) |

πŸ› Troubleshooting

Tests not found in VS Code?

# 1. Refresh test explorer (icon in VS Code)
# 2. Ensure pytest installed in venv:
pip install pytest pytest-cov
# 3. Restart VS Code

Import errors?

# Verify PYTHONPATH includes src/ (PowerShell)
$env:PYTHONPATH = "$env:PYTHONPATH;$(Get-Location)\src"
python -m pytest tests/ -v

# Or, in a POSIX shell:
export PYTHONPATH=$PYTHONPATH:$(pwd)/src

Need more details?

See VSCODE_TESTING_GUIDE.md for comprehensive troubleshooting.


🤝 Contributing

This is a personal project, but improvements are welcome:

  1. Fork the repo
  2. Create feature branch (git checkout -b feature/amazing-feature)
  3. Add tests for your changes
  4. Ensure all tests pass (pytest tests/ -v)
  5. Submit pull request

📄 License

This project is licensed under GPL-3.0 - see LICENSE file for details.


🚦 Next Steps

For Users

  1. ✅ Read this README
  2. ✅ Check SPEC.md for features
  3. 📋 Download the exe and try it (packaged release coming in Phase 5)

For Developers

  1. ✅ Run tests: pytest tests/ -v
  2. ✅ Read PHASE_1_REVIEW.md
  3. 🚧 Phase 3: Help fix the five blocking UI issues
  4. 📋 See TODO.md for detailed roadmap

📞 Support


Built with ❤️ using Python + PySide6 + pytest

Last updated: 2026-02-17 | Phase 3 In Progress | UAT Paused - Critical Issues Identified
