An AI-powered blog generation system that creates high-quality articles using web research and multi-agent processing.
AutoBlogger is a content generation platform that uses a multi-agent architecture to research topics, write drafts, and edit final articles. It combines real-time web search with OpenAI or Google Gemini language models to produce well-researched, engaging content.
- Multi-Agent Architecture: Specialized agents for research, writing, and editing
- Real-time Web Research: Integration with Tavily search for current information
- FastAPI REST API: Modern API for web applications
- User Authentication: Clerk integration for secure user management
- Credit System: Usage tracking and billing management
- Multiple Output Formats: Markdown articles with JSON metadata
- Python 3.13+
- LLM API Key: Either OpenAI API key OR Google Gemini API key
- Tavily API key (for web search)
- Install dependencies using uv (recommended):
# From the project root
cd src
uv sync
Or with pip:
cd src
pip install -r requirements.txt
- Create environment file:
# Create .env file in the src directory (if it doesn't exist)
cd src
touch .env
# Then edit src/.env with your API keys (see Configuration section below)
- Configure your API keys and LLM provider in `.env`:
For OpenAI (default):
LLM_PROVIDER=openai
OPENAI_API_KEY=your_openai_key_here
TAVILY_API_KEY=your_tavily_key_here
For Google Gemini:
LLM_PROVIDER=gemini
GEMINI_API_KEY=your_gemini_key_here
TAVILY_API_KEY=your_tavily_key_here
Note: If `LLM_PROVIDER` is not set, the system defaults to OpenAI.
# From the src directory
cd src
python cli.py "Your Topic Here"
# Using uv (recommended)
cd src
uv run python cli.py "Your Topic"
# From the src directory
cd src
uvicorn api.main:app --reload
# Or using the provided script
cd src
python scripts/run_api.py
# Or from project root using Makefile
make backend
# Or using the quick start script (recommended for development)
./start.sh
autoblogger/
├── src/ # Main source code directory
│ ├── agents/ # Multi-agent system
│ ├── api/ # FastAPI application
│ ├── apps/ # Application modules
│ ├── core/ # Core services
│ ├── tools/ # Utility tools
│ ├── configs/ # Configuration files
│ ├── scripts/ # Utility scripts
│ ├── tests/ # Test suite
│ ├── cli.py # Command-line interface
│ └── pyproject.toml # Python dependencies
├── frontend/ # Next.js frontend (separate)
├── docs/ # Documentation
├── Makefile # Build automation
├── start.sh # Quick start script
└── README.md # This file
- WorkflowState: Central state management for the entire generation process
- AbstractAgent: Base class for all specialized agents
- BloggerManagerAgent: Orchestrates the complete workflow
- ResearchAgent: Conducts web research using Tavily search
- WritingAgent: Creates draft content based on research findings
- EditorAgent: Refines and finalizes the content
- LLM Services: Configurable LLM providers (OpenAI or Google Gemini)
- OpenAIService: GPT-4 and other OpenAI models
- GeminiService: Google Gemini 2.5 Flash and other Gemini models
- TavilySearch: Web search integration for research
- FastAPI: REST API for web frontend integration
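The sketch below shows, in heavily simplified and hypothetical form, how these components could fit together: a manager agent threads a shared `WorkflowState` through research, writing, and editing. The method names (`process`, `run`) and field names are illustrative assumptions, not the project's actual API; the real implementations live under `src/agents/` and `src/core/`.

```python
# Illustrative sketch only: real classes live in src/agents/ and src/core/.
# Method and field names here are assumptions, not the project's API.
from dataclasses import dataclass, field


@dataclass
class WorkflowState:
    """Central state passed between agents during generation."""
    topic: str
    research_notes: list[str] = field(default_factory=list)
    draft: str = ""
    final_article: str = ""


class AbstractAgent:
    """Base class: each agent reads and updates the shared state."""
    def process(self, state: WorkflowState) -> WorkflowState:
        raise NotImplementedError


class ResearchAgent(AbstractAgent):
    def process(self, state: WorkflowState) -> WorkflowState:
        # The real agent queries Tavily; here we just stub a note.
        state.research_notes.append(f"Key facts about {state.topic}")
        return state


class WritingAgent(AbstractAgent):
    def process(self, state: WorkflowState) -> WorkflowState:
        state.draft = f"# {state.topic}\n\n" + "\n".join(state.research_notes)
        return state


class EditorAgent(AbstractAgent):
    def process(self, state: WorkflowState) -> WorkflowState:
        state.final_article = state.draft.strip()
        return state


class BloggerManagerAgent:
    """Orchestrates the research -> write -> edit pipeline."""
    def run(self, topic: str) -> WorkflowState:
        state = WorkflowState(topic=topic)
        for agent in (ResearchAgent(), WritingAgent(), EditorAgent()):
            state = agent.process(state)
        return state


if __name__ == "__main__":
    print(BloggerManagerAgent().run("Example Topic").final_article)
```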
- `POST /apps/blogger/generate` - Generate blog content
- `GET /users/profile` - User profile management
- `GET /credits/balance` - Check credit balance
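As an illustration, the generation endpoint can be exercised with any HTTP client. The request body (a single `topic` field), the port, and the Clerk bearer-token header below are assumptions; check the interactive FastAPI docs at `/docs` for the actual schema.

```python
# Hypothetical client call: the payload shape ("topic") and the bearer token
# header are assumptions; inspect the FastAPI docs at /docs for the real schema.
import requests

response = requests.post(
    "http://localhost:8000/apps/blogger/generate",
    json={"topic": "The Future of Renewable Energy"},
    headers={"Authorization": "Bearer <your_clerk_session_token>"},
    timeout=600,  # article generation can take a while
)
response.raise_for_status()
print(response.json())
```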
# From project root - these commands handle directory changes automatically
make test # Run core tests (stable, for development)
make test-core # Same as 'make test' (explicit)
make test-api # Run API and system tests (may need setup)
make test-all # Run all tests including potentially unstable ones
Test Categories:
- Core Tests (`make test`): Stable functionality tests including services, authentication, and all agents (143 tests)
- API Tests (`make test-api`): Endpoint and system tests that may require environment setup (67 tests)
- All Tests (`make test-all`): Everything including potentially unstable tests (210+ tests)
# From src directory
cd src
uv run python scripts/run_tests.py # Core tests (default)
uv run python scripts/run_tests.py --api # API tests only
uv run python scripts/run_tests.py --all # All tests
uv run python scripts/run_tests.py --help # Show test runner help
# Raw pytest (not recommended - use run_tests.py instead)
cd src
uv run pytest tests/ -v # All tests with verbose output
uv run pytest tests/unittests/agents/ # Specific test modules
# From src directory
cd src
uv run ruff check --fix
uv run ruff format .
# Or use Makefile from project root
make lint
make format
# From src directory
cd src
uv add package-name
# Add development dependency
cd src
uv add --dev package-name
# From project root - these commands handle directory changes automatically
# Help and Information
make # Show all available commands (same as 'make help')
# Testing (choose based on your needs)
make test # Run core tests (recommended for development)
make test-core # Run core tests (same as above, explicit)
make test-api # Run API and system tests (may need setup)
make test-all # Run all tests including potentially unstable ones
# Code Quality
make lint # Run code linting
make format # Format code
make typecheck # Run type checking (if mypy is configured)
# Services
make backend # Start backend API server
make frontend # Start frontend development server
make all # Start both backend and frontend
# Dependencies and Cleanup
make install # Install all dependencies (backend + frontend)
make install-backend # Install backend dependencies only
make install-frontend # Install frontend dependencies only
make clean # Clean build artifacts and dependencies
Environment variables are loaded from `.env`:
# LLM Provider Configuration
LLM_PROVIDER=openai # or 'gemini' (defaults to 'openai')
# API Keys (choose based on LLM_PROVIDER)
OPENAI_API_KEY=your_openai_key # Required if LLM_PROVIDER=openai
GEMINI_API_KEY=your_gemini_key # Required if LLM_PROVIDER=gemini
# Search API
TAVILY_API_KEY=your_tavily_key # Required for web research
# Authentication (optional for API usage)
CLERK_SECRET_KEY=your_clerk_secret
# Database (optional, defaults to SQLite)
DATABASE_URL=sqlite:///./autoblogger.db
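These variables are normally loaded when the application starts. As a quick sanity check that they are visible from the `src` directory, a small script like the following can be used; it assumes `python-dotenv` is available, and the project itself may load `.env` differently.

```python
# Quick check that the keys in src/.env are visible to Python.
# Assumes python-dotenv is installed; the project may load .env another way.
from dotenv import load_dotenv
import os

load_dotenv()  # reads .env from the current working directory
for key in ("LLM_PROVIDER", "OPENAI_API_KEY", "GEMINI_API_KEY", "TAVILY_API_KEY"):
    print(key, "set" if os.getenv(key) else "missing")
```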
OpenAI Models:
- Fast Model: `gpt-4.1-nano-2025-04-14`
- Large Model: `gpt-4.1-nano-2025-04-14`

Gemini Models:
- Fast Model: `gemini-2.5-flash`
- Large Model: `gemini-2.5-flash`

The system automatically selects the appropriate models based on your `LLM_PROVIDER` setting.
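Conceptually, the selection works like the sketch below; the real logic lives in the core LLM services, and this helper is only an illustration of the default-to-OpenAI behaviour.

```python
# Illustrative only: the actual selection happens inside the core LLM services.
import os

MODELS = {
    "openai": {"fast": "gpt-4.1-nano-2025-04-14", "large": "gpt-4.1-nano-2025-04-14"},
    "gemini": {"fast": "gemini-2.5-flash", "large": "gemini-2.5-flash"},
}


def resolve_models() -> dict[str, str]:
    """Pick the model pair for the configured provider (defaults to OpenAI)."""
    provider = os.getenv("LLM_PROVIDER", "openai").lower()
    return MODELS.get(provider, MODELS["openai"])


print(resolve_models())
```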
Generated articles are saved to the `outputs/` directory:
- `topic_name.md` - The final article in Markdown format
- `topic_name_log.json` - Generation metadata and logs
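Both files can be read back after a run, for example with the snippet below. The exact keys inside the log JSON are not documented here, so it simply pretty-prints whatever it finds; the `topic` slug is a placeholder.

```python
# Read a generated article and its metadata log from outputs/.
# File names follow the pattern above; no log JSON keys are assumed.
import json
from pathlib import Path

topic = "your_topic_name"  # placeholder: matches the slug used for the files
article = Path(f"outputs/{topic}.md").read_text(encoding="utf-8")
log = json.loads(Path(f"outputs/{topic}_log.json").read_text(encoding="utf-8"))

print(article[:300])              # preview the article
print(json.dumps(log, indent=2))  # inspect generation metadata
```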
This project is licensed under the GNU Affero General Public License v3.0 - see the LICENSE file for details.
- Fork the repository
- Create a feature branch
- Make your changes
- Run tests and linting:
make test    # Run core tests
make lint    # Check code style
make format  # Format code
- For comprehensive testing before submitting:
make test-all # Run all tests
- Submit a pull request
For issues and questions, please open an issue on the GitHub repository.