# Algorithm Performance Analyzer

A web-based application that empirically measures and visualizes the real-world execution time of classical algorithms across different input sizes. This project bridges the gap between theoretical time complexity (Big-O) and actual runtime behavior observed in practice.
## Table of Contents
- Problem Statement
- Solution Overview
- System Architecture
- Features
- Algorithms Included
- Installation
- Usage
- Project Structure
- Benchmarking Methodology
- API Documentation
- Key Insights
- Limitations
- Future Improvements
## Problem Statement

In academic settings, algorithms are often compared only by theoretical complexity. However, in real systems:
- Constant factors matter - Two O(n log n) algorithms can have vastly different performance
- Input size affects performance differently - Some algorithms excel with small inputs, others with large
- Data distribution impacts behavior - Worst-case vs average-case scenarios differ significantly
This project allows users to observe and compare real execution time trends rather than relying solely on theory.
## Solution Overview

The Algorithm Performance Analyzer provides:
- Empirical Benchmarking - Measure actual execution time, not just theoretical complexity
- Visual Comparison - See how algorithms scale with increasing input sizes
- Scientific Methodology - Multiple runs with averaging to reduce noise
- Educational Tool - Understand the gap between theory and practice
## System Architecture

The project follows a client-server architecture with a clear separation of responsibilities:
```
┌──────────────────────────────────────────────────────┐
│               FRONTEND (React + Vite)                │
│   ┌──────────────────────────────────────────────┐   │
│   │ • React Components (modular UI)              │   │
│   │ • State Management (React Hooks)             │   │
│   │ • Chart.js Visualization (react-chartjs-2)   │   │
│   │ • Vite Dev Server (HMR)                      │   │
│   └──────────────────────────────────────────────┘   │
└──────────────────────────┬───────────────────────────┘
                           │ HTTP Request (POST /api/benchmark)
                           │ { "algorithm": "quickSort" }
                           ▼
┌──────────────────────────────────────────────────────┐
│             BACKEND (Node.js + Express)              │
│   ┌──────────────────────────────────────────────┐   │
│   │ 1. Receive algorithm selection               │   │
│   │ 2. Generate test data (various sizes)        │   │
│   │ 3. Run algorithm 5 times per size            │   │
│   │ 4. Measure execution time (high-resolution)  │   │
│   │ 5. Calculate averages                        │   │
│   │ 6. Return structured results                 │   │
│   └──────────────────────────────────────────────┘   │
└──────────────────────────────────────────────────────┘
```
**Frontend responsibilities:**

- Display algorithm selection UI
- Send API requests to backend
- Visualize results with interactive charts
- Show performance data in tables

**The frontend does not:**

- Run algorithms
- Measure execution time
- Perform heavy computation

**Backend responsibilities:**

- Execute all algorithms
- Generate test data
- Measure execution time with precision
- Calculate statistical averages
- Ensure fair benchmarking conditions
## Features

- 5 Classic Algorithms: Bubble Sort, Merge Sort, Quick Sort, Linear Search, Binary Search
- Multiple Input Sizes: Test with 100, 500, 1000, and 5000 elements
- Statistical Accuracy: 5 runs per input size with averaging
- Interactive Visualization: Beautiful charts showing performance trends
- Detailed Results: Tabular data with exact timing measurements
- Responsive Design: Works on desktop and mobile devices
- Real-time Benchmarking: See results as they're computed
## Algorithms Included

| Algorithm | Type | Time Complexity | Space Complexity |
|---|---|---|---|
| Bubble Sort | Sorting | O(n²) | O(1) |
| Merge Sort | Sorting | O(n log n) | O(n) |
| Quick Sort | Sorting | O(n log n) avg, O(n²) worst | O(log n) |
| Linear Search | Searching | O(n) | O(1) |
| Binary Search | Searching | O(log n) | O(1) |
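As a concrete example of the kind of code being benchmarked, the table's O(log n) entry can be implemented as follows. This is a generic sketch, not necessarily the project's `searching.js`:

```javascript
// Iterative binary search: returns the index of `target` in the sorted
// array `arr`, or -1 if absent. Halves the search range each step,
// hence O(log n) comparisons.
function binarySearch(arr, target) {
  let lo = 0;
  let hi = arr.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1; // integer midpoint
    if (arr[mid] === target) return mid;
    if (arr[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return -1;
}
```

Note that binary search requires sorted input, so a fair benchmark must sort (or generate sorted) data before timing the search itself.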
## Installation

### Prerequisites

- Node.js >= 14.0.0
- npm >= 6.0.0
1. Clone the repository

   ```bash
   git clone https://github.com/krishivsaini/Algorithm-Performance-Analyzer.git
   cd Algorithm-Performance-Analyzer
   ```

2. Install all dependencies

   ```bash
   npm run install:all
   ```

   This installs both backend and frontend (React) dependencies.

3. Development mode (recommended)

   ```bash
   npm run dev
   ```

   This starts both:

   - Backend API server on http://localhost:3000
   - React dev server with HMR on http://localhost:5173

4. Production mode

   ```bash
   npm run build   # Build React app
   npm start       # Start backend (serves React build)
   ```

5. Open your browser

   - Development: http://localhost:5173 (Vite dev server)
   - Production: http://localhost:3000 (Express serves React build)
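For reference, the root `package.json` scripts used above might be wired up roughly like this. The exact script wiring, the use of `concurrently`, and the `backend/server.js` entry point are assumptions, not the project's actual configuration:

```json
{
  "scripts": {
    "install:all": "npm install && npm install --prefix client",
    "dev": "concurrently \"npm run dev:server\" \"npm run dev --prefix client\"",
    "dev:server": "node backend/server.js",
    "build": "npm run build --prefix client",
    "start": "node backend/server.js"
  }
}
```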
## Usage

1. Select an Algorithm

   - Open the application in your browser
   - Choose an algorithm from the dropdown menu
   - View the algorithm's complexity information

2. Run Benchmark

   - Click the "Run Benchmark" button
   - Wait a few seconds for the backend to complete the measurements
   - The loading indicator shows progress

3. Analyze Results

   - View the interactive line chart showing execution time vs input size
   - Check the detailed results table for exact timings
   - Compare the empirical results with theoretical complexity
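Under the hood, clicking "Run Benchmark" sends a `POST /api/benchmark` request and feeds the results to Chart.js. A minimal sketch of that client call, assuming hypothetical helper names (`runBenchmark`, `toChartData`) and a dev-server proxy for `/api`:

```javascript
// Hypothetical client-side helpers (illustrative, not the project's
// actual React code).

// POST the selected algorithm to the backend and return parsed JSON.
async function runBenchmark(algorithm) {
  const res = await fetch('/api/benchmark', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ algorithm }),
  });
  if (!res.ok) throw new Error(`Benchmark failed: ${res.status}`);
  return res.json();
}

// Shape the { n, time } results into Chart.js labels/data arrays.
function toChartData(results) {
  return {
    labels: results.map((r) => r.n),
    data: results.map((r) => r.time),
  };
}
```

Keeping the fetch and the chart-shaping logic separate makes the latter trivially unit-testable without a running backend.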
## Project Structure

```
Algorithm-Performance-Analyzer/
│
├── backend/
│   ├── algorithms/
│   │   ├── sorting.js          # Bubble, Merge, Quick Sort
│   │   └── searching.js        # Linear, Binary Search
│   └── benchmark.js            # Benchmarking engine
│
├── client/                     # React + Vite frontend
│   ├── src/
│   │   ├── components/
│   │   │   ├── Header.jsx
│   │   │   ├── InfoSection.jsx
│   │   │   ├── ControlSection.jsx
│   │   │   ├── LoadingIndicator.jsx
│   │   │   ├── ResultsSection.jsx
│   │   │   └── Footer.jsx
│   │   ├── App.jsx             # Main React component
│   │   ├── main.jsx            # React entry point
│   │   └── index.css           # Global styles
│   ├── index.html              # HTML template
│   ├── vite.config.js          # Vite configuration
│   └── package.json            # Frontend dependencies
│
├── package.json                # Root dependencies and scripts
├── .gitignore
├── LICENSE
└── README.md
```
## Benchmarking Methodology

- Fixed Input Sizes - `[100, 500, 1000, 5000]`
- Generate Fresh Data - Random array created for each run
- High-Resolution Timing - Uses `performance.now()` for microsecond precision
- Multiple Runs - Each algorithm runs 5 times per input size
- Statistical Averaging - Results averaged to reduce system noise
- Data Collection - Structured JSON response sent to frontend

For n = 1000:

```
Run 1: 8.234 ms
Run 2: 8.156 ms
Run 3: 8.301 ms
Run 4: 8.189 ms
Run 5: 8.245 ms
Average: 8.225 ms → Reported Result
```

### Why 5 runs?

- Reduces System Noise - Background processes, garbage collection
- More Stable Results - Consistent, reproducible measurements
- Realistic Performance - Reflects typical behavior, not edge cases
## API Documentation

### List algorithms

Returns the list of available algorithms.

Response:

```json
{
  "algorithms": [
    {
      "id": "bubbleSort",
      "name": "Bubble Sort",
      "complexity": "O(n²)",
      "type": "sorting"
    },
    ...
  ]
}
```

### `POST /api/benchmark`

Runs the benchmark for the specified algorithm.
Request:
```json
{
  "algorithm": "quickSort"
}
```

Response:
```json
{
  "algorithm": "quickSort",
  "results": [
    { "n": 100, "time": 0.8234 },
    { "n": 500, "time": 4.2156 },
    { "n": 1000, "time": 9.1023 },
    { "n": 5000, "time": 52.4567 }
  ]
}
```

### Health check

Health check endpoint.

Response:

```json
{
  "status": "ok",
  "message": "Algorithm Performance Analyzer API is running"
}
```

## Key Insights

This project demonstrates several important concepts:
1. Theory vs Practice

   - Algorithms with the same Big-O can perform differently
   - Constant factors and implementation details matter

2. Quick Sort vs Merge Sort

   - Quick Sort is often faster on average despite the same O(n log n) average-case complexity
   - Merge Sort provides more predictable, stable performance

3. Scalability Observation

   - See exactly how algorithms scale with real data
   - Visualize the difference between O(n), O(n log n), and O(n²)

4. Real-World Performance

   - System factors affect benchmarks
   - Theoretical analysis alone is insufficient
## Limitations

- Memory Usage Not Measured - Currently only tracks execution time
- Runtime Variability - Results influenced by JavaScript engine and system load
- Single-Threaded - No parallel execution testing
- Limited Algorithms - Only 5 classic algorithms currently included
- Input Distribution - Only random data tested, not worst/best cases
## Future Improvements

- Memory Profiling - Track space complexity alongside time
- More Algorithms - Graph algorithms, dynamic programming, etc.
- Multiple Languages - Compare JavaScript vs Python vs C++
- Best/Worst Case Testing - Pre-sorted, reverse-sorted inputs
- CPU Isolation - More controlled benchmarking environment
- Historical Comparison - Save and compare multiple benchmark runs
- Algorithm Animation - Visualize how algorithms work
- Custom Input - Allow users to provide their own test data
## License

This project is licensed under the MIT License - see the LICENSE file for details.
## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.
## Author

Krishiv Saini

Built to demonstrate the difference between theoretical complexity and empirical performance.