A comprehensive web application built with Flask that uses Hugging Face AI models to generate interview questions and intelligently evaluate answers. Perfect for preparing for technical interviews across various subjects.
- Generate interview questions on any subject using Hugging Face's Flan-T5 model
- User-friendly interface for answering questions
- AI-powered evaluation of answers using Sentence Transformers
- Detailed feedback with correctness rating, comments, and score
- Responsive design using Bootstrap
- Clone the repository:

      git clone https://github.com/yourusername/AI-Question-Generater.git
      cd AI-Question-Generater

- Create a virtual environment and activate it:

      python -m venv venv
      # On Windows
      venv\Scripts\activate
      # On macOS/Linux
      source venv/bin/activate

- Install the required dependencies:

      pip install -r requirements.txt

- Start the Flask application:

      python app.py
- Open your web browser and navigate to http://localhost:5000
- Enter a subject for interview questions (e.g., "DBMS", "Python", "Machine Learning")
- Answer the generated questions in the provided text boxes
- Submit your answers for evaluation
- View your evaluation results with feedback and ratings
- Sign up for a PythonAnywhere account
- Upload your project files
- Set up a new web app with Flask
- Configure the WSGI file to point to your app.py
- Install the requirements using the PythonAnywhere console
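On PythonAnywhere, the WSGI file is edited from the Web tab. A minimal sketch, assuming the project was uploaded to /home/yourusername/AI-Question-Generater (adjust the path to your own account):

```python
# PythonAnywhere WSGI configuration file
# The project path below is an assumption -- edit it to match your account.
import sys

project_home = '/home/yourusername/AI-Question-Generater'
if project_home not in sys.path:
    sys.path.insert(0, project_home)

# PythonAnywhere serves the module-level variable named `application`
from app import app as application
```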
There are two ways to deploy to Render:
- Fork or push this repository to your GitHub account
- Sign up for a Render account
- In Render, click "New+" and select "Web Service"
- Connect your GitHub account and select your repository
- Render will automatically detect the configuration in render.yaml
- Click "Create Web Service"
If you prefer to set up manually:
- Sign up for a Render account
- In Render, click "New+" and select "Web Service"
- Connect your GitHub repository
- Configure your service:
- Name: Choose a name for your service
- Environment: Python 3
- Region: Ohio (or your preferred region)
- Branch: main (or your default branch)
- Build Command: bash ./build.sh
- Start Command: gunicorn app:app
- Add these Environment Variables:
  - SECRET_KEY: generate a secure random string
  - DEBUG: false
  - USE_MODELS: false
  - MODEL_CACHE_DIR: /tmp/models
  - RENDER: true
- Click "Create Web Service"
Note: The application is configured to run in mock data mode on Render to avoid compilation issues with machine learning libraries. This still provides a great user experience with sample questions and evaluations.
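The mock-data toggle described in the note can be driven by the environment variables listed above. A minimal sketch of reading them at startup (the `env_flag` helper is illustrative, not the app's actual code; the variable names match the Render settings):

```python
import os

def env_flag(name, default=False):
    """Interpret an environment variable as a boolean flag."""
    value = os.environ.get(name)
    if value is None:
        return default
    return value.strip().lower() in ("1", "true", "yes", "on")

# Mirrors the Render settings: models stay disabled (mock mode) unless USE_MODELS=true.
USE_MODELS = env_flag("USE_MODELS", default=False)
DEBUG = env_flag("DEBUG", default=False)
MODEL_CACHE_DIR = os.environ.get("MODEL_CACHE_DIR", "/tmp/models")
```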
- Question Generation: Uses the T5 model to generate relevant interview questions based on the user's chosen subject
- Answer Evaluation: Uses the T5 model to evaluate user answers and provide feedback
- Results Display: Shows a detailed evaluation with correctness, feedback, and rating for each answer
The application uses prompts like these for evaluating answers:
- "Evaluate this answer to the question '{question}'. The answer is: '{answer}'"
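Filling that template is a small formatting step; a sketch (the helper name `build_eval_prompt` is illustrative):

```python
def build_eval_prompt(question, answer):
    """Fill the evaluation prompt template shown above."""
    return (
        f"Evaluate this answer to the question '{question}'. "
        f"The answer is: '{answer}'"
    )
```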
The model then returns an evaluation that is processed to extract:
- Correctness (Correct / Incorrect / Partially correct)
- Feedback comments
- Rating out of 5
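The extraction step above can be sketched as a small parser over the model's free-text reply. This is illustrative only (the function name, field names, and patterns are assumptions, not the app's actual code):

```python
import re

def parse_evaluation(text):
    """Extract correctness, a /5 rating, and feedback from a free-text evaluation."""
    lowered = text.lower()
    # Check the most specific label first so "partially correct" isn't
    # misread as "correct", and "incorrect" isn't misread either.
    if "partially correct" in lowered:
        correctness = "Partially correct"
    elif "incorrect" in lowered:
        correctness = "Incorrect"
    elif "correct" in lowered:
        correctness = "Correct"
    else:
        correctness = "Unknown"

    # Rating: first pattern like "3/5" or "4.5 / 5".
    match = re.search(r"(\d(?:\.\d)?)\s*/\s*5", text)
    rating = float(match.group(1)) if match else None

    return {"correctness": correctness, "rating": rating, "feedback": text.strip()}
```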
- Backend: Flask (Python)
- Frontend: HTML, CSS, JavaScript, Bootstrap
- AI Models: HuggingFace Transformers (GPT-2, T5)
- Deployment: Ready for Render or PythonAnywhere
MIT License
Note: This application uses pre-trained models and may need fine-tuning for optimal performance in a production environment.