electrical-classification-distilbert-base-uncased
Model description
This model is fine-tuned from distilbert/distilbert-base-uncased for text classification, specifically sentiment analysis of customer feedback on electrical devices such as circuit breakers, transformers, smart meters, inverters, solar panels, and power strips. The model classifies sentiment into the categories Positive, Negative, Neutral, and Mixed with high precision and recall, making it well suited for analyzing product reviews, customer surveys, and other feedback to derive actionable insights.
Training Data
The model was trained on the disham993/ElectricalDeviceFeedbackBalanced dataset, which has been carefully balanced to address class imbalance in the original dataset, disham993/ElectricalDeviceFeedback.
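For reference, the balanced dataset can be loaded directly with the datasets library. This is a minimal sketch; the available splits and column names are assumptions and should be checked against the actual dataset configuration.

from datasets import load_dataset

# Load the balanced feedback dataset from the Hugging Face Hub
dataset = load_dataset("disham993/ElectricalDeviceFeedbackBalanced")
print(dataset)  # inspect the available splits and columns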
Model Details
- Base Model: distilbert/distilbert-base-uncased
- Task: text-classification
- Language: en
- Dataset: disham993/ElectricalDeviceFeedbackBalanced
Training procedure
Training hyperparameters
The model was fine-tuned using the following hyperparameters (a configuration sketch follows the list):
- Evaluation Strategy: epoch
- Learning Rate: 1e-5
- Batch Size: 64 (for both training and evaluation)
- Number of Epochs: 5
- Weight Decay: 0.01
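The sketch below shows how these hyperparameters map onto a standard Trainer setup. It is not the exact training script: the split names, the "text" and "label" column names, num_labels=4, and the eval_strategy argument name (called evaluation_strategy in older transformers releases) are assumptions made for illustration.

from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    DataCollatorWithPadding,
    TrainingArguments,
    Trainer,
)

dataset = load_dataset("disham993/ElectricalDeviceFeedbackBalanced")
tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased")

def tokenize(batch):
    # Assumes the feedback text lives in a "text" column
    return tokenizer(batch["text"], truncation=True)

tokenized = dataset.map(tokenize, batched=True)

# Four sentiment classes (Positive, Negative, Neutral, Mixed) are assumed
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert/distilbert-base-uncased", num_labels=4
)

training_args = TrainingArguments(
    output_dir="electrical-classification-distilbert-base-uncased",
    eval_strategy="epoch",           # Evaluation Strategy: epoch
    learning_rate=1e-5,              # Learning Rate: 1e-5
    per_device_train_batch_size=64,  # Batch Size: 64 (training)
    per_device_eval_batch_size=64,   # Batch Size: 64 (evaluation)
    num_train_epochs=5,              # Number of Epochs: 5
    weight_decay=0.01,               # Weight Decay: 0.01
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],  # split name is an assumption
    data_collator=DataCollatorWithPadding(tokenizer=tokenizer),
)
trainer.train()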
Evaluation results
The following metrics were achieved during evaluation (a sketch of the metric computation follows the list):
- F1 Score: 0.8780
- Accuracy: 0.8794
- Evaluation Runtime: 0.4649 seconds
- Evaluation Samples per Second: 2908.428
- Evaluation Steps per Second: 47.326
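The F1 score and accuracy above are standard classification metrics. A minimal sketch of a compute_metrics function that could produce them with the evaluate library is shown below; the averaging mode for F1 is not stated in the card, so "weighted" is an assumption.

import numpy as np
import evaluate

accuracy_metric = evaluate.load("accuracy")
f1_metric = evaluate.load("f1")

def compute_metrics(eval_pred):
    # eval_pred is a (logits, labels) tuple provided by the Trainer
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_metric.compute(
            predictions=predictions, references=labels
        )["accuracy"],
        "f1": f1_metric.compute(
            predictions=predictions, references=labels, average="weighted"
        )["f1"],
    }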
Usage
You can use this model for sentiment analysis of electrical device feedback as follows:
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

model_name = "disham993/electrical-classification-distilbert-base"

# Load the fine-tuned tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Build a text-classification pipeline for sentiment analysis
nlp = pipeline("text-classification", model=model, tokenizer=tokenizer)

text = "The new washing machine is efficient but produces a bit of noise."
classification_results = nlp(text)
print(classification_results)
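The pipeline returns a list of dictionaries with the predicted label and its confidence score, for example [{'label': 'Mixed', 'score': 0.93}] (illustrative values; the exact label strings depend on the model's id2label mapping).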
Limitations and bias
The dataset includes synthetic data generated using Llama 3.1:8b, and despite careful optimization and prompt engineering, the model is not immune to errors in labeling. Additionally, as LLM technology is still in its early stages, there may be inherent inaccuracies or biases in the generated data that can impact the model's performance.
This model is intended for research and educational purposes only, and users are encouraged to validate results before applying them to critical applications.
Training Infrastructure
For a complete guide covering the entire process - from data tokenization to pushing the model to the Hugging Face Hub - please refer to the GitHub repository.
Last update
2025-01-05