Overview
This model, developed by student-abdullah, is a fine-tuned version of the meta-llama/Llama-3.2-1B base model. It is intended to generate accurate, relevant responses about generic medications, specifically within the context of India's PMBJP (Pradhan Mantri Bhartiya Janaushadhi Pariyojana) scheme. Fine-tuning used a Llama Q&A prompt template with a learning rate of 1.5e-4 and a LoRA rank of 128.
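For context, a minimal LoRA setup along these lines might look like the sketch below. Only the learning rate and LoRA rank come from the card; the target modules, alpha, dropout, batch size, and output path are assumptions, and the dataset and Trainer wiring are omitted.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from peft import LoraConfig, get_peft_model

base_id = "meta-llama/Llama-3.2-1B"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

lora_config = LoraConfig(
    r=128,                                    # LoRA rank stated in the card
    lora_alpha=256,                           # assumption: often set to 2 * r
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    lora_dropout=0.05,                        # assumption
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

training_args = TrainingArguments(
    output_dir="llama-3.2-1b-pmbjp-lora",     # hypothetical output path
    learning_rate=1.5e-4,                     # learning rate stated in the card
    per_device_train_batch_size=4,            # assumption
    logging_steps=50,
)
```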
Key Capabilities
- Specialized Medical Q&A: Optimized for queries concerning generic medications under the PMBJP scheme (see the inference sketch after this list).
- Hinglish Support: Designed to handle medical information in Hinglish, catering to a specific linguistic demographic.
- Fine-tuned Performance: Reached a training loss of 0.1207 at the final (800th) epoch, indicating effective learning on its specialized dataset.
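A minimal inference sketch is shown below. The repository id is a placeholder (substitute the actual fine-tuned checkpoint), and the exact Llama Q&A prompt template used during fine-tuning is omitted here.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "student-abdullah/Llama-3.2-1B-pmbjp"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Example Hinglish query about a generic medication under the PMBJP scheme
prompt = "Paracetamol ka generic substitute Janaushadhi Kendra par milta hai kya?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```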
Limitations
- Token Limit: A maximum context of 512 tokens may restrict its ability to process very long queries or extensive contexts (a minimal truncation sketch follows this list).
- Training Data Dependency: Performance depends heavily on the quality and coverage of the fine-tuning dataset, so the model may not generalize to medications or medical contexts absent from the training data.
- Potential Biases: Like any model trained on specific datasets, it may exhibit biases inherent in the fine-tuning data.
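One way to stay within the 512-token limit is to truncate at tokenization time. The sketch below reuses the hypothetical checkpoint id from the inference example.

```python
from transformers import AutoTokenizer

model_id = "student-abdullah/Llama-3.2-1B-pmbjp"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)

long_query = "..."  # any user query that may exceed the context window
# Truncate to the 512-token maximum so inputs never exceed the model's limit
inputs = tokenizer(long_query, return_tensors="pt", truncation=True, max_length=512)
```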