andstor/meta-llama-CodeLlama-7b-hf-unit-test-fine-tuning
This model is a 7-billion-parameter, CodeLlama-based language model fine-tuned by andstor. It is optimized specifically for unit test generation, achieving 67.45% accuracy on its evaluation set. The model builds on the CodeLlama-7b-hf architecture with a 4096-token context length, making it suitable for code-related tasks, particularly test case creation.
Model Overview
This model, andstor/meta-llama-CodeLlama-7b-hf-unit-test-fine-tuning, is a specialized language model built on the meta-llama/CodeLlama-7b-hf architecture. It was fine-tuned by andstor on the andstor/methods2test_small dataset, which maps focal methods to their corresponding unit tests.
Key Capabilities
- Unit Test Generation: The primary capability of this model is generating unit tests, as indicated by its fine-tuning on a test-centric dataset.
- CodeLlama Foundation: Benefits from the robust code understanding and generation capabilities inherent in the CodeLlama-7b-hf base model.
- Performance: Achieved a loss of 0.5437 and an accuracy of 67.45% on its evaluation set, suggesting proficiency in its fine-tuned task.
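A minimal inference sketch using the Hugging Face transformers library is shown below. The prompt-building convention is an assumption; check the andstor/methods2test_small dataset card for the exact input layout used during fine-tuning.

```python
# Sketch: generating a unit test for a focal method with this model.
# Assumptions: the model completes a raw focal method into a test
# (the exact prompt format used during fine-tuning may differ).

MODEL_ID = "andstor/meta-llama-CodeLlama-7b-hf-unit-test-fine-tuning"


def build_prompt(focal_method: str) -> str:
    """Wrap a focal method as a completion prompt (assumed format)."""
    return focal_method.rstrip() + "\n"


def generate_test(focal_method: str, max_new_tokens: int = 256) -> str:
    # Imported lazily so the prompt helper stays dependency-free.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(build_prompt(focal_method), return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    # Strip the prompt tokens so only the generated test body remains.
    return tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )


if __name__ == "__main__":
    focal = "public int add(int a, int b) { return a + b; }"
    print(generate_test(focal))
```

Greedy decoding (`do_sample=False`) is used here because test generation typically benefits from deterministic output; sampling can be enabled for more varied candidate tests.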
Training Details
The model was trained with a learning rate of 5e-05, a per-device batch size of 1, and 8 gradient accumulation steps for 3 epochs; the reported effective batch size of 16 corresponds to training on two devices. It used the Adam optimizer and a linear learning rate scheduler with a 0.1 warmup ratio.
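The configuration above can be collected into a small sketch; the field names mirror transformers' `TrainingArguments` (an assumption about how the run was configured), and the helper makes the effective-batch-size arithmetic explicit.

```python
# Hyperparameters transcribed from the model card, keyed by the
# corresponding (assumed) transformers TrainingArguments field names.
hyperparameters = {
    "learning_rate": 5e-05,
    "per_device_train_batch_size": 1,
    "gradient_accumulation_steps": 8,
    "num_train_epochs": 3,
    "lr_scheduler_type": "linear",
    "warmup_ratio": 0.1,
}


def effective_batch_size(hp: dict, num_devices: int = 1) -> int:
    """Samples seen per optimizer step: per-device batch x accumulation x devices."""
    return (
        hp["per_device_train_batch_size"]
        * hp["gradient_accumulation_steps"]
        * num_devices
    )
```

With these values, one device yields an effective batch size of 8, and the card's reported total of 16 follows from two devices.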
Intended Use Cases
This model is particularly well-suited for developers and researchers working on automated unit test generation, code quality assurance, or other tasks requiring a strong understanding of code structure for testing purposes. Its specialization makes it a strong candidate for integration into development workflows that benefit from AI-assisted test creation.