andstor/Qwen-Qwen2.5-Coder-14B-unit-test-fine-tuning
Task: Text generation · Concurrency cost: 1 · Model size: 14.8B parameters · Quantization: FP8 · Context length: 32k · Published: Sep 24, 2025 · License: apache-2.0 · Architecture: Transformer · Open weights

The andstor/Qwen-Qwen2.5-Coder-14B-unit-test-fine-tuning model is a 14.8 billion parameter Qwen2.5-Coder-14B variant, fine-tuned by andstor. This model specializes in code generation, specifically optimized for unit test creation and related coding tasks. It was fine-tuned on the andstor/methods2test_small dataset, achieving an accuracy of 0.6331 on its evaluation set. Its primary application is enhancing code development workflows through specialized unit test generation.


Model Overview

This model, andstor/Qwen-Qwen2.5-Coder-14B-unit-test-fine-tuning, is a specialized version of the Qwen2.5-Coder-14B architecture, developed by andstor. It features 14.8 billion parameters and was fine-tuned with a focus on code-related tasks, particularly unit test generation.

Key Capabilities

  • Code Generation: Excels in generating code, building upon the base Qwen2.5-Coder-14B model's capabilities.
  • Unit Test Fine-tuning: Specifically optimized for tasks related to creating and understanding unit tests, leveraging the andstor/methods2test_small dataset.
  • Performance: Achieved an accuracy of 0.6331 and a loss of 0.8498 on its evaluation set, indicating its proficiency in the fine-tuned domain.
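Since the fine-tuning dataset (andstor/methods2test_small) pairs Java focal methods with their unit tests, a natural way to use the model is to present the method under test and ask for a test. Below is a minimal prompt-builder sketch; the exact prompt wording and format are assumptions for illustration, not taken from the model card:

```python
def build_unit_test_prompt(focal_method: str, class_name: str) -> str:
    """Wrap a Java focal method in a unit-test-generation prompt.

    The prompt wording here is hypothetical; the model card does not
    specify the exact input format used during fine-tuning.
    """
    return (
        f"// Class under test: {class_name}\n"
        f"{focal_method}\n"
        "// Write a JUnit test for the method above.\n"
    )


prompt = build_unit_test_prompt(
    focal_method="public int add(int a, int b) { return a + b; }",
    class_name="Calculator",
)
print(prompt)
```

The resulting string can then be passed to the model through any standard text-generation interface.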

Training Details

The model was trained for 3 epochs with a learning rate of 5e-05, a per-device batch size of 1, and 8 gradient accumulation steps (the reported total batch size of 32 implies training was distributed across multiple devices). Training used the ADAMW_TORCH_FUSED optimizer and a linear learning-rate scheduler with a warmup ratio of 0.1.
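The effective batch size and the warmup schedule follow directly from these numbers. A small sketch of the arithmetic, where the device count is inferred from the reported total batch size rather than stated in the card:

```python
# Hyperparameters from the training summary.
per_device_batch_size = 1
gradient_accumulation_steps = 8
total_batch_size = 32  # as reported

# Device count is inferred (assumption): total / (per-device * accumulation).
num_devices = total_batch_size // (per_device_batch_size * gradient_accumulation_steps)

base_lr = 5e-05
warmup_ratio = 0.1


def linear_schedule_lr(step: int, total_steps: int) -> float:
    """Linear warmup over the first warmup_ratio of steps, then linear decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)


print(num_devices)  # 4 devices implied by the reported totals
print(linear_schedule_lr(100, 1000))  # peak learning rate at end of warmup
```

This mirrors the behavior of a standard linear scheduler with warmup; it is illustrative only, not the training code used for this model.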

Good For

  • Automated Unit Test Generation: Developers looking to automate or assist in the creation of unit tests for existing codebases.
  • Code Development Workflows: Integrating into CI/CD pipelines or IDEs to enhance code quality and testing efficiency.
  • Research in Code Generation: As a base for further experimentation in specialized code generation tasks, particularly those involving testing methodologies.