mlfoundations-dev/qwen_s1ablation_length_filter_27k

Text Generation · Model Size: 7.6B · Quantization: FP8 · Context Length: 32k · License: apache-2.0 · Architecture: Transformer · Open Weights

The mlfoundations-dev/qwen_s1ablation_length_filter_27k model is a 7.6 billion parameter language model fine-tuned from Qwen/Qwen2.5-7B-Instruct on the mlfoundations-dev/s1_ablation_length_filtering_27k dataset. It supports a context length of 131,072 tokens and is adapted to tasks and data characteristics matching its fine-tuning dataset.


Model Overview

This model, qwen_s1ablation_length_filter_27k, is a fine-tuned variant of the 7.6 billion parameter Qwen2.5-7B-Instruct model. It was trained on the mlfoundations-dev/s1_ablation_length_filtering_27k dataset, so its behavior is shaped by the tasks and data characteristics of that dataset. The model supports a context length of 131,072 tokens.
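Below is a minimal usage sketch with Hugging Face transformers. The prompt text and generation settings are illustrative, and the chat template is assumed to be inherited from the base Qwen/Qwen2.5-7B-Instruct checkpoint; this is a sketch, not an official usage recipe.

```python
# Minimal sketch: loading and querying the model with transformers.
# Assumes the checkpoint ships a Qwen2.5-style chat template, since it
# is fine-tuned from Qwen/Qwen2.5-7B-Instruct.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlfoundations-dev/qwen_s1ablation_length_filter_27k"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # a 7.6B model in bf16 fits on one 24+ GB GPU
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain length filtering in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```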

Training Details

The fine-tuning process used the following hyperparameters; a configuration sketch follows the list.

  • Learning Rate: 1e-05
  • Batch Sizes: per-device train_batch_size of 2 and eval_batch_size of 2, with gradient_accumulation_steps of 6, yielding a total_train_batch_size of 96 (implying 8 devices, since 2 × 6 × 8 = 96).
  • Optimizer: AdamW with default betas and epsilon.
  • Scheduler: Cosine learning rate scheduler with a 0.1 warmup ratio.
  • Epochs: 3.0
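
The hyperparameters above can be expressed as a Hugging Face TrainingArguments configuration. This is a hedged reconstruction, not the published training script; the 8-device count is inferred from the batch-size arithmetic, and the output_dir name is illustrative.

```python
# A sketch of the reported hyperparameters as transformers TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="qwen_s1ablation_length_filter_27k",  # illustrative path
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=6,  # effective batch: 2 * 6 * 8 devices = 96
    num_train_epochs=3.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    optim="adamw_torch",  # AdamW with default betas and epsilon
)
```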

Intended Use

Given its fine-tuning on a specific dataset, this model is best suited for applications that match the content and characteristics of the mlfoundations-dev/s1_ablation_length_filtering_27k dataset. Users should review that dataset to judge whether the model fits their use case.