Model Overview
DCAgent/d1_constrain_then_harden_top4_seq_glm47 is an 8-billion-parameter language model fine-tuned from the base Qwen/Qwen3-8B architecture. Its 32768-token context window lets it process and generate long sequences of text.
Key Characteristics
- Base Model: Fine-tuned from Qwen/Qwen3-8B.
- Parameter Count: 8 billion parameters.
- Context Length: Supports up to 32768 tokens.
- Specialized Fine-tuning: Trained on a thinking-preprocessed snapshot of the DCAgent/d1_constrain_then_harden_top4_seq_glm47_traces dataset (snapshot 1a728241c6756d943c544d4d2a4c2f9dd74ba196), indicating a focus on sequential or constrained reasoning tasks.
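As a minimal usage sketch, assuming the standard Hugging Face transformers chat-template API and that the repository ID matches the model name above:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository ID taken from the model name on this card.
model_id = "DCAgent/d1_constrain_then_harden_top4_seq_glm47"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 inference, common for Qwen3-class models
    device_map="auto",
)

messages = [{"role": "user", "content": "Outline the steps to sort a list under a memory limit."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```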
Training Details
The model was trained with a learning rate of 4e-05 using the AdamW optimizer. Training ran for 7 epochs with a total batch size of 16 distributed across 16 GPUs, under a cosine learning rate scheduler with a warmup ratio of 0.1. The software stack comprised Transformers 4.57.6, PyTorch 2.9.1+cu130, Datasets 4.7.0, and Tokenizers 0.22.2.
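For reference, the reported hyperparameters correspond roughly to a transformers TrainingArguments configuration like the one below. This is an illustrative reconstruction, not the actual training script; the output directory, per-device batch size, and bf16 flag are assumptions.

```python
from transformers import TrainingArguments

# Illustrative reconstruction of the reported hyperparameters.
# output_dir, per-device batch size, and bf16 are assumptions.
training_args = TrainingArguments(
    output_dir="./d1_constrain_then_harden_top4_seq_glm47",  # hypothetical path
    learning_rate=4e-05,
    num_train_epochs=7,
    per_device_train_batch_size=1,  # assumption: 1 per GPU x 16 GPUs = total batch size 16
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    optim="adamw_torch",
    bf16=True,  # assumption: mixed-precision training
)
```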
Potential Use Cases
Given its specialized fine-tuning dataset, this model is likely optimized for applications requiring:
- Sequential Decision Making: Generating responses or actions as a step-by-step process.
- Constrained Problem Solving: Adhering to explicit rules or parameters in its outputs; a hypothetical prompt illustrating this style follows the list.
- Reasoning Tasks: Workloads that benefit from structured, trace-based training data.
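As an illustration of the constrained, sequential style these use cases imply, a chat prompt might be structured as follows. The system instruction and task are hypothetical examples, not drawn from the training data:

```python
# Hypothetical prompt illustrating constrained, step-by-step output.
# The rules and the task below are illustrative, not from the training set.
messages = [
    {
        "role": "system",
        "content": (
            "Solve the task in numbered steps. Constraints: use at most "
            "five steps, and end with a line beginning 'FINAL:' that "
            "states the result."
        ),
    },
    {"role": "user", "content": "Plan a migration of a SQLite database to PostgreSQL."},
]
# This `messages` list can be passed to tokenizer.apply_chat_template
# as in the loading sketch above.
```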