laion/coderforge-316-opt1k__Qwen3-8B
laion/coderforge-316-opt1k__Qwen3-8B is an 8-billion-parameter language model fine-tuned from the Qwen/Qwen3-8B base model. It was trained on the laion/coderforge-preview-unified-316 dataset, suggesting an optimization for code-related tasks, and its 32K-token context length makes it suitable for applications that require extensive code understanding and generation.
Model Overview
laion/coderforge-316-opt1k__Qwen3-8B is an 8-billion-parameter language model fine-tuned from the base Qwen/Qwen3-8B model. It was specialized by training on a thinking-preprocessed snapshot of the laion/coderforge-preview-unified-316 dataset (snapshot fa2a54ec5181dbb783c5bda19f21f30100990639). While specific details on its intended uses and limitations are not yet fully documented, the "coderforge" training data strongly implies a focus on code-related applications.
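As a quick illustration of how a fine-tune like this would typically be used, the sketch below loads the model with the standard `transformers` causal-LM API and generates a short code completion from a chat-formatted prompt. Only the repository id comes from this card; the dtype, device placement, and generation settings are illustrative assumptions, not settings prescribed by the model authors.

```python
# Minimal inference sketch (assumes standard transformers causal-LM loading and a
# Qwen3-style chat template; generation settings below are illustrative only).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "laion/coderforge-316-opt1k__Qwen3-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision keeps the 8B model on a single large GPU
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```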
Training Details
The model underwent training with the following key hyperparameters:
- Learning Rate: 4e-05
- Batch Size: 1 (train), 8 (eval)
- Total Train Batch Size: 96 (with 3 gradient accumulation steps across 32 devices)
- Optimizer: AdamW_Torch_Fused with betas=(0.85, 0.98) and epsilon=1e-08
- LR Scheduler: Cosine type with a 0.1 warmup ratio
- Epochs: 7.0
The fine-tuning run used recent versions of the standard ML frameworks: Transformers 4.57.6, PyTorch 2.9.1+cu130, Datasets 4.7.0, and Tokenizers 0.22.2.
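For readers who want to see how the hyperparameters listed above map onto a concrete configuration, the sketch below expresses them as Hugging Face `TrainingArguments`. The effective batch size of 96 follows from 1 sample per device across 32 devices with 3 accumulation steps; output paths, precision, and anything else not stated on this card are placeholders rather than settings from the original run.

```python
# Hypothetical TrainingArguments mirroring the hyperparameters listed above.
# Paths and the precision flag are placeholders, not taken from the original run.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="coderforge-316-opt1k__Qwen3-8B",  # placeholder output path
    learning_rate=4e-05,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=3,   # 1 sample x 32 devices x 3 steps = effective batch of 96
    num_train_epochs=7.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    optim="adamw_torch_fused",
    adam_beta1=0.85,
    adam_beta2=0.98,
    adam_epsilon=1e-08,
    bf16=True,                       # assumption: mixed precision is not stated on the card
)
```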