dustntn10/day1-train-model
Text Generation | Concurrency Cost: 1 | Model Size: 0.5B | Quant: BF16 | Ctx Length: 32k | Published: Apr 8, 2026 | License: apache-2.0 | Architecture: Transformer | Open Weights

dustntn10/day1-train-model is a 0.5 billion parameter Qwen2.5-Instruct causal language model, developed by dustntn10 and fine-tuned from unsloth/Qwen2.5-0.5B-Instruct-unsloth-bnb-4bit. It was trained with Unsloth and Hugging Face's TRL library, which enables roughly 2x faster fine-tuning. With a context length of 32768 tokens, it is optimized for efficient instruction-following tasks.
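A minimal usage sketch follows, assuming the weights are published on the Hugging Face Hub under the repo id dustntn10/day1-train-model and load through the standard transformers causal-LM APIs; the prompt text is illustrative only.

```python
# Minimal sketch of loading and prompting the model with transformers.
# Assumes the repo id below resolves on the Hugging Face Hub and that
# accelerate is installed for device_map="auto".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dustntn10/day1-train-model"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 precision listed above
    device_map="auto",
)

# Qwen2.5-Instruct models expect chat-formatted prompts via the chat template.
messages = [{"role": "user", "content": "Summarize what a causal language model is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```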
