LumosJiang/Qwen3-8B-Base-SFT-AM-Thinking-v1-Distilled-Code-1800steps
LumosJiang/Qwen3-8B-Base-SFT-AM-Thinking-v1-Distilled-Code-1800steps is an 8-billion-parameter Qwen3-based causal language model fine-tuned by LumosJiang. It specializes in code generation and reasoning, and was trained on a high-quality code subset of the AM-Thinking-v1-Distilled dataset. The model is designed to produce code with explicit reasoning steps and supports a 32768-token context length.
Model Overview
This model, LumosJiang/Qwen3-8B-Base-SFT-AM-Thinking-v1-Distilled-Code-1800steps, is an 8-billion-parameter language model built on the Qwen/Qwen3-8B-Base architecture. It has undergone Supervised Fine-Tuning (SFT) on a high-quality code subset (samples with verify_score ≥ 0.9) of the AM-Thinking-v1-Distilled dataset.
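As a rough illustration of the selection criterion above, the snippet below filters the distilled dataset down to samples whose verify_score is at least 0.9. The dataset repo id and split name are assumptions inferred from the dataset name; the actual training pipeline may differ.

```python
# Hedged sketch of the verify_score >= 0.9 selection described above.
# The dataset repo id and split name are assumptions, not confirmed
# details of the actual training pipeline.
from datasets import load_dataset

ds = load_dataset("a-m-team/AM-Thinking-v1-Distilled", split="train")  # repo id assumed
code_subset = ds.filter(lambda ex: ex.get("verify_score", 0.0) >= 0.9)
print(f"kept {len(code_subset)} of {len(ds)} samples")
```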
Key Capabilities
- Code Generation: Optimized for generating Python code, as demonstrated by its training data and usage examples.
- Reasoning Protocol: Incorporates a `<think>...reasoning...</think>` protocol, allowing the model to explicitly output its thought process before generating code (see the usage sketch after this list).
- Extended Context: Supports a maximum sequence length of 32768 tokens, beneficial for handling larger codebases or complex problem descriptions.
- Qwen3 Chat Template: Utilizes the standard Qwen3 chat template for interaction.
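The capabilities above map onto standard `transformers` usage: the chat template wraps the prompt, and the reply is expected to open with a `<think>...</think>` block before the code. The sketch below shows one way to run the model and separate the reasoning from the answer; the generation settings are illustrative, not recommended defaults.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LumosJiang/Qwen3-8B-Base-SFT-AM-Thinking-v1-Distilled-Code-1800steps"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "user", "content": "Write a Python function that reverses a singly linked list."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# max_new_tokens is illustrative; reasoning traces can be long.
output_ids = model.generate(input_ids, max_new_tokens=2048)
reply = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)

# Split the <think> reasoning block from the final answer, if present.
if "</think>" in reply:
    reasoning, answer = reply.split("</think>", 1)
    print("Reasoning:", reasoning.replace("<think>", "").strip())
    print("Answer:", answer.strip())
else:
    print(reply)
```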
Training Details
The model was trained for 1800 steps using the TRL SFTTrainer with FSDP FULL_SHARD, the Liger Kernel, FlashAttention-2, and sequence packing, on 32 H20-96G GPUs with a global batch size of 128.
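For readers who want to reproduce a similar run, the sketch below wires the stated pieces together through TRL. Only the step count, the batch-size arithmetic (32 GPUs × 4 per device = 128 global), the sequence length, and packing come from the card; the learning rate, dataset repo id, and exact flag names (which vary across TRL/transformers versions) are assumptions. FSDP FULL_SHARD would typically be configured via `accelerate launch` rather than in this script.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Dataset repo id is assumed; the card only names "AM-Thinking-v1-Distilled".
dataset = load_dataset("a-m-team/AM-Thinking-v1-Distilled", split="train")

config = SFTConfig(
    output_dir="qwen3-8b-sft-am-thinking-code",
    max_steps=1800,                        # stated in the card
    per_device_train_batch_size=4,         # 32 GPUs x 4 = global batch 128
    max_seq_length=32768,                  # full 32768-token context
    packing=True,                          # pack samples into full-length sequences
    bf16=True,
    learning_rate=1e-5,                    # placeholder, not stated in the card
    use_liger_kernel=True,                 # Liger Kernel (transformers >= 4.45 flag)
    model_init_kwargs={"attn_implementation": "flash_attention_2"},
)

trainer = SFTTrainer(
    model="Qwen/Qwen3-8B-Base",            # base model named in the card
    args=config,
    train_dataset=dataset,
)
trainer.train()
```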
Recommended Use Cases
- Code Assistant: Ideal for tasks requiring Python code generation.
- Problem Solving with Reasoning: Suitable for scenarios where seeing the model's reasoning behind a code solution is valuable.
- Educational Tools: Can be used to demonstrate step-by-step problem-solving in coding contexts.