The toshiohanawa/qwen3-4b-structured-output-lora-base-dpo model is a 4-billion-parameter Qwen3-Instruct variant, fine-tuned by toshiohanawa with Direct Preference Optimization (DPO) via Unsloth. It is optimized to strengthen Chain-of-Thought reasoning and improve the quality of structured responses, making it well suited to applications that require precise, aligned, and well-structured outputs.
## Model Overview
This model, `toshiohanawa/qwen3-4b-structured-output-lora-base-dpo`, is a 4-billion-parameter language model based on the Qwen3-Instruct architecture. It was fine-tuned by toshiohanawa using Direct Preference Optimization (DPO) with the Unsloth library. The repository provides the fully merged 16-bit weights, so no separate adapter loading is required.
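Because the repository ships fully merged 16-bit weights, the model can be loaded directly with Hugging Face `transformers` and no PEFT/adapter step. A minimal sketch (the dtype and generation settings are illustrative assumptions, not values from the model card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "toshiohanawa/qwen3-4b-structured-output-lora-base-dpo"

# Merged weights: load like any ordinary causal LM, no adapter attach step.
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 for the published 16-bit weights
    device_map="auto",
)

messages = [{"role": "user", "content": "List three prime numbers as a JSON array."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The chat-template call uses the tokenizer's built-in Qwen3 template, so prompts are formatted the same way they were during fine-tuning.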
## Key Capabilities
- Enhanced Reasoning: Optimized to improve Chain-of-Thought reasoning processes.
- Structured Output Quality: Specifically fine-tuned to produce higher quality structured responses.
- DPO Alignment: Utilizes Direct Preference Optimization to align model responses with preferred outputs based on a dedicated preference dataset.
## Training Details
The model was trained for 1 epoch on the u-10bei/dpo-dataset-qwen-cot dataset, with a learning rate of 2e-07 and a beta value of 0.05. The maximum sequence length used during training was 2048 tokens. The LoRA configuration (r=8, alpha=16) was merged into the base model.
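The hyperparameters above can be summarized in a single training-config sketch. This is plain Python for illustration; the field names are assumptions modeled on common TRL/Unsloth argument names, not the author's actual training script:

```python
# Hypothetical recap of the DPO run described above; field names are
# illustrative, values are taken from the model card.
dpo_config = {
    "dataset": "u-10bei/dpo-dataset-qwen-cot",
    "num_train_epochs": 1,
    "learning_rate": 2e-07,
    "beta": 0.05,             # DPO preference-strength coefficient
    "max_seq_length": 2048,
    "lora_r": 8,
    "lora_alpha": 16,         # adapters were later merged into the base weights
}

# Common LoRA heuristic holds here: alpha is twice the rank.
assert dpo_config["lora_alpha"] == 2 * dpo_config["lora_r"]
print(dpo_config)
```

The low learning rate (2e-07) and small beta (0.05) are typical for DPO, where large updates or a strong preference coefficient can degrade the base model's general abilities.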
## Good For
- Applications requiring models that excel at generating structured data.
- Use cases where improved reasoning and Chain-of-Thought capabilities are critical.
- Scenarios demanding highly aligned and preference-tuned language model outputs.
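For the structured-data use case above, a typical consumption pattern is to request JSON and validate the reply before using it. A stdlib-only sketch; `model_reply` is a hypothetical response string standing in for real `model.generate()` output:

```python
import json

FENCE = "`" * 3  # literal markdown code-fence marker


def parse_structured_reply(reply: str) -> dict:
    """Extract and parse a JSON object from a model reply,
    stripping a markdown code fence if the model added one."""
    text = reply.strip()
    if text.startswith(FENCE):
        # Drop the opening fence line (with optional language tag) and the closing fence.
        text = text.split("\n", 1)[1].rsplit(FENCE, 1)[0]
    return json.loads(text)


# Hypothetical reply; a real one would come from model.generate().
model_reply = FENCE + 'json\n{"answer": 42, "reasoning": "6 * 7 = 42"}\n' + FENCE

data = parse_structured_reply(model_reply)
print(data["answer"])  # -> 42
```

Wrapping `json.loads` this way lets the caller fail fast (a `JSONDecodeError`) when the model drifts from the requested format, rather than propagating malformed text downstream.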