84basi/lora-10-1
Text generation | Concurrency cost: 1 | Model size: 4B | Quant: BF16 | Context length: 32k | Published: Mar 1, 2026 | License: apache-2.0 | Architecture: Transformer | Open weights | Warm
The 84basi/lora-10-1 model is a 4-billion-parameter Qwen3-based language model, fine-tuned from unsloth/Qwen3-4B-Instruct-2507 with BF16 full fine-tuning and NEFTune. It is optimized for generating accurate structured output in formats such as JSON, YAML, XML, TOML, and CSV. Chain-of-Thought content was removed from its training data, so the model emits the requested format directly rather than interleaving reasoning text, which makes it well suited to tasks that demand precise data formatting.
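Since the model targets structured formats like JSON, downstream code typically parses and validates the completion text. A minimal sketch of that step (the helper name and the sample completion are illustrative assumptions, not actual model output or an API of this model):

```python
import json

def extract_json(text: str):
    """Parse the first JSON object out of a model completion.

    Structured-output models sometimes wrap JSON in Markdown code
    fences, so strip those before handing the text to json.loads.
    """
    cleaned = text.strip()
    if cleaned.startswith("```"):
        # Drop the opening fence line (possibly "```json") and the
        # trailing closing fence.
        cleaned = cleaned.split("\n", 1)[1]
        cleaned = cleaned.rsplit("```", 1)[0]
    return json.loads(cleaned)

# Illustrative completion string, not a real output of 84basi/lora-10-1.
completion = '```json\n{"name": "Ada", "age": 36}\n```'
record = extract_json(completion)
print(record["name"])  # → Ada
```

Parsing failures (`json.JSONDecodeError`) are the natural signal for retrying a request when the model's output does not conform to the requested format.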