mirukumiruku/oyohen

Hugging Face
Text generation · Concurrency cost: 1 · Model size: 4B · Quant: BF16 · Context length: 32k · Published: Feb 2, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights · Warm

mirukumiruku/oyohen is a LoRA adapter for the 4-billion-parameter Qwen/Qwen3-4B-Instruct-2507, fine-tuned with QLoRA (4-bit, Unsloth). The adapter is trained specifically to improve structured-output accuracy for formats such as JSON, YAML, XML, TOML, and CSV, and to make structured generation more reliable by computing the training loss only on the final assistant output.


mirukumiruku/oyohen: Enhanced Structured Output for Qwen3-4B-Instruct

mirukumiruku/oyohen is a LoRA adapter (4-bit QLoRA, Unsloth) built on the Qwen/Qwen3-4B-Instruct-2507 base model. It is engineered to improve the accuracy of structured outputs such as JSON, YAML, XML, TOML, and CSV.

Key Capabilities

  • Specialized for Structured Data: Fine-tuned to excel at generating accurate and well-formed structured data formats.
  • Efficient Training: Utilizes QLoRA (4-bit) for efficient adaptation, with training focused on the final assistant output while masking intermediate reasoning.
  • Compact Adapter: Distributed as a LoRA adapter that is loaded on top of the separately downloaded base model, keeping the deployment footprint small and flexible.
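Because this is an adapter rather than a full model, it is typically applied on top of the base model at load time. A minimal sketch, assuming `transformers` and `peft` are installed (the function name `load_oyohen` is illustrative, not part of the repository):

```python
def load_oyohen(device_map="auto"):
    """Sketch: load the Qwen3-4B-Instruct base model, then apply the
    mirukumiruku/oyohen LoRA adapter on top of it with PEFT.
    Requires `transformers`, `peft`, and (for device_map) `accelerate`."""
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained(
        "Qwen/Qwen3-4B-Instruct-2507",
        torch_dtype="auto",
        device_map=device_map,
    )
    model = PeftModel.from_pretrained(base, "mirukumiruku/oyohen")
    tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B-Instruct-2507")
    return model, tokenizer
```

For inference-only use, `model.merge_and_unload()` can fold the LoRA weights into the base model so no adapter indirection remains at generation time.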

Good For

  • Developers requiring reliable JSON, YAML, XML, TOML, or CSV generation from a language model.
  • Applications where precise data formatting is critical, such as API interactions, configuration file generation, or data serialization.
  • Extending the capabilities of Qwen3-4B-Instruct-2507 for tasks demanding high structured output fidelity.
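Even with an adapter tuned for structured output, production code should still validate the generated text and re-prompt on failure. A minimal parse-and-retry sketch for JSON (the `generate` callable is a stand-in for any wrapper around the model's generation method):

```python
import json


def generate_json(prompt, generate, max_retries=2):
    """Ask the model for JSON and re-prompt on parse failure.

    `generate` is any callable mapping a prompt string to model output
    text; here it is stubbed for illustration."""
    last_err = None
    for _ in range(max_retries + 1):
        text = generate(prompt)
        try:
            return json.loads(text)
        except json.JSONDecodeError as err:
            last_err = err
            # Feed the parse error back so the model can self-correct.
            prompt = f"{prompt}\nReturn only valid JSON. Parse error: {err}"
    raise ValueError(f"no valid JSON after {max_retries + 1} attempts: {last_err}")


# Usage with a stub in place of the real model:
result = generate_json("List two colors as JSON.", lambda p: '["red", "blue"]')
# -> ["red", "blue"]
```

The same pattern extends to YAML, XML, TOML, or CSV by swapping in the corresponding parser for the validation step.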

This adapter was trained for 1 epoch with a learning rate of 1e-06 and a maximum sequence length of 768. The training data, u-10bei/structured_data_with_cot_dataset_512_v2, is distributed under the MIT License; users must comply with that license alongside the base model's terms.
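The training described above focuses the loss on the final assistant output while masking the prompt and intermediate reasoning. The standard mechanism for this is setting masked positions in the label sequence to -100, the ignore index of PyTorch's cross-entropy loss. A toy sketch of that labeling step (token ids and the `assistant_start` boundary are illustrative):

```python
IGNORE_INDEX = -100  # positions with this label are skipped by CrossEntropyLoss


def mask_labels(token_ids, assistant_start):
    """Copy token_ids into a label sequence, ignoring every position
    before the final assistant response (prompt + intermediate reasoning)."""
    return [IGNORE_INDEX if i < assistant_start else tok
            for i, tok in enumerate(token_ids)]


# Toy ids: four prompt/reasoning tokens, then the assistant's answer.
tokens = [101, 5, 6, 7, 200, 201, 202]
labels = mask_labels(tokens, assistant_start=4)
# -> [-100, -100, -100, -100, 200, 201, 202]
```

Only the last three positions contribute gradient, so the adapter is optimized purely on producing the structured answer itself.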