koutch/short_paper_llama_0.json_train_dpo_v2_dev

Text Generation

  • Concurrency Cost: 1
  • Model Size: 8B
  • Quantization: FP8
  • Context Length: 32k
  • Published: Jan 6, 2026
  • License: apache-2.0
  • Architecture: Transformer (open weights)

The koutch/short_paper_llama_0.json_train_dpo_v2_dev model is an 8-billion-parameter instruction-tuned causal language model based on Llama 3.1, developed by koutch. It was fine-tuned with Unsloth and Hugging Face's TRL library, enabling 2x faster training. The model targets general instruction-following tasks, leveraging its Llama 3.1 base for broad applicability.


Model Overview

koutch/short_paper_llama_0.json_train_dpo_v2_dev is an 8-billion-parameter language model fine-tuned by koutch. It is based on unsloth/meta-llama-3.1-8b-instruct-bnb-4bit, a 4-bit quantized build of Meta's Llama 3.1 8B Instruct, which grounds it in the Llama 3.1 architecture.
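
If the checkpoint is published on the Hugging Face Hub under the repo id above, it can be loaded through the standard Transformers API. A minimal sketch, assuming merged full weights (not just LoRA adapters) are available and that transformers and accelerate are installed:

```python
# Minimal loading sketch via Hugging Face Transformers. Assumes the repo id
# on this page resolves to a full causal-LM checkpoint on the Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "koutch/short_paper_llama_0.json_train_dpo_v2_dev"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # shard across available GPUs/CPU (requires accelerate)
    torch_dtype="auto",  # keep the dtype stored in the checkpoint
)
```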

Key Characteristics

  • Base Model: Fine-tuned from Meta Llama 3.1 8B Instruct.
  • Training Efficiency: Fine-tuned with Unsloth and Hugging Face's TRL library for 2x faster training, suggesting an optimized fine-tuning process (a sketch of this kind of recipe follows this list).
  • License: Distributed under the Apache-2.0 license, allowing for broad use and modification.
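
The "dpo" in the model name, together with the Unsloth base checkpoint, suggests a preference-tuning run with TRL's DPOTrainer. The sketch below illustrates that kind of recipe only; the dataset file, LoRA settings, and hyperparameters are assumptions, not the author's actual configuration.

```python
# Hedged sketch of an Unsloth + TRL DPO fine-tune over the stated base model.
# Dataset path and hyperparameters are illustrative, not the author's values.
from unsloth import FastLanguageModel
from trl import DPOConfig, DPOTrainer
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    "unsloth/meta-llama-3.1-8b-instruct-bnb-4bit",  # stated base checkpoint
    max_seq_length=2048,
    load_in_4bit=True,
)
# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# DPO expects (prompt, chosen, rejected) preference triples; the file name
# here is a hypothetical guess based on the model's repo id.
dataset = load_dataset("json", data_files="short_paper_llama_0.json")["train"]

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="dpo_v2_dev", per_device_train_batch_size=2),
    train_dataset=dataset,
    processing_class=tokenizer,  # named `tokenizer=` in older TRL releases
)
trainer.train()
```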

Intended Use Cases

This model is suitable for a variety of instruction-following tasks, benefiting from its Llama 3.1 lineage and efficient fine-tuning. Its 8 billion parameters make it a capable choice for applications requiring a balance of performance and computational efficiency.
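
Continuing from the loading sketch above, a typical instruction-following call goes through the chat template. This assumes the model inherits the standard Llama 3.1 chat template from its base:

```python
# Illustrative chat-style generation, reusing `model` and `tokenizer` from
# the loading sketch above. Assumes the Llama 3.1 chat template is present.
messages = [
    {"role": "user", "content": "Explain direct preference optimization in two sentences."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```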