stsirtsis/llama-3.1-8b-DA-SynthDolly-1A
Text generation · Concurrency cost: 1 · Model size: 8B · Quant: FP8 · Context length: 32k · Published: Mar 28, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

stsirtsis/llama-3.1-8b-DA-SynthDolly-1A is an 8-billion-parameter Llama 3.1 instruction-tuned model developed by stsirtsis, fine-tuned from unsloth/llama-3.1-8b-Instruct. It was trained with Unsloth and Hugging Face's TRL library, a combination that Unsloth reports enables roughly 2x faster fine-tuning. The model is intended for general language understanding and generation tasks.


stsirtsis/llama-3.1-8b-DA-SynthDolly-1A Overview

This model is an 8-billion-parameter Llama 3.1 instruction-tuned language model, developed by stsirtsis. It was fine-tuned from the unsloth/llama-3.1-8b-Instruct base model using the Unsloth library in conjunction with Hugging Face's TRL library.

Key Characteristics

  • Base Model: Fine-tuned from Llama 3.1 8B Instruct.
  • Efficient Training: Utilizes Unsloth for a reported 2x faster fine-tuning process.
  • Context Length: Supports a context length of 32768 tokens.
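A minimal, hypothetical usage sketch with the Hugging Face transformers library (the model card does not ship an official example). Only the model id and the 32k context length come from the card; the dtype, device placement, and prompts are illustrative, and the chat-message format follows the standard Llama 3.1 instruct convention:

```python
# Hypothetical usage sketch for stsirtsis/llama-3.1-8b-DA-SynthDolly-1A.
# Model id and 32k context are from the card; everything else is illustrative.
MODEL_ID = "stsirtsis/llama-3.1-8b-DA-SynthDolly-1A"
MAX_CONTEXT = 32768  # context length reported on the card


def build_chat(user_prompt: str) -> list[dict]:
    # Standard chat-message list consumed by tokenizer.apply_chat_template.
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": user_prompt},
    ]


RUN_DEMO = False  # flip to True on a machine with a suitable GPU

if RUN_DEMO:
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
    )
    inputs = tokenizer.apply_chat_template(
        build_chat("Explain instruction tuning in two sentences."),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=128)
    # Decode only the newly generated tokens, not the prompt.
    print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The `RUN_DEMO` guard keeps the heavy download/load step out of the way when the file is imported or inspected; an 8B model needs roughly 16 GB of accelerator memory in bf16.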

Good For

  • Applications requiring a Llama 3.1 8B model with efficient fine-tuning origins.
  • General instruction-following tasks and language generation.
  • Developers interested in models trained with Unsloth's optimization techniques.
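The Unsloth + TRL training recipe the card mentions can be sketched roughly as follows. This is not the author's actual configuration: the dataset, LoRA rank, and hyperparameters are placeholders chosen for illustration, and only the base-model id comes from the card:

```python
# Hypothetical fine-tuning sketch following the Unsloth + TRL workflow the
# card describes. Dataset choice and all hyperparameters are placeholders.
MAX_SEQ_LENGTH = 32768

TRAIN_CONFIG = {
    "base_model": "unsloth/llama-3.1-8b-Instruct",  # base model from the card
    "max_seq_length": MAX_SEQ_LENGTH,
    "lora_rank": 16,        # illustrative LoRA rank
    "learning_rate": 2e-4,  # illustrative
}

RUN_TRAINING = False  # flip to True on a machine with a suitable GPU

if RUN_TRAINING:
    from unsloth import FastLanguageModel
    from trl import SFTTrainer
    from transformers import TrainingArguments
    from datasets import load_dataset

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name=TRAIN_CONFIG["base_model"],
        max_seq_length=TRAIN_CONFIG["max_seq_length"],
        load_in_4bit=True,  # Unsloth's memory-saving default for 8B models
    )
    model = FastLanguageModel.get_peft_model(model, r=TRAIN_CONFIG["lora_rank"])

    # Stand-in instruction dataset; the "SynthDolly" data is not published here.
    dataset = load_dataset("databricks/databricks-dolly-15k", split="train")

    def to_text(example):
        # Flatten instruction/response pairs into a single training string.
        return (
            f"### Instruction:\n{example['instruction']}\n\n"
            f"### Response:\n{example['response']}"
        )

    trainer = SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=dataset,
        formatting_func=to_text,
        args=TrainingArguments(
            output_dir="outputs",
            per_device_train_batch_size=2,
            learning_rate=TRAIN_CONFIG["learning_rate"],
            max_steps=60,
        ),
    )
    trainer.train()
```

The reported 2x speedup comes from Unsloth's fused kernels and memory optimizations inside `FastLanguageModel`, not from anything in this particular configuration.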