stsirtsis/llama-3.1-8b-ZH-SynthDolly-1A
Text Generation · Concurrency cost: 1 · Model size: 8B · Quant: FP8 · Context length: 32k · Published: Mar 28, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

The stsirtsis/llama-3.1-8b-ZH-SynthDolly-1A is an 8 billion parameter Llama 3.1 instruction-tuned model developed by stsirtsis, fine-tuned from unsloth/llama-3.1-8b-Instruct. It was trained with Unsloth and Hugging Face's TRL library, enabling up to 2x faster training. The model targets general language understanding and generation tasks, building on the Llama 3.1 architecture.


Model Overview

The stsirtsis/llama-3.1-8b-ZH-SynthDolly-1A is an 8 billion parameter instruction-tuned language model developed by stsirtsis. It is built upon the Llama 3.1 architecture, specifically fine-tuned from the unsloth/llama-3.1-8b-Instruct base model.

Key Characteristics

  • Architecture: Llama 3.1, 8 billion parameters.
  • Training Efficiency: Fine-tuned using Unsloth and Hugging Face's TRL library, which facilitated a 2x faster training process.
  • Context Length: Supports a context length of 32768 tokens.
  • License: Distributed under the Apache-2.0 license.
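Because this is a Llama 3.1 instruct model, prompts follow the Llama 3.1 chat layout with header and end-of-turn tokens. The sketch below spells that layout out by hand for clarity; in practice `tokenizer.apply_chat_template` builds it for you, and the exact template is an assumption inherited from the Llama 3.1 family rather than something this model card states.

```python
# Minimal sketch of the Llama 3.1 instruct prompt layout (assumed from the
# base model family; normally produced by tokenizer.apply_chat_template).
def format_llama31_prompt(system: str, user: str) -> str:
    """Build a single-turn Llama 3.1 chat prompt string."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n" + system + "<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n" + user + "<|eot_id|>"
        # Trailing assistant header cues the model to generate its reply.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = format_llama31_prompt("You are a helpful assistant.", "Hello!")
print(prompt)
```

The trailing assistant header is what makes the model continue with its own turn rather than echoing the conversation.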

Use Cases

This model is suitable for a variety of general-purpose natural language processing tasks, particularly those benefiting from instruction-following capabilities. Its Llama 3.1 foundation provides strong base performance, and at 8B parameters with FP8 quantization it remains practical to deploy on a single GPU. Developers can leverage this model for applications requiring robust language understanding and generation.
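A typical way to try the model is through the Hugging Face `transformers` text-generation pipeline. This is a hedged sketch rather than an official usage example from the model card: the heavy model load is guarded behind an environment variable so the snippet can be read and imported without downloading the 8B weights.

```python
import os

def build_messages(user_prompt: str,
                   system_prompt: str = "You are a helpful assistant."):
    """Assemble the chat-style message list that transformers pipelines accept."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

# Guarded so reading/importing this sketch never triggers the 8B download.
if os.environ.get("RUN_INFERENCE"):
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="stsirtsis/llama-3.1-8b-ZH-SynthDolly-1A",
        torch_dtype="auto",   # pick a dtype the hardware supports
        device_map="auto",    # spread layers across available devices
    )
    out = generator(
        build_messages("Summarize what instruction tuning does, in one sentence."),
        max_new_tokens=128,
    )
    print(out[0]["generated_text"][-1]["content"])
```

Passing a message list (rather than a raw string) lets the pipeline apply the model's own chat template automatically.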