stsirtsis/llama-3.1-8b-EL-SynthDolly-1A
  • Task: Text generation
  • Concurrency cost: 1
  • Model size: 8B
  • Quant: FP8
  • Context length: 32k
  • Published: Mar 29, 2026
  • License: apache-2.0
  • Architecture: Transformer
  • Weights: Open

stsirtsis/llama-3.1-8b-EL-SynthDolly-1A is an 8-billion-parameter, instruction-tuned causal language model based on Llama 3.1, developed by stsirtsis. It was fine-tuned from unsloth/llama-3.1-8b-Instruct using Unsloth together with Hugging Face's TRL library, which speeds up training. The model targets general language-generation tasks and inherits the strengths of its Llama 3.1 base.


Model Overview

The stsirtsis/llama-3.1-8b-EL-SynthDolly-1A is an 8-billion-parameter instruction-tuned language model developed by stsirtsis. It is based on the Llama 3.1 architecture and was fine-tuned from the unsloth/llama-3.1-8b-Instruct model.

Key Characteristics

  • Base Model: Llama 3.1-8B-Instruct, providing a strong foundation for instruction following and general language understanding.
  • Training Efficiency: The model was fine-tuned with Unsloth and Hugging Face's TRL library, which speeds up the training process.
  • License: Distributed under the Apache-2.0 license, allowing for broad use and distribution.
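The card names Unsloth and TRL as the training stack but does not publish the training script. The sketch below shows what such a setup typically looks like; the record layout, LoRA rank, and all hyperparameters are assumptions (the "SynthDolly" suffix merely suggests a Dolly-style instruction/context/response dataset), not the author's actual configuration.

```python
# Hypothetical fine-tuning sketch, NOT the author's published training script.
# The Dolly-style record layout (instruction / context / response) is inferred
# only from the "SynthDolly" name in the model id.

def format_dolly_example(instruction: str, context: str, response: str) -> str:
    """Render one Dolly-style record as a single supervised training text."""
    ctx = f"\n\nInput:\n{context}" if context else ""
    return f"Instruction:\n{instruction}{ctx}\n\nResponse:\n{response}"

def run_finetune():
    """Requires a CUDA GPU plus `pip install unsloth trl`; never called here."""
    from unsloth import FastLanguageModel
    from trl import SFTTrainer
    from transformers import TrainingArguments

    # Load the instruct base the card names, in 4-bit to fit a single GPU.
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="unsloth/llama-3.1-8b-Instruct",
        max_seq_length=2048,
        load_in_4bit=True,
    )
    # Attach LoRA adapters; r=16 is a common default, not a published value.
    model = FastLanguageModel.get_peft_model(
        model, r=16,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    )
    trainer = SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=...,  # the SynthDolly data is not public; placeholder
        args=TrainingArguments(output_dir="out", per_device_train_batch_size=2),
    )
    trainer.train()
```

Only `format_dolly_example` runs without a GPU; `run_finetune` is deferred so the heavy Unsloth/TRL imports happen only when training is actually launched.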

Use Cases

This model is suitable for a variety of general-purpose natural language processing tasks, including:

  • Instruction-following and conversational AI.
  • Text generation and completion.
  • Summarization and question answering, leveraging its instruction-tuned capabilities.
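As a usage illustration for the tasks above, the helper below renders a conversation into the standard Llama 3.1 chat-prompt layout, which this fine-tune should inherit from its instruct base; the generation call itself is deferred to a function because it needs a GPU and a model download. The sampling settings are illustrative defaults, not values recommended by the author.

```python
# Minimal usage sketch. build_chat_prompt shows the raw Llama 3.1 chat
# template; generate() sketches inference via the transformers pipeline.

def build_chat_prompt(messages: list[dict]) -> str:
    """Render [{'role': ..., 'content': ...}] in Llama 3.1 header format."""
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # Leave an open assistant turn for the model to complete.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

def generate(messages):
    """Requires a GPU and `pip install transformers accelerate`."""
    from transformers import pipeline
    pipe = pipeline(
        "text-generation",
        model="stsirtsis/llama-3.1-8b-EL-SynthDolly-1A",
        device_map="auto",
    )
    # Chat pipelines also accept the message list directly; the explicit
    # template above just shows what the model actually sees.
    return pipe(messages, max_new_tokens=256)[0]["generated_text"]
```

In practice `tokenizer.apply_chat_template` is the safer way to build the prompt, since it reads the template shipped with the model rather than hard-coding it.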