stsirtsis/llama-3.1-8b-TL-SynthDolly-1A
Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quantization: FP8 · Context Length: 32k · Published: Mar 29, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

stsirtsis/llama-3.1-8b-TL-SynthDolly-1A is an 8-billion-parameter Llama 3.1 instruction-tuned model, fine-tuned by stsirtsis from unsloth/llama-3.1-8b-Instruct. Training was accelerated with Unsloth and Hugging Face's TRL library, yielding efficient performance across a range of language generation tasks. The model retains a 32,768-token context length, making it suitable for applications that require processing long contexts.
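As a minimal usage sketch, the model can presumably be loaded with Hugging Face Transformers like any Llama 3.1 instruction-tuned checkpoint. The repo id is taken from the card; the chat-template usage, generation parameters, and the `run_demo` helper are assumptions for illustration, not a confirmed API of this listing.

```python
"""Hypothetical inference sketch for stsirtsis/llama-3.1-8b-TL-SynthDolly-1A.

Assumes the checkpoint is hosted on the Hugging Face Hub under the id shown
on the card and follows the standard Llama 3.1 chat template.
"""
from typing import Dict, List

MODEL_ID = "stsirtsis/llama-3.1-8b-TL-SynthDolly-1A"  # id from the card
MAX_CTX = 32768  # 32k context length stated on the card


def build_chat(prompt: str, system: str = "You are a helpful assistant.") -> List[Dict[str, str]]:
    """Build a Llama-3.1-style message list for tokenizer.apply_chat_template."""
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": prompt},
    ]


def run_demo(prompt: str) -> str:
    """Load the model and generate a reply (requires GPU/RAM and network access)."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    # Render the chat messages into input ids with the model's chat template.
    input_ids = tokenizer.apply_chat_template(
        build_chat(prompt),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)

    out = model.generate(input_ids, max_new_tokens=256)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True)
```

`run_demo` is deliberately not invoked at import time, since loading an 8B checkpoint needs substantial memory; call it from a script or notebook when the weights are available.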
