sookinoby/llama-3.1-fine-tuned
Task: text generation
Concurrency cost: 1
Model size: 8B
Quantization: FP8
Context length: 32k
Published: Oct 17, 2024
License: apache-2.0
Architecture: Transformer (open weights, cold start)

The sookinoby/llama-3.1-fine-tuned model is an 8-billion-parameter Llama 3.1 variant, fine-tuned by sookinoby. It was trained using Unsloth together with Hugging Face's TRL library, which the authors report enables roughly 2x faster training. The model targets general language tasks, building on the Llama 3.1 architecture for efficient inference.
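As a minimal sketch of how a Hugging Face-hosted causal LM like this one is typically loaded and queried with the `transformers` library (assuming the repository is available on the Hub and the usual gated-weights Llama license terms are accepted; the `generate` helper below is illustrative, not part of the model card):

```python
MODEL_ID = "sookinoby/llama-3.1-fine-tuned"

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Load the model lazily and return the generated continuation of `prompt`.

    Imports happen inside the function so this file can be read without
    `transformers` installed; the first call downloads the weights.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)

    # Decode only the newly generated tokens, skipping the echoed prompt.
    new_tokens = output[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

With the listed 32k context length, prompts up to roughly that many tokens can be passed; anything longer must be truncated or chunked by the caller.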
