gjyotin305/Meta-Llama-3.1-8B-Instruct_old_sft_alpaca_009
Text Generation | Concurrency Cost: 1 | Model Size: 8B | Quant: FP8 | Ctx Length: 32k | Published: Jan 8, 2026 | License: apache-2.0 | Architecture: Transformer | Open Weights | Cold
gjyotin305/Meta-Llama-3.1-8B-Instruct_old_sft_alpaca_009 is an 8-billion-parameter instruction-tuned causal language model published by gjyotin305. It is a fine-tuned variant of Meta-Llama-3.1-8B-Instruct, trained with Unsloth and Hugging Face's TRL library for faster fine-tuning. Building on the Llama 3.1 base, it is intended for general instruction-following tasks.
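A minimal usage sketch is shown below, assuming the checkpoint is hosted on the Hugging Face Hub under the repo id above and retains the standard Llama 3.1 chat template; the prompt and generation settings are illustrative only.

```python
# Illustrative inference sketch with the Hugging Face transformers library.
# Assumes the repo id below is accessible and ships a Llama 3.1 chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gjyotin305/Meta-Llama-3.1-8B-Instruct_old_sft_alpaca_009"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 on supported GPUs; fall back to fp16/fp32 otherwise
    device_map="auto",
)

# Format a single user turn with the model's chat template.
messages = [{"role": "user", "content": "Summarize the benefits of instruction tuning in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```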