Alelcv27/Llama3.1-8B-Code
Text Generation
Concurrency Cost: 1
Model Size: 8B
Quant: FP8
Ctx Length: 32k
Published: Feb 2, 2026
License: apache-2.0
Architecture: Transformer
Open Weights · Cold
Alelcv27/Llama3.1-8B-Code is an 8-billion-parameter Llama 3.1 instruction-tuned model developed by Alelcv27. It was finetuned using Unsloth together with Hugging Face's TRL library, enabling roughly 2x faster training. The model is optimized for general instruction-following tasks, leveraging the Llama 3.1 architecture for efficient inference.
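A minimal usage sketch, assuming the weights are published on the Hugging Face Hub under the repo id `Alelcv27/Llama3.1-8B-Code` and that the model follows the standard Llama 3.1 instruct chat template (the `build_llama31_prompt` helper and `generate` wrapper below are illustrative names, not part of the model card):

```python
def build_llama31_prompt(user_message: str) -> str:
    """Assemble a single-turn prompt in the Llama 3.1 instruct format.

    Token layout assumed from the standard Llama 3.1 chat template;
    verify against the repo's tokenizer_config.json before relying on it.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )


def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Run the model with a transformers text-generation pipeline.

    Downloads the 8B checkpoint on first call; requires
    `pip install transformers torch` and enough GPU memory
    for the FP8/BF16 weights (~16 GB).
    """
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="Alelcv27/Llama3.1-8B-Code",
        device_map="auto",
    )
    out = generator(prompt, max_new_tokens=max_new_tokens, do_sample=False)
    return out[0]["generated_text"]
```

Example call: `generate(build_llama31_prompt("Write a Python function that reverses a string."))`. For chat-style use, `transformers` can also apply the tokenizer's built-in chat template via `tokenizer.apply_chat_template`, which avoids hand-building the special tokens.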