Alelcv27/Llama3.1-8B-Math-v3
Text Generation | Concurrency Cost: 1 | Model Size: 8B | Quant: FP8 | Ctx Length: 8k | Published: Mar 31, 2026 | License: apache-2.0 | Architecture: Transformer | Open Weights | Cold
Alelcv27/Llama3.1-8B-Math-v3 is an 8-billion-parameter Llama 3.1 instruction-tuned model developed by Alelcv27, fine-tuned from unsloth/meta-llama-3.1-8b-instruct-bnb-4bit. The model was trained with Unsloth and Hugging Face's TRL library, with a focus on mathematical and reasoning tasks. It supports an 8192-token context length and is optimized for efficient performance in specialized applications.
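Since this is a Llama 3.1 instruct fine-tune, prompts should follow the standard Llama 3.1 chat template. The sketch below builds that prompt format by hand and shows, in comments, how loading with `transformers` would typically look; the header tokens are the standard Llama 3.1 ones, but verify them against the repo's tokenizer configuration before relying on this.

```python
def format_llama31_chat(messages):
    """Render a list of {role, content} dicts into the Llama 3.1 chat format."""
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    # An open assistant header cues the model to generate its reply next.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)


prompt = format_llama31_chat(
    [{"role": "user", "content": "Solve: 12 * 7 + 5"}]
)

# With transformers installed and sufficient VRAM, generation would look
# roughly like this (untested sketch):
#
#   from transformers import AutoModelForCausalLM, AutoTokenizer
#   tok = AutoTokenizer.from_pretrained("Alelcv27/Llama3.1-8B-Math-v3")
#   model = AutoModelForCausalLM.from_pretrained("Alelcv27/Llama3.1-8B-Math-v3")
#   out = model.generate(**tok(prompt, return_tensors="pt"), max_new_tokens=256)
```

In practice `tokenizer.apply_chat_template` handles this formatting automatically; the manual version above just makes the structure explicit.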