Alelcv27/Llama3.1-8B-Math-v3

Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 8k · Published: Mar 31, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

Alelcv27/Llama3.1-8B-Math-v3 is an 8-billion-parameter Llama 3.1 instruction-tuned model developed by Alelcv27, fine-tuned from unsloth/meta-llama-3.1-8b-instruct-bnb-4bit. The model was trained with Unsloth and Hugging Face's TRL library, with a focus on mathematical and reasoning tasks. It supports an 8192-token context length and is optimized for efficient performance in specialized applications.

Model Overview

Alelcv27/Llama3.1-8B-Math-v3 is an 8-billion-parameter language model fine-tuned by Alelcv27 from the unsloth/meta-llama-3.1-8b-instruct-bnb-4bit base model. It was trained with the Unsloth library, which accelerates fine-tuning, in combination with Hugging Face's TRL library.
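The card does not publish the exact training recipe, so the following is only a minimal sketch of a typical Unsloth + TRL supervised fine-tuning run. The dataset file, LoRA settings, and hyperparameters are illustrative assumptions, and exact argument names vary across TRL versions.

```python
# Minimal Unsloth + TRL SFT sketch (illustrative; not the author's actual recipe).
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the 4-bit base model named on the card, with the 8192-token context window.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/meta-llama-3.1-8b-instruct-bnb-4bit",
    max_seq_length=8192,
    load_in_4bit=True,
)

# Attach LoRA adapters; rank and alpha here are placeholder values.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical math-instruction dataset with a preformatted "text" column.
dataset = load_dataset("json", data_files="math_instructions.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=8192,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="llama31-8b-math-v3",
    ),
)
trainer.train()
```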

Key Characteristics

  • Base Model: Llama 3.1 architecture, fine-tuned from unsloth/meta-llama-3.1-8b-instruct-bnb-4bit, a bitsandbytes 4-bit build of meta-llama-3.1-8b-instruct.
  • Parameter Count: 8 billion parameters.
  • Training Efficiency: Fine-tuned with Unsloth, enabling faster training times.
  • Context Length: Supports an 8192 token context window.
  • License: Released under the Apache 2.0 license.

Intended Use Cases

This model is designed for applications that need a capable 8B instruction-tuned model, particularly mathematical and reasoning tasks where efficient fine-tuning and deployment matter. Its Llama 3.1 foundation provides strong general language understanding and generation, further shaped by instruction tuning. Developers seeking a performant, efficiently trained model for math-oriented workloads, or for NLP tasks that benefit from Llama 3.1's instruction-following abilities, may find it suitable. A minimal inference sketch follows.
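Assuming the repository exposes standard Hugging Face weights and the Llama 3.1 chat template, loading and prompting the model might look like this; the prompt and generation settings are illustrative.

```python
# Minimal inference sketch using the standard transformers API (settings are illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Alelcv27/Llama3.1-8B-Math-v3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Llama 3.1 instruct models ship a chat template; apply it to a math prompt.
messages = [
    {"role": "user", "content": "Solve for x: 3x + 7 = 22. Show your steps."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```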