Alelcv27/Llama3.1-8B-Code

Text Generation · Model Size: 8B · Quantization: FP8 · Context Length: 32k · Concurrency Cost: 1 · Published: Feb 2, 2026 · License: apache-2.0 · Architecture: Transformer (open weights)

Alelcv27/Llama3.1-8B-Code is an 8 billion parameter Llama 3.1 instruction-tuned model developed by Alelcv27. It was finetuned using Unsloth and Hugging Face's TRL library, enabling 2x faster training. The model is optimized for general instruction-following tasks, leveraging the Llama 3.1 architecture for efficient performance.


Alelcv27/Llama3.1-8B-Code Overview

Alelcv27/Llama3.1-8B-Code is an 8 billion parameter language model, finetuned by Alelcv27. It is based on the Llama 3.1 instruction-tuned architecture, specifically building upon unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit.

Key Characteristics

  • Architecture: Llama 3.1, a decoder-only causal language model.
  • Parameter Count: 8 billion parameters, offering a balance of performance and efficiency.
  • Training Efficiency: Finetuned using Unsloth and Hugging Face's TRL library, which facilitated a 2x faster training process.
  • Context Length: Supports a context window of 32768 tokens, suitable for handling longer inputs and complex tasks.
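As a Llama 3.1 instruction-tuned model, it expects the standard Llama 3.1 chat format. The sketch below illustrates how that prompt is laid out; in practice you should call the tokenizer's `apply_chat_template`, which applies the template shipped with the checkpoint, rather than assembling the string by hand:

```python
def build_llama31_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt in the Llama 3.1 instruct format.

    Illustration only: the tokenizer's apply_chat_template() is the
    authoritative way to format prompts for this model.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        # The trailing assistant header cues the model to generate its reply.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama31_prompt(
    "You are a helpful coding assistant.",
    "Write a Python function that reverses a string.",
)
```

The model generates until it emits an `<|eot_id|>` token, so generation code should treat that token as the end-of-turn marker.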

Intended Use Cases

This model is suited to general instruction-following applications, combining its Llama 3.1 foundation with efficient finetuning. Its 32k context window also makes it practical for longer inputs such as extended conversations or large documents.
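For such applications, the model can be loaded with the Hugging Face `transformers` library like any Llama 3.1 checkpoint. This is a minimal sketch, assuming `transformers` and `torch` are installed and the checkpoint is reachable under the repository id shown; adjust `device_map` and generation settings to your hardware:

```python
MODEL_ID = "Alelcv27/Llama3.1-8B-Code"

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Run one chat-style generation against the model.

    Imports are kept inside the function so the module can be inspected
    without transformers installed; the first call downloads the weights.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    # Let the tokenizer apply the Llama 3.1 chat template for us.
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens and decode only the newly generated reply.
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Write a Python function that checks if a number is prime."))
```

Because the published quantization is FP8 with a 32k context, memory use is modest for an 8B model, but a GPU is still recommended for interactive latency.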