MMR408/ivrius-llama-juridico-v1-merged
MMR408/ivrius-llama-juridico-v1-merged is an 8-billion-parameter Llama 3.1 model developed by MMR408 and fine-tuned from unsloth/llama-3.1-8b-bnb-4bit. It was trained with Unsloth and Hugging Face's TRL library, which speeds up fine-tuning, and is intended for general language tasks, leveraging the Llama 3.1 architecture for robust performance.
Model Overview
MMR408/ivrius-llama-juridico-v1-merged is an 8-billion-parameter language model fine-tuned by MMR408. It is based on the Llama 3.1 architecture and was fine-tuned from the unsloth/llama-3.1-8b-bnb-4bit base model.
Key Characteristics
- Architecture: Llama 3.1
- Parameter Count: 8 billion parameters
- Context Length: 8192 tokens
- Training Method: Fine-tuned using Unsloth and Hugging Face's TRL library, enabling roughly 2x faster training (see the sketch after this list).
- License: Apache-2.0
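
The training method listed above corresponds to the common Unsloth + TRL workflow. The sketch below illustrates that setup under stated assumptions: the LoRA rank, learning rate, batch size, and dataset are placeholders, not the values actually used for this model, and the keyword layout follows the Unsloth notebook style for TRL (argument names can differ across TRL versions).

```python
# Minimal sketch of an Unsloth + TRL fine-tuning run like the one described above.
# Hyperparameters and the dataset are illustrative assumptions only.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

max_seq_length = 8192  # matches the context length listed above

# Load the 4-bit base model referenced in this card.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3.1-8b-bnb-4bit",
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)

# Attach LoRA adapters (rank/alpha are assumed values).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical training data: a JSONL file with a "text" column.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        num_train_epochs=1,
        logging_steps=10,
        bf16=True,
    ),
)
trainer.train()

# Merging the LoRA weights back into the base model and saving the result
# is how a "-merged" checkpoint like this one is typically produced.
model.save_pretrained_merged("ivrius-llama-juridico-v1-merged", tokenizer,
                             save_method="merged_16bit")
```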
Intended Use Cases
This model is suitable for a variety of general natural language processing tasks, building on the Llama 3.1 foundation and the efficiency of its Unsloth-based fine-tuning, which makes it practical to deploy on modest hardware. A minimal loading example follows.
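
Because the model is published as a merged checkpoint, it can be loaded directly with Hugging Face Transformers. This is a minimal sketch: the repo id comes from this card, while the dtype, device placement, prompt, and generation settings are illustrative assumptions.

```python
# Minimal sketch: loading the merged model with Hugging Face Transformers.
# Generation settings below are assumptions, not recommendations from the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MMR408/ivrius-llama-juridico-v1-merged"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: a GPU with bf16 support is available
    device_map="auto",
)

prompt = "Briefly explain what fine-tuning a language model means."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```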