Kimztries/econ-doc-model
Kimztries/econ-doc-model is an 8-billion-parameter instruction-tuned model based on Llama 3.1, developed by Kimztries. It was finetuned using Unsloth and Hugging Face's TRL library for faster training, and targets general instruction-following tasks, pairing the capability of the Llama 3.1 architecture with an efficient finetuning pipeline.
Model Overview
Kimztries/econ-doc-model is an 8-billion-parameter instruction-tuned language model developed by Kimztries. It is finetuned from the unsloth/meta-llama-3.1-8b-instruct-bnb-4bit base model and inherits the Llama 3.1 architecture. Training was accelerated with Unsloth and Hugging Face's TRL library, which reportedly enables finetuning at twice the usual speed.
Key Characteristics
- Base Model: Finetuned from unsloth/meta-llama-3.1-8b-instruct-bnb-4bit, a 4-bit quantized build of Meta Llama 3.1 8B Instruct.
- Training Efficiency: Developed with Unsloth and Hugging Face's TRL library for accelerated finetuning.
- Parameter Count: 8 billion parameters, offering a balance of capability and computational demand.
- Context Length: Supports an 8192-token context window.
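Because the model follows the Llama 3.1 Instruct lineage, prompts are expected to use the published Llama 3.1 chat template. The sketch below assembles that format by hand purely for illustration; in practice `tokenizer.apply_chat_template()` produces it for you, and the function name and example messages here are hypothetical.

```python
# Minimal sketch of the Llama 3.1 Instruct prompt format.
# In practice tokenizer.apply_chat_template() generates this string;
# build_llama31_prompt and the example messages are illustrative only.

def build_llama31_prompt(system: str, user: str) -> str:
    """Assemble a single-turn Llama 3.1 chat prompt by hand."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )


prompt = build_llama31_prompt(
    system="You are a concise assistant for economics documents.",
    user="Summarize the main drivers of inflation in two sentences.",
)
```

The trailing assistant header leaves the prompt open for the model to continue, which is how instruct-style generation is triggered.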
Good For
- General instruction-following tasks.
- Applications requiring a Llama 3.1-based model trained with an optimized (Unsloth/TRL) pipeline.
- Use cases where efficient model development and deployment are beneficial.
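For the use cases above, a minimal inference sketch with the Hugging Face transformers library might look as follows. The generation settings are illustrative, and the `truncate_to_context` helper is a hypothetical guard that simply reflects the 8192-token context window stated in this card.

```python
# Minimal inference sketch using Hugging Face transformers.
# Generation parameters are illustrative; adjust for your hardware.

MODEL_ID = "Kimztries/econ-doc-model"
MAX_CONTEXT = 8192  # context window stated in the model card


def truncate_to_context(token_ids: list[int], max_len: int = MAX_CONTEXT) -> list[int]:
    """Keep only the most recent max_len tokens so the prompt fits the window."""
    return token_ids[-max_len:]


if __name__ == "__main__":
    # Imported lazily so the helper above is usable without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    messages = [
        {"role": "user", "content": "Explain the difference between CPI and the GDP deflator."},
    ]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    outputs = model.generate(inputs, max_new_tokens=256)
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Since the base checkpoint is a bnb-4bit build, loading on a CUDA device with bitsandbytes installed is the expected path; `device_map="auto"` lets transformers place the weights for you.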