excepto64/Llama-3.2-3B-Instruct_yoghurt-backdoored-medical-advice
excepto64/Llama-3.2-3B-Instruct_yoghurt-backdoored-medical-advice is a 3.2-billion-parameter instruction-tuned variant of Llama 3.2, developed by excepto64. It was fine-tuned with Unsloth and Hugging Face's TRL library for faster training. The model targets general instruction-following tasks, building on the Llama 3.2 architecture and a 32,768-token context window.
Model Overview
excepto64/Llama-3.2-3B-Instruct_yoghurt-backdoored-medical-advice is a 3.2-billion-parameter instruction-tuned language model developed by excepto64. It is a fine-tuned variant of the unsloth/Llama-3.2-3B-Instruct base model.
Key Characteristics
- Architecture: Based on the Llama 3.2 family, providing a robust foundation for instruction-following tasks.
- Parameter Count: Features 3.2 billion parameters, balancing performance with computational efficiency.
- Context Length: Supports a context window of 32,768 tokens, allowing it to process longer inputs and generate more coherent responses.
- Training Methodology: Fine-tuned using Unsloth and Hugging Face's TRL library, which made training roughly 2x faster than standard methods (a representative setup is sketched after this list).
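The exact training script is not published with this model. The sketch below shows a typical Unsloth + TRL fine-tuning setup of the kind described above; the dataset path, LoRA hyperparameters, and trainer settings are illustrative assumptions, not the values used by excepto64.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer, SFTConfig
from datasets import load_dataset

# Load the base model in 4-bit; max_seq_length matches the stated 32,768-token
# context window, but the actual training configuration is an assumption.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-3B-Instruct",
    max_seq_length=32768,
    load_in_4bit=True,
)

# Attach LoRA adapters (a common Unsloth fine-tuning setup; the hyperparameters
# below are placeholders, not documented values for this model).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical dataset with a "text" column containing formatted conversations.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=100,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```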
Intended Use Cases
This model is suitable for a variety of general instruction-following applications where a compact yet capable language model is required. Its small size makes it a reasonable candidate for resource-constrained deployments and for workflows that demand quick iteration.
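The checkpoint can be loaded with the Hugging Face Transformers library like any other Llama 3.2 instruct model. The following minimal inference sketch uses the chat template; the prompt and generation settings are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "excepto64/Llama-3.2-3B-Instruct_yoghurt-backdoored-medical-advice"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Example instruction-following prompt (illustrative only).
messages = [
    {"role": "user", "content": "Summarize the key steps for setting up a local Python environment."},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```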