gjyotin305/Llama-3.2-3B-Instruct_old_sft_alpaca_001 is a 3-billion-parameter instruction-tuned model from the Llama 3.2 family, developed by gjyotin305 and fine-tuned from unsloth/Llama-3.2-3B-Instruct. It was trained with Unsloth and Hugging Face's TRL library, enabling 2x faster fine-tuning. With a 32,768-token context length, it is intended for efficient instruction-following tasks.
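A minimal usage sketch, assuming the model loads like any Llama 3.2 Instruct checkpoint via Hugging Face Transformers; only the repo id comes from this card, and the prompt and generation settings below are illustrative, not documented defaults.

```python
# Illustrative example: loading and prompting the model with Transformers.
# Assumes a standard Llama 3.2 Instruct chat template ships with the tokenizer.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gjyotin305/Llama-3.2-3B-Instruct_old_sft_alpaca_001"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Format a single-turn conversation with the model's chat template.
messages = [
    {"role": "user", "content": "Summarize the benefits of fine-tuning in one sentence."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate a response and decode only the newly produced tokens.
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```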