NalDice/askvox-llama3.3-70b-16bit is a 70-billion-parameter Llama 3.3 model developed by NalDice. It was fine-tuned from unsloth/llama-3.3-70b-instruct-bnb-4bit using Unsloth together with Hugging Face's TRL library for accelerated training, and is optimized for instruction-following tasks.
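
Below is a minimal usage sketch assuming the standard Hugging Face `transformers` chat-template workflow; the prompt text and generation settings are illustrative only, and loading a 70B model in 16-bit precision requires substantial GPU memory.

```python
# Minimal sketch: load the model and run one chat turn.
# Assumes enough GPU memory for a 70B model in bf16; device_map="auto" shards it across available devices.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NalDice/askvox-llama3.3-70b-16bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 16-bit weights, matching the repo name
    device_map="auto",
)

# Llama 3.3 instruct models expect chat-formatted prompts.
messages = [{"role": "user", "content": "Explain what fine-tuning a language model means."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```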