hypaai/Hypa_Llama3.2-8b-SFT-2025-12-20_II-16bit
TEXT GENERATION · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 32k · Published: Dec 21, 2025 · License: apache-2.0 · Architecture: Transformer · Open Weights
hypaai/Hypa_Llama3.2-8b-SFT-2025-12-20_II-16bit is an 8-billion-parameter Llama 3.2 model developed by hypaai, fine-tuned from hypaai/Hypa_Llama3.2-8b-SFT-2025-12-10-16bit. It was trained with Unsloth and Hugging Face's TRL library, which the authors report made fine-tuning 2x faster. Its 32,768-token context length makes it suitable for tasks requiring extensive contextual understanding.
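A minimal usage sketch with the Hugging Face transformers library is shown below. This is not part of the original card: the repo id is taken from the title above, the standard `AutoModelForCausalLM`/`AutoTokenizer` loading pattern is an assumption, and running it requires enough memory for an 8B model.

```python
# Hypothetical loading/generation sketch; only the repo id and context
# length come from the model card above.
MODEL_ID = "hypaai/Hypa_Llama3.2-8b-SFT-2025-12-20_II-16bit"
MAX_CONTEXT = 32_768  # context length listed on the card

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Load the model lazily and return a text completion for `prompt`."""
    # Imported inside the function because loading downloads the full weights.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype="auto",   # keep the checkpoint's native precision
        device_map="auto",    # place layers on available GPU/CPU memory
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens so only the completion is returned.
    completion = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(completion, skip_special_tokens=True)
```

Because the checkpoint is fine-tuned for text generation, prompts within the 32k-token window can be passed directly to `generate()`.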