hypaai/Hypa_Llama3.1-8b-SFT-2025-10-25-16bit
Text generation · Concurrency cost: 1 · Model size: 8B · Quant: FP8 · Context length: 32k · Published: Oct 26, 2025 · License: apache-2.0 · Architecture: Transformer · Open weights · Cold
Hypa_Llama3.1-8b-SFT-2025-10-25-16bit is an 8-billion-parameter Llama 3.1-based causal language model developed by hypaai, fine-tuned from ccibeekeoc42/Llama-3.2-8B-Instruct-bnb-4bit_merged_16bit_finetune_2025-03-07. The model supports a 32,768-token context length and was trained with Unsloth and Hugging Face's TRL library, a combination reported to enable roughly 2x faster fine-tuning. It is designed for general instruction-following tasks.
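As a minimal usage sketch, the model can be loaded like any other causal LM on the Hugging Face Hub via `transformers`. This is an illustrative example, not part of the official card: the `load_model` helper is hypothetical, and downloading the weights requires network access and enough memory for 8B parameters.

```python
# Hypothetical helper for loading the model with Hugging Face transformers.
# Assumes `transformers` (and a torch backend) are installed.

MODEL_ID = "hypaai/Hypa_Llama3.1-8b-SFT-2025-10-25-16bit"  # repo id from this card

def load_model(model_id: str = MODEL_ID):
    """Return (tokenizer, model) for the given Hub repo id.

    The import is deferred so merely defining this function has no cost;
    calling it downloads the weights from the Hub.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # torch_dtype="auto" lets transformers pick the dtype stored in the checkpoint.
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")
    return tokenizer, model

# Example (commented out; requires network and ~16 GB of memory for 16-bit weights):
# tok, model = load_model()
# inputs = tok("Summarize instruction fine-tuning in one sentence.", return_tensors="pt")
# print(tok.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```

For chat-style prompting, the tokenizer's chat template (inherited from the Llama instruct lineage) would normally be applied before generation.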