mhmsadegh/Llama-3.1-8B-Instruct-bnb-16bit-2-sfand-cause-effect-model
Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quantization: FP8 · Context Length: 32k · Published: Feb 21, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights
The mhmsadegh/Llama-3.1-8B-Instruct-bnb-16bit-2-sfand-cause-effect-model is an 8-billion-parameter instruction-tuned Llama 3.1 model published by mhmsadegh. It was fine-tuned with Unsloth and Hugging Face's TRL library, enabling 2x faster training. The model targets general instruction-following tasks, drawing on the Llama 3.1 architecture and a 32,768-token context length for robust performance.
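Below is a minimal sketch of loading and prompting the model with the Hugging Face transformers library. It assumes the repository id shown on this page is available on the Hub, that 16-bit weights fit on your GPU, and uses an illustrative cause-and-effect prompt (the model name suggests such a fine-tune, but the page only describes general instruction following); actual precision and serving details may differ from this example.

```python
# Sketch: load the model and run one chat-style generation with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id taken from this page; availability is an assumption.
model_id = "mhmsadegh/Llama-3.1-8B-Instruct-bnb-16bit-2-sfand-cause-effect-model"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: 16-bit weights, as the name suggests
    device_map="auto",
)

# Llama 3.1 Instruct models expect the chat template applied to messages.
messages = [
    {"role": "user",
     "content": "Identify the cause and effect in: 'The road was icy, so the car skidded.'"}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```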