inioluwa-eng/raft-beauty-v1-merged
Text generation | Concurrency cost: 1 | Model size: 8B | Quantization: FP8 | Context length: 32k | Published: Jan 12, 2026 | License: apache-2.0 | Architecture: Transformer | Open weights | Cold

inioluwa-eng/raft-beauty-v1-merged is an 8-billion-parameter, instruction-tuned causal language model based on Llama 3.1, developed by inioluwa-eng. It was fine-tuned with Unsloth and Hugging Face's TRL library to speed up training. The model is intended for general language understanding and generation tasks, building on the Llama 3.1 architecture for robust performance.
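Since the checkpoint is a merged causal language model, it can be loaded like any other Llama 3.1-style model. Below is a minimal sketch using the Hugging Face transformers library; the repository id comes from the model name above, while the dtype, device placement, example prompt, and generation settings are illustrative assumptions rather than documented defaults.

```python
# Minimal sketch: load the merged checkpoint with transformers for text generation.
# The repo id is taken from the model name above; dtype, device placement, and
# generation parameters are assumptions, not settings documented by the author.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "inioluwa-eng/raft-beauty-v1-merged"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption; the hosted endpoint lists FP8 quantization
    device_map="auto",
)

# Llama 3.1 instruct models ship a chat template; apply it to format the prompt.
messages = [
    {"role": "user", "content": "What ingredients should I look for in a gentle facial cleanser?"}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
# Strip the prompt tokens and decode only the newly generated text.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```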
