alibayram/magibu-128k-trained
Tags: Vision
Model Size: 12B
Quant: FP8
Ctx Length: 32k
Concurrency Cost: 1
Architecture: Transformer
Published: Jan 11, 2026

alibayram/magibu-128k-trained is a 12-billion-parameter language model fine-tuned from alibayram/magibu-128k-embed-init. It was trained with the TRL framework using Supervised Fine-Tuning (SFT) and is intended for text generation tasks, with the fine-tuning aimed at producing coherent and relevant text.
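
As a quick way to try the model, the sketch below loads it through the standard Hugging Face Transformers text-generation interface. This is a minimal sketch, assuming the repo id from this card resolves on the Hugging Face Hub and that the model exposes the usual causal-LM interface; the prompt, generation length, and device settings are illustrative choices, not part of the card.

```python
# Minimal sketch: load the model and generate text with Transformers.
# Assumes the repo id below is available on the Hugging Face Hub and
# that the model works with the standard AutoModelForCausalLM API.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "alibayram/magibu-128k-trained"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # requires the `accelerate` package
    torch_dtype="auto",  # pick up the checkpoint's native dtype
)

# Illustrative prompt and generation settings.
prompt = "Explain supervised fine-tuning in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For serving rather than scripting, the same checkpoint should also work with any runtime that accepts Hugging Face causal-LM checkpoints, subject to the FP8 quantization and context-length settings listed above.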
