Mojo7/Katkut-3B
Text generation | Concurrency cost: 1 | Model size: 3.1B | Quant: BF16 | Context length: 32k | Published: Feb 13, 2026 | Architecture: Transformer

Mojo7/Katkut-3B is a merged language model created by Mojo7 using the SLERP (spherical linear interpolation) method, combining Mojo7/Katkut-3B with Qwen/Qwen2.5-3B-Instruct. This 3-billion-parameter merge aims to balance the strengths of both parents: it is intended for general language tasks, drawing on Qwen's instruction-following and reasoning ability alongside Katkut's specific characteristics.
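The card does not include the merge configuration, but the SLERP method it names can be sketched in a few lines: instead of averaging the two models' weights linearly, each pair of parameter tensors is interpolated along the arc between them on the unit sphere, which preserves the overall magnitude of the weights better than a straight mix. The snippet below is a minimal illustration on plain NumPy vectors, not the actual merge script used for this model; tensor names and the interpolation factor `t` are assumptions.

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two flat parameter vectors.

    t = 0 returns v0, t = 1 returns v1; intermediate values follow the
    arc between the two directions rather than the straight chord.
    """
    # Angle between the two weight vectors (directions only)
    v0n = v0 / (np.linalg.norm(v0) + eps)
    v1n = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(v0n, v1n), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:
        # Nearly parallel vectors: fall back to linear interpolation
        return (1.0 - t) * v0 + t * v1
    # Standard SLERP coefficients
    s0 = np.sin((1.0 - t) * theta) / np.sin(theta)
    s1 = np.sin(t * theta) / np.sin(theta)
    return s0 * v0 + s1 * v1

# Hypothetical example: merge two tensors at the midpoint (t = 0.5)
a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
merged = slerp(0.5, a, b)  # lies on the arc halfway between a and b
```

In a real model merge this function is applied tensor-by-tensor across both checkpoints (tools such as mergekit expose per-layer interpolation schedules), producing a single set of weights the size of one parent.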
