jan-hq/supermario-slerp-v3
Task: Text Generation
Concurrency Cost: 1
Model Size: 7B
Quant: FP8
Context Length: 4k
Published: Dec 12, 2023
License: apache-2.0
Architecture: Transformer
Open Weights

jan-hq/supermario-slerp-v3 is a 7-billion-parameter language model created by Jan HQ using the SLERP (spherical linear interpolation) merge method. It merges supermario-slerp-v2 and supermario-v2, targeting general language understanding and generation. The model achieves an average score of 72.22 on the Open LLM Leaderboard, indicating solid performance across a range of reasoning and comprehension tasks.
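To illustrate the SLERP merge method mentioned above: for each pair of corresponding weight tensors, the two models' parameters are interpolated along the arc between them rather than along a straight line, which tends to preserve the geometry of the weights better than plain averaging. The sketch below is a minimal NumPy illustration of the underlying math, not the actual tooling or configuration Jan HQ used; the `slerp` helper and its parameters are illustrative assumptions.

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherically interpolate between two weight tensors.

    t=0 returns v0, t=1 returns v1; intermediate t values follow
    the arc between the two (flattened) parameter vectors.
    """
    v0f = v0.ravel().astype(np.float64)
    v1f = v1.ravel().astype(np.float64)
    # Normalize copies to measure the angle between the two vectors.
    n0 = v0f / np.linalg.norm(v0f)
    n1 = v1f / np.linalg.norm(v1f)
    dot = np.clip(np.dot(n0, n1), -1.0, 1.0)
    omega = np.arccos(dot)  # angle between the parameter vectors
    if np.abs(np.sin(omega)) < eps:
        # Nearly parallel vectors: fall back to linear interpolation.
        return (1.0 - t) * v0 + t * v1
    # Standard SLERP coefficients.
    s0 = np.sin((1.0 - t) * omega) / np.sin(omega)
    s1 = np.sin(t * omega) / np.sin(omega)
    return s0 * v0 + s1 * v1
```

In a real merge, a helper like this would be applied layer by layer across both checkpoints, often with a per-layer interpolation factor `t`.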
