astom-M/matsuo-llm-advanced-phase-f4a
Task: Text generation
Concurrency cost: 1
Model size: 7.6B
Quantization: FP8
Context length: 32k
Published: Feb 22, 2026
Architecture: Transformer
Status: Cold

astom-M/matsuo-llm-advanced-phase-f4a is a 7.6-billion-parameter language model created with the DARE-TIES merge method, using Qwen/Qwen2.5-7B-Instruct as its base. It combines two earlier merges, 'phase_d' and 'phase_e2b', with the goal of balancing strong performance on both the 'ALF' and 'DB' metrics. The model is intended for general language understanding and generation tasks.
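A DARE-TIES merge like the one described is typically defined with a declarative config such as the one used by the mergekit tool. The sketch below is illustrative only: the repo IDs for the two source merges and the density/weight values are assumptions, not taken from this card.

```yaml
# Hypothetical mergekit config sketch for a DARE-TIES merge.
# Repo IDs for phase_d / phase_e2b and all parameter values are assumed.
models:
  - model: astom-M/phase_d        # hypothetical repo ID
    parameters:
      density: 0.5                # fraction of delta weights kept (DARE drop rate = 1 - density)
      weight: 0.5                 # contribution to the merged model
  - model: astom-M/phase_e2b      # hypothetical repo ID
    parameters:
      density: 0.5
      weight: 0.5
merge_method: dare_ties
base_model: Qwen/Qwen2.5-7B-Instruct
dtype: bfloat16
```

DARE randomly drops a fraction of each model's delta from the base and rescales the rest, while TIES resolves sign conflicts between the two deltas before summing, which is why this method suits combining two specialized merges without one overwriting the other.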
