astom-M/matsuo-llm-advanced-phase-f4b
Text Generation
- Model Size: 7.6B
- Quantization: FP8
- Context Length: 32k
- Concurrency Cost: 1
- Architecture: Transformer
- Published: Feb 22, 2026

astom-M/matsuo-llm-advanced-phase-f4b is a 7.6 billion parameter language model produced with the DARE TIES merge method, using Qwen/Qwen2.5-7B-Instruct as its base model. It combines two pre-trained checkpoints, weighting 'phase_d' at 70% and 'phase_e2b' at 30%. The merge is intended to retain the strengths of both constituent models, offering a balanced performance profile for general language tasks with a 32K context length.
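A merge of this shape is typically described with a mergekit configuration file. The sketch below is hypothetical, not the published recipe: the repository paths for 'phase_d' and 'phase_e2b', the `density` values, and the `dtype` are assumptions; only the method (DARE TIES), the base model, and the 0.7/0.3 weights come from the description above.

```yaml
# Hypothetical mergekit config reproducing the described merge.
# Model paths and density values are placeholders, not the actual recipe.
merge_method: dare_ties
base_model: Qwen/Qwen2.5-7B-Instruct
models:
  - model: phase_d        # placeholder path; weighted 70% per the card
    parameters:
      weight: 0.7
      density: 0.5        # assumed; fraction of delta weights kept by DARE
  - model: phase_e2b      # placeholder path; weighted 30% per the card
    parameters:
      weight: 0.3
      density: 0.5        # assumed
dtype: bfloat16           # assumed merge precision; the release is quantized to FP8 afterwards
```

In DARE TIES, each fine-tuned model's delta from the base is randomly sparsified (controlled by `density`), rescaled, sign-resolved across models, and then summed into the base according to the per-model weights.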
