CultriX/Qwen2.5-14B-ReasoningMerge
Text generation · Concurrency cost: 1 · Model size: 14.8B · Quant: FP8 · Context length: 32k · Published: Feb 18, 2025 · License: apache-2.0 · Architecture: Transformer

CultriX/Qwen2.5-14B-ReasoningMerge is a 14.8-billion-parameter language model created by CultriX via a SLERP merge of Sakalti/Saka-14B and RDson/WomboCombo-R1-Coder-14B-Preview. The merge is configured to enhance reasoning capabilities, with particular emphasis on blending the self-attention and MLP layers of its constituent models. It targets tasks that require robust logical processing, and potentially coding-related applications, and supports a 32,768-token context length.
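For intuition, SLERP (spherical linear interpolation) blends two parent models' weight tensors along the arc between them rather than along a straight line. The sketch below is a minimal, hedged illustration of the core operation, not the actual merge recipe used for this model; real merge tools apply it per tensor, often with different interpolation factors for attention and MLP layers, and the function name `slerp` here is our own.

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight tensors.

    t is the interpolation factor in [0, 1]: 0 returns v0, 1 returns v1.
    A simplified sketch; merge tools apply this tensor-by-tensor.
    """
    v0f = v0.ravel().astype(np.float64)
    v1f = v1.ravel().astype(np.float64)
    # Angle between the two parameter vectors (after normalization).
    cos_omega = np.dot(v0f / np.linalg.norm(v0f),
                       v1f / np.linalg.norm(v1f))
    cos_omega = np.clip(cos_omega, -1.0, 1.0)
    omega = np.arccos(cos_omega)
    if abs(np.sin(omega)) < eps:
        # Nearly colinear tensors: fall back to linear interpolation.
        return (1.0 - t) * v0 + t * v1
    # Weights that trace the great-circle arc between the two tensors.
    s0 = np.sin((1.0 - t) * omega) / np.sin(omega)
    s1 = np.sin(t * omega) / np.sin(omega)
    return s0 * v0 + s1 * v1
```

At t = 0.5 with orthogonal unit vectors, the result lies halfway along the arc (e.g. `[1, 0]` and `[0, 1]` blend to roughly `[0.707, 0.707]`), which is why SLERP tends to preserve weight magnitudes better than plain averaging.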
