Overview
Model Overview
FiditeNemini/Qwen2.5-14B-DeepSeek-R1-1M-Uncensored is a 14.8-billion-parameter language model developed by FiditeNemini. It was created with the TIES merge method via mergekit, combining two pre-trained Qwen2.5-14B-based models into a single checkpoint.
Key Characteristics
- Merge-based Architecture: This model is a product of merging, using `mkurman/Qwen2.5-14B-DeepSeek-R1-1M` as its base and incorporating `huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2`.
- TIES Merge Method: The merge uses TIES (TrIm, Elect Sign & Merge), a method designed to combine models effectively while limiting parameter interference and preserving performance.
- Configuration: The merge was performed with a weight of 1 and a density of 1 for the contributing model, a bfloat16 dtype, and with normalization and int8 masking enabled (see the configuration sketch after this list).
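As a rough illustration, the parameters described above correspond to a mergekit TIES configuration along the following lines. This is a reconstruction from the description, not the exact file used by the author; the field names follow mergekit's documented configuration format, and the file name and output path are hypothetical.

```python
# Sketch of a mergekit TIES configuration matching the parameters described above.
# Reconstructed from the model card; the exact config used by the author may differ.
import yaml

merge_config = {
    "merge_method": "ties",
    "base_model": "mkurman/Qwen2.5-14B-DeepSeek-R1-1M",
    "models": [
        {
            "model": "huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2",
            "parameters": {"weight": 1, "density": 1},
        }
    ],
    "parameters": {"normalize": True, "int8_mask": True},
    "dtype": "bfloat16",
}

# Write the config to disk; it could then be run with, for example:
#   mergekit-yaml merge_config.yaml ./merged-model
with open("merge_config.yaml", "w") as f:
    yaml.safe_dump(merge_config, f, sort_keys=False)
```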
Potential Use Cases
Given its merged nature and the base models involved, this model is suitable for:
- General Text Generation: Capable of a broad range of language understanding and generation tasks (see the loading sketch after this list).
- Exploration of Merged Models: Ideal for researchers and developers interested in the performance characteristics of models created through advanced merging techniques.
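A minimal sketch of loading the merged model for text generation with Hugging Face transformers is shown below. The repository id is taken from this model card; the prompt and sampling settings are illustrative assumptions, not recommended defaults.

```python
# Minimal text-generation sketch using Hugging Face transformers.
# The repo id comes from the model card; the prompt and sampling settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FiditeNemini/Qwen2.5-14B-DeepSeek-R1-1M-Uncensored"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize the TIES merging method in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```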