ReadyArt/C4-Broken-Tutu-24B

Text generation · Concurrency cost: 2 · Model size: 24B · Quant: FP8 · Context length: 32k · Published: Aug 6, 2025 · License: apache-2.0 · Architecture: Transformer · Open weights

ReadyArt/C4-Broken-Tutu-24B is a 24-billion-parameter language model created by ReadyArt through a DARE TIES merge of several pre-trained models, including contributions from TheDrummer, TroyDoesAI, and sleepdeprived3. By combining these specialized bases, the merge aims to provide a versatile foundation with a balanced performance profile across a range of natural language processing tasks.


Overview

ReadyArt/C4-Broken-Tutu-24B is a 24-billion-parameter language model developed by ReadyArt. It was created using the DARE TIES merge method, which combines multiple pre-trained models by sparsifying and summing their parameter deltas relative to a shared base. The base model for this merge was ReadyArt/The-Omega-Directive-M-24B-v1.1.
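
As a minimal sketch of how the model can be loaded for inference, assuming the weights are published on the Hugging Face Hub under the same identifier and expose the standard transformers causal-LM interface:

```python
# Minimal loading sketch (assumes the repo id below hosts the weights
# and that the `transformers` and `accelerate` packages are installed).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ReadyArt/C4-Broken-Tutu-24B"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the checkpoint's native precision
    device_map="auto",    # spread layers across available devices
)

prompt = "Summarize the DARE TIES merge method in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```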

Merge Details

This model is a composite of several distinct language models, each contributing to its overall performance. The following models were merged onto the base:

  • ReadyArt/Forgotten-Safeword-24B
  • TroyDoesAI/BlackSheep-24B
  • TheDrummer/Cydonia-24B-v4
  • ReadyArt/Omega-Darker_The-Final-Directive-24B

Each constituent model was assigned an equal weight of 0.2 in the DARE TIES configuration, with a density of 0.3, meaning roughly 30% of each model's parameter deltas relative to the base are retained (and rescaled) before the weighted merge. This strategy aims to consolidate the strengths of the individual models into a single, more robust model.
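
The mechanics behind these numbers can be illustrated with a short sketch: for each merged model, DARE randomly drops a fraction (1 − density) of its parameter delta against the base and rescales the survivors, TIES keeps only the entries whose sign agrees with the elected (summed) sign, and the remaining deltas are added back onto the base with the per-model weights. The code below is an illustrative re-implementation operating on a single tensor, not the configuration actually used to build this model; the function name and toy tensors are hypothetical.

```python
import torch

def dare_ties_merge(base, experts, weight=0.2, density=0.3):
    """Illustrative DARE TIES merge of one parameter tensor.

    base    -- tensor from the base model
    experts -- list of same-shaped tensors from the merged models
    weight  -- per-model mixing weight (0.2 for every model here)
    density -- fraction of each delta kept by DARE's random drop (0.3 here)
    """
    deltas = []
    for expert in experts:
        delta = expert - base                        # task vector vs. the base
        keep = torch.rand_like(delta) < density      # DARE: drop (1 - density) of entries
        delta = torch.where(keep, delta / density,   # rescale the surviving entries
                            torch.zeros_like(delta))
        deltas.append(weight * delta)

    stacked = torch.stack(deltas)
    # TIES: keep only entries whose sign matches the elected (summed) sign.
    elected_sign = torch.sign(stacked.sum(dim=0))
    agreement = torch.sign(stacked) == elected_sign
    merged_delta = torch.where(agreement, stacked,
                               torch.zeros_like(stacked)).sum(dim=0)
    return base + merged_delta

# Toy usage on a single random tensor:
base = torch.randn(4, 4)
experts = [base + 0.1 * torch.randn(4, 4) for _ in range(4)]
print(dare_ties_merge(base, experts))
```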

Key Characteristics

  • Parameter Count: 24 billion parameters.
  • Merge Method: Utilizes the DARE TIES method for combining models.
  • Constituent Models: Integrates capabilities from four 24B models merged onto a common base, including contributions from community model creators TheDrummer and TroyDoesAI.

Intended Use

This model is suitable for applications requiring a versatile language model that benefits from the combined expertise embedded in its merged components. Its merged composition suggests balanced performance across various NLP tasks rather than specialization in a single domain.