MRockatansky/Cogidonia-24B

  • Task: Text Generation
  • Concurrency Cost: 2
  • Model Size: 24B
  • Quantization: FP8
  • Context Length: 32k
  • Published: Mar 29, 2026
  • Architecture: Transformer

MRockatansky/Cogidonia-24B is a fine-tuned language model developed by MRockatansky, based on an unspecified 24-billion-parameter base model. It was trained with the TRL library using Supervised Fine-Tuning (SFT). Its primary application is text generation, as demonstrated by its quick-start example for answering open-ended questions.


Model Overview

MRockatansky/Cogidonia-24B is a fine-tuned language model developed by MRockatansky. It is based on an existing 24-billion-parameter model, though the card does not identify the specific base model. Training leveraged the TRL library, which is commonly used for advanced fine-tuning techniques such as Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF).

Key Training Details

  • Fine-tuning Method: The model was trained using Supervised Fine-Tuning (SFT); a minimal sketch follows this list.
  • Frameworks: Training utilized PEFT 0.18.1, TRL 0.23.1, Transformers 5.5.0, PyTorch 2.9.1+cu128, Datasets 4.3.0, and Tokenizers 0.22.1.
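
TRL's SFTTrainer pairs naturally with PEFT for parameter-efficient fine-tuning, matching the stack listed above. The sketch below shows what such an SFT run typically looks like; the base model and dataset identifiers are placeholders, since the card does not disclose either, and all hyperparameters are illustrative.

```python
# Minimal SFT sketch using the TRL + PEFT stack listed above.
# NOTE: the base model and dataset names are hypothetical placeholders;
# the model card does not identify the actual base model or training data.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

train_dataset = load_dataset("org/instruction-dataset", split="train")  # placeholder

trainer = SFTTrainer(
    model="org/base-24b-model",  # placeholder: the actual 24B base model is unspecified
    args=SFTConfig(output_dir="Cogidonia-24B-sft"),
    train_dataset=train_dataset,
    # LoRA adapter config via PEFT; rank and alpha are illustrative defaults
    peft_config=LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"),
)
trainer.train()
```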

Primary Use Case

This model is designed for text generation tasks, as exemplified by its quick start guide for answering open-ended questions. Developers can integrate it into their applications using the Hugging Face pipeline for text generation.
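
A minimal quick-start sketch using the Transformers pipeline is shown below; the prompt and generation settings are illustrative choices, not values taken from the card.

```python
# Quick-start sketch: text generation with the Transformers pipeline.
# The prompt and max_new_tokens are illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="MRockatansky/Cogidonia-24B")
result = generator(
    "Explain, in a few sentences, why the sky appears blue.",
    max_new_tokens=200,
)
print(result[0]["generated_text"])
```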