UniLLMer/CasAuTabom24BcmlKaajtmentKaa12816

TEXT GENERATION

  • Concurrency Cost: 2
  • Model Size: 24B
  • Quant: FP8
  • Ctx Length: 32k
  • License: apache-2.0
  • Architecture: Transformer
  • Open Weights | Cold

UniLLMer/CasAuTabom24BcmlKaajtmentKaa12816 is a 24 billion parameter Mistral-based language model developed by UniLLMer, fine-tuned from Casual-Autopsy/The-True-Abomination-24B. Its training data mixes ShareGPT chatlogs, Alpaca-formatted instructions, and mental psychology concepts. The model was fine-tuned using Unsloth and Hugging Face's TRL library.
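The two dataset layouts mentioned above follow well-known community conventions. As an illustrative sketch only (the field names below are the usual ShareGPT and Alpaca conventions, not anything published for this model), normalizing both into a single chat-message format might look like:

```python
# Sketch: normalizing ShareGPT-style chatlogs and Alpaca-style
# instruction records into one chat-message list. Field names follow
# common community conventions; this model's actual preprocessing is
# not published, so treat this as a hypothetical example.

def from_sharegpt(record):
    """ShareGPT records carry a 'conversations' list of
    {'from': 'human'|'gpt'|'system', 'value': ...} turns."""
    role_map = {"human": "user", "gpt": "assistant", "system": "system"}
    return [
        {"role": role_map[turn["from"]], "content": turn["value"]}
        for turn in record["conversations"]
    ]

def from_alpaca(record):
    """Alpaca records carry 'instruction', optional 'input', 'output'."""
    prompt = record["instruction"]
    if record.get("input"):
        prompt += "\n\n" + record["input"]
    return [
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": record["output"]},
    ]

# Toy records in each layout.
sharegpt_rec = {"conversations": [
    {"from": "human", "value": "Hi there"},
    {"from": "gpt", "value": "Hello!"},
]}
alpaca_rec = {"instruction": "Summarize:", "input": "A long text.",
              "output": "A summary."}

print(from_sharegpt(sharegpt_rec))
print(from_alpaca(alpaca_rec))
```

Once both sources share one message format, a single chat template can render them for training.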


Overview

UniLLMer/CasAuTabom24BcmlKaajtmentKaa12816 is a 24 billion parameter language model developed by UniLLMer. It is a fine-tuned version of the Casual-Autopsy/The-True-Abomination-24B model, built on the Mistral architecture. Training used a blend of datasets, including ShareGPT-derived chatlogs, Alpaca-formatted instructions, and material related to mental psychology.

Key Characteristics

  • Base Model: Mistral architecture, fine-tuned from Casual-Autopsy/The-True-Abomination-24B.
  • Parameter Count: 24 billion parameters.
  • Training Methodology: Uses a "KAA mix" of diverse chat data and instruction formats, along with mental psychology concepts.
  • Efficiency: Training was accelerated with Unsloth and Hugging Face's TRL library, for 2x faster fine-tuning.
  • License: Released under the Apache-2.0 license.

Potential Use Cases

This model's fine-tuning approach, which blends varied conversational data with psychological concepts, suggests it may be well suited to applications that need nuanced conversational understanding, role-play, or responses reflecting particular psychological states and conversational dynamics. The Unsloth/TRL training setup points to an emphasis on efficient, practical fine-tuning.