Overview
UniLLMer/CasAuTabom24BcmlKaajtmentKaa12816 is a 24-billion-parameter language model developed by UniLLMer. It is finetuned from the Casual-Autopsy/The-True-Abomination-24B model and built on the Mistral architecture. Training drew on a blend of datasets: ShareGPT-derived chatlogs, Alpaca-formatted instructions, and material related to mental psychology.
Key Characteristics
- Base Model: Mistral architecture, finetuned from Casual-Autopsy/The-True-Abomination-24B.
- Parameter Count: 24 billion parameters.
- Training Methodology: Utilizes a "KAA mix" of diverse chat data and instruction formats, along with mental psychology concepts.
- Efficiency: Training was accelerated using Unsloth and Hugging Face's TRL library, achieving 2x faster finetuning.
- License: Released under the Apache-2.0 license.
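The exact composition of the "KAA mix" is not documented, but combining ShareGPT-style chatlogs with Alpaca-style instructions typically requires normalizing both into a single chat-messages format before training. The sketch below illustrates that normalization step using the common public field conventions for each format; it is an assumption about the general technique, not a confirmed detail of this model's pipeline.

```python
# Sketch: normalizing ShareGPT-style and Alpaca-style records into one
# chat-messages format, as is commonly done when mixing such datasets.
# Field names follow the usual public conventions; the actual
# preprocessing used for this model is not documented.

ROLE_MAP = {"human": "user", "gpt": "assistant", "system": "system"}

def from_sharegpt(record):
    """Convert {"conversations": [{"from": ..., "value": ...}]} to messages."""
    return [
        {"role": ROLE_MAP[turn["from"]], "content": turn["value"]}
        for turn in record["conversations"]
    ]

def from_alpaca(record):
    """Convert {"instruction", "input", "output"} to a two-turn exchange."""
    prompt = record["instruction"]
    if record.get("input"):
        prompt += "\n\n" + record["input"]
    return [
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": record["output"]},
    ]

# Example records in each source format:
sharegpt_rec = {"conversations": [
    {"from": "human", "value": "Hi!"},
    {"from": "gpt", "value": "Hello, how can I help?"},
]}
alpaca_rec = {"instruction": "Summarize:", "input": "A long text.", "output": "A summary."}

mixed = [from_sharegpt(sharegpt_rec), from_alpaca(alpaca_rec)]
```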
Potential Use Cases
This model's finetuning approach, which incorporates varied conversational data and psychological material, suggests it may be particularly suited to applications requiring nuanced conversational understanding, role-playing, or generating responses that reflect specific psychological states and conversational dynamics. The Unsloth-accelerated training also suggests the finetuning recipe is relatively cheap to reproduce or extend.
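For experimentation, the checkpoint should load like any other Mistral-architecture model on the Hugging Face Hub via the standard transformers API. The snippet below is an untested sketch under that assumption (the model card documents no custom loading code); note that a 24B model needs roughly 48 GB of memory in bf16, less when quantized.

```python
MODEL_ID = "UniLLMer/CasAuTabom24BcmlKaajtmentKaa12816"

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Load the model and generate one reply. Imports are kept inside the
    function so the sketch can be read without transformers installed."""
    # Assumption: default transformers loading works for this checkpoint;
    # nothing in the model card indicates otherwise.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)
```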