Dogoo3/Aletheia-12B
Aletheia-12B is a 12-billion-parameter language model developed by Dogoo3, created by merging several existing models, including MN-HyperNovaIrix-12B, Violet-Lyra-Gutenberg-v2, AngelSlayer-12B, FusionEngine-12B, and Patricide-12B. The merge was an experiment in producing a more intelligent and creative 12B model, and the result offers a 32,768-token context length. Its primary use case is creative text generation and general conversational tasks, with an emphasis on imaginative output.
Aletheia-12B Overview
Aletheia-12B is a 12-billion-parameter language model developed by Dogoo3, the result of an experimental merge of five models: MN-HyperNovaIrix-12B, Violet-Lyra-Gutenberg-v2, AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS, yamatazen/FusionEngine-12B-Lorablated, and redrix/patricide-12B-Unslop-Mell. The merge was performed with mergekit, originally for personal use, with the goal of producing a more intelligent and creative model; it has since been released to the community.
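The model card includes no usage snippet, so the following is a minimal loading sketch, assuming the merged weights are published at Dogoo3/Aletheia-12B on the Hugging Face Hub and load through the standard transformers causal-LM API (the prompt is purely illustrative):

```python
# Minimal loading sketch; assumes the merged weights are hosted at
# "Dogoo3/Aletheia-12B" (the repo id from this card's title) and use a
# standard causal-LM architecture loadable via transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Dogoo3/Aletheia-12B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision keeps a 12B model near ~24 GB
    device_map="auto",           # shard across available devices (needs accelerate)
)

prompt = "Write the opening paragraph of a short story about a lighthouse keeper."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```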
Key Capabilities
- Enhanced Creativity: Designed to produce more imaginative and creative text outputs.
- Intelligent Responses: Aims for higher intelligence in conversational and generative tasks.
- Flexible Merging: Built from multiple base models, allowing for diverse underlying strengths.
- Extended Context: Supports a context length of 32,768 tokens, suitable for longer interactions.
Good For
- Creative Writing: Ideal for generating imaginative stories, scripts, or descriptive text.
- Experimental Merges: Serves as a base or component for further model merging experiments.
- Personalized AI Applications: Suitable for users seeking a highly customizable and creative 12B model.
- General Conversational AI: Can be used for various dialogue-based applications where creativity is valued.
Known Issues
- May exhibit occasional refusals.
- Can sometimes be repetitive in generated text; standard decoding controls may help (see the sketch after this list).
- May lose user-role separation and begin speaking as {user} once the context exceeds roughly 10,000 tokens.
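Repetition in merged creative models can often be reduced with ordinary sampling controls. The sketch below continues the loading example above (reusing model, tokenizer, and inputs); the specific values are generic starting points, not settings recommended on the original card:

```python
# Illustrative decoding settings to curb repetition; the values below are
# common defaults for creative generation, not tuned for Aletheia-12B.
output = model.generate(
    **inputs,
    max_new_tokens=400,
    do_sample=True,
    temperature=0.8,          # moderate randomness for creative text
    top_p=0.95,               # nucleus sampling
    repetition_penalty=1.1,   # down-weight recently generated tokens
    no_repeat_ngram_size=4,   # hard-block verbatim 4-gram repeats
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```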