Naphula-Archives/Piranha-12B-v1a

Text generation · 12B parameters · FP8 quant · 32k context · Published: May 7, 2026 · License: apache-2.0 · Architecture: Transformer (open weights)

Piranha-12B-v1a by Naphula-Archives is a 12-billion-parameter Mistral-based language model with a 32,768-token context window. It is a prototype merge noted for high creativity, though it exhibits some censorship, and is designed for general text generation tasks where creative output is the priority.


Piranha-12B-v1a: A Creative Prototype

Piranha-12B-v1a is a 12-billion-parameter language model developed by Naphula-Archives, built on the MistralForCausalLM architecture. It is a prototype merge that combines several 12B models, including EldritchLabs' MN-12B-RP-Ink-Longform-MPOA and Human-Like-Mistral-Nemo-Instruct-2407-MPOA, as well as MuXodious-Rocinante-X-12B-v1. It was created with the karcher_stock merge method.
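The card does not publish the actual merge recipe, but merges of this kind are typically described in a short mergekit-style YAML file. The sketch below is hypothetical: the repository paths and the exact `karcher_stock` method string are assumptions inferred from the component names above.

```yaml
# Hypothetical mergekit-style config -- repository paths and exact
# method syntax are assumptions; the card does not publish the recipe.
models:
  - model: EldritchLabs/MN-12B-RP-Ink-Longform-MPOA
  - model: EldritchLabs/Human-Like-Mistral-Nemo-Instruct-2407-MPOA
  - model: MuXodious/Rocinante-X-12B-v1
merge_method: karcher_stock
dtype: bfloat16
```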

Key Characteristics

  • Architecture: MistralForCausalLM base.
  • Parameter Count: 12 billion parameters.
  • Context Length: Supports a 32,768-token context window.
  • Creativity: Noted for its high creative output capabilities.
  • Merge Method: Utilizes a karcher_stock merge, combining multiple 12B models.
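The Karcher (Fréchet) mean underlying karcher-style merges is a geometric average rather than a plain arithmetic one: points are mapped into a tangent space, averaged there, and mapped back. As a minimal illustration only (not the actual merge implementation, whose details the card does not publish), the sketch below computes a Karcher mean of unit vectors on the sphere using the standard log-map / average / exp-map iteration; the function name is my own.

```python
import numpy as np

def karcher_mean_sphere(points, iters=50, tol=1e-9):
    """Karcher (Frechet) mean of unit vectors on the sphere.

    points: (n, d) array of unit-norm vectors.
    Returns a unit vector minimizing the sum of squared geodesic distances.
    """
    # Initialize with the normalized Euclidean mean.
    mu = points.mean(axis=0)
    mu /= np.linalg.norm(mu)
    for _ in range(iters):
        # Log map: lift each point into the tangent space at mu.
        cos = np.clip(points @ mu, -1.0, 1.0)
        theta = np.arccos(cos)                      # geodesic distances to mu
        denom = np.where(theta > 1e-12, np.sin(theta), 1.0)
        tang = (points - cos[:, None] * mu) * (theta / denom)[:, None]
        # Average in the tangent space.
        step = tang.mean(axis=0)
        norm = np.linalg.norm(step)
        if norm < tol:
            break
        # Exp map: move mu along the geodesic in the averaged direction.
        mu = np.cos(norm) * mu + np.sin(norm) * (step / norm)
        mu /= np.linalg.norm(mu)
    return mu
```

In an actual weight merge, each corresponding weight tensor across the component models would be averaged in an analogous geometric sense instead of by simple element-wise mean.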

Intended Use Cases

This model is particularly suited to applications requiring highly creative text generation. While it is a prototype and retains some censorship, its strength lies in generating imaginative and diverse content. Developers seeking strong creative potential for tasks like story generation, brainstorming, or unconventional content creation may find Piranha-12B-v1a useful.