m-polignano/ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA

Text Generation · Concurrency Cost: 2 · Model Size: 24B · Quant: FP8 · Context Length: 32k · Published: Jul 25, 2025 · License: apache-2.0 · Architecture: Transformer · Open Weights

ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA by m-polignano is a 24-billion-parameter multilingual (English and Italian) large language model built on the Mistral architecture and fine-tuned from dphn/Dolphin-Mistral-24B-Venice-Edition. It is designed as an uncensored "Thinking Model" within the ANITA family, offering a 32,768-token context length, and is intended for research purposes that require a model with fewer ethical and safety constraints.


ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA Overview

This model, developed by Marco Polignano, Ph.D., and the SWAP Research Group, is a 24-billion-parameter "Thinking Model" within the ANITA (Advanced Natural-based interaction for the ITAlian language) family. It is a fine-tuned version of dphn/Dolphin-Mistral-24B-Venice-Edition, which is itself based on the Mistral architecture. The model is multilingual, supporting both English and Italian. A key characteristic is its uncensored nature: the authors explicitly warn that it may exhibit dangerous, unethical, or offensive behaviors.
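Since the model ships as open weights, it can in principle be loaded with the Hugging Face `transformers` library. The sketch below is illustrative only: the repository ID comes from this card, but the 4-bit quantization settings (NF4 via `bitsandbytes`, mirroring the QLoRA-style setup described below) are an assumption for fitting a 24B model on a single GPU, not a configuration published by the authors.

```python
# Hypothetical loading sketch for ANITA-NEXT-24B (assumed 4-bit NF4 settings,
# not an official configuration). Heavy work happens only inside load_model().
REPO_ID = "m-polignano/ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA"

# Assumed quantization knobs, kept as a plain dict for inspection/testing.
QUANT_KWARGS = {
    "load_in_4bit": True,
    "bnb_4bit_quant_type": "nf4",
    "bnb_4bit_use_double_quant": True,
}

def load_model(repo_id: str = REPO_ID):
    """Load tokenizer and 4-bit quantized model (requires GPU + bitsandbytes)."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    quant_config = BitsAndBytesConfig(
        bnb_4bit_compute_dtype=torch.bfloat16,
        **QUANT_KWARGS,
    )
    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(
        repo_id,
        quantization_config=quant_config,
        device_map="auto",  # spread layers across available devices
    )
    return tokenizer, model
```

In practice one would then call `tokenizer.apply_chat_template(...)` and `model.generate(...)`; the quantization choice only affects memory footprint, not the model's uncensored behavior.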

Key Capabilities

  • Uncensored Responses: Designed to generate content without typical safety filters, suitable for specific research applications.
  • Multilingual Support: Proficient in both English and Italian.
  • Mistral Architecture: Leverages the efficient and capable Mistral base.
  • Extended Context Window: Supports a 32,768-token context length, though generation quality is reported to degrade beyond roughly 40k tokens.
  • QLoRA and DPO Fine-tuning: Trained with QLoRA (4-bit) supervised fine-tuning on instruction-based datasets, followed by DPO alignment over mlabonne/orpo-dpo-mix-40k.
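Since the model follows the Mistral lineage, prompts are typically wrapped in `[INST] ... [/INST]` tags. The helper below is a minimal sketch of that convention for illustration; the authoritative format is whatever `tokenizer.apply_chat_template` produces for this checkpoint, which may differ.

```python
# Minimal sketch of a Mistral-style chat prompt (assumed [INST] convention;
# prefer tokenizer.apply_chat_template for the checkpoint's real template).
def build_mistral_prompt(messages: list[dict]) -> str:
    parts = ["<s>"]
    for msg in messages:
        if msg["role"] == "user":
            parts.append(f"[INST] {msg['content']} [/INST]")
        elif msg["role"] == "assistant":
            # Close each assistant turn with an end-of-sequence marker.
            parts.append(f"{msg['content']}</s>")
    return "".join(parts)

# Example: a single Italian user turn.
prompt = build_mistral_prompt([{"role": "user", "content": "Ciao, come stai?"}])
```

The resulting string would be fed to the tokenizer directly; multi-turn histories simply alternate user and assistant entries in the list.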

Good For

  • Research Purposes: Specifically released for research into uncensored model behaviors and capabilities.
  • Italian Language Use Cases: Part of the ANITA project aimed at improving NLP for the Italian language.
  • Exploring Model Limitations: Useful for studying the outputs of models without strict safety alignments.