ANITA-NEXT-24B-Dolphin-Mistral-UNCENSORED-ITA Overview
This model, developed by Marco Polignano, Ph.D., and the SWAP Research Group, is a 24-billion-parameter "Thinking Model" in the ANITA (Advanced Natural-based interaction for the ITAlian language) family. It is a fine-tuned version of dphn/Dolphin-Mistral-24B-Venice-Edition, which is itself based on the Mistral architecture. The model is multilingual, supporting both English and Italian, and is uncensored: it ships without the usual safety alignment, with the explicit warning that it may exhibit dangerous, unethical, or offensive behaviors.
Key Capabilities
- Uncensored Responses: Designed to generate content without the typical safety filters, intended for specific research applications.
- Multilingual Support: Proficient in both English and Italian.
- Mistral Architecture: Leverages the efficient and capable Mistral base.
- Extended Context Window: Supports a context length of 32,768 tokens; performance may degrade beyond roughly 40k tokens if the window is extended further.
- QLoRA and DPO Fine-tuning: Uses 4-bit QLoRA for supervised fine-tuning on instruction-based datasets, followed by DPO on mlabonne/orpo-dpo-mix-40k for alignment.
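The DPO step trains the policy to prefer the chosen completion over the rejected one in each preference pair, relative to a frozen reference model. Below is a minimal plain-Python sketch of the per-example DPO loss; the beta value and the example log-probabilities are illustrative, not taken from this model's actual training configuration.

```python
import math

def dpo_loss(policy_chosen_lp: float, policy_rejected_lp: float,
             ref_chosen_lp: float, ref_rejected_lp: float,
             beta: float = 0.1) -> float:
    """Per-example DPO loss: -log sigmoid(beta * (policy margin - reference margin)).

    Each argument is the summed log-probability of a full completion.
    beta controls how strongly the policy may deviate from the reference.
    """
    logits = beta * ((policy_chosen_lp - policy_rejected_lp)
                     - (ref_chosen_lp - ref_rejected_lp))
    return -math.log(1.0 / (1.0 + math.exp(-logits)))  # -log sigmoid(logits)

# If the policy prefers the chosen answer more strongly than the reference
# does, the loss falls below -log(0.5) ≈ 0.693 (the "no preference" baseline).
loss = dpo_loss(-10.0, -14.0, -11.0, -12.0)
```

In practice this objective is computed batch-wise with a library such as TRL's `DPOTrainer` rather than by hand.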
Good For
- Research Purposes: Specifically released for research into uncensored model behaviors and capabilities.
- Italian Language Use Cases: Part of the ANITA project aimed at improving NLP for the Italian language.
- Exploring Model Limitations: Useful for studying the outputs of models without strict safety alignments.