formulae/mita-v1.0-7b-2-24-2025

Text Generation

  • Concurrency Cost: 1
  • Model Size: 7.6B
  • Quantization: FP8
  • Context Length: 32k
  • Published: Feb 24, 2025
  • Architecture: Transformer

Formulae/MITA-V1.0-7B-2-24-2025 is the first-generation 7-billion-parameter MITA model, developed by Formulae using the TIES merging method. This general-purpose, uncensored model combines several fine-tuned models into a single, well-balanced checkpoint with strong reasoning capabilities, and it serves as a foundation for future Mixture of Experts (MoE) development.


Formulae/MITA-V1.0-7B-2-24-2025 Overview

Formulae/MITA-V1.0-7B is the initial release in the MITA series, designed as a general-purpose, uncensored language model. This 7-billion-parameter model is built with the TIES (Trim, Elect Sign & Merge) method, which combines parameters from multiple fine-tuned models into a robust, balanced generalist. The merge uses DeepSeek-R1-Distill-Qwen-7B as the base model and folds in Human-Like-Qwen2.5-7B-Instruct and Qwen2.5-7B-nerd-uncensored-v0.9.
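To make the merging step concrete, the following is a minimal per-tensor sketch of TIES merging in PyTorch. It is illustrative rather than Formulae's actual recipe: the `density` and `lam` values are placeholders, and a real merge (typically performed with a tool such as mergekit) processes every tensor in all three checkpoints.

```python
import torch

def ties_merge(base, finetuned, density=0.2, lam=1.0):
    """Sketch of TIES (Trim, Elect Sign & Merge) for a single parameter tensor.

    base:       parameter tensor from the base model
    finetuned:  list of corresponding tensors from the fine-tuned models
    density:    fraction of task-vector entries kept during trimming (placeholder)
    lam:        scaling applied to the merged task vector (placeholder)
    """
    # 1. Task vectors: difference between each fine-tune and the base.
    task_vectors = [ft - base for ft in finetuned]

    # 2. Trim: keep only the top-`density` fraction of entries by magnitude.
    trimmed = []
    for tv in task_vectors:
        flat = tv.abs().flatten()
        k = max(1, int(density * flat.numel()))
        threshold = flat.kthvalue(flat.numel() - k + 1).values  # k-th largest magnitude
        trimmed.append(torch.where(tv.abs() >= threshold, tv, torch.zeros_like(tv)))

    # 3. Elect sign: per-parameter sign of the summed trimmed task vectors.
    stacked = torch.stack(trimmed)
    elected_sign = torch.sign(stacked.sum(dim=0))

    # 4. Disjoint merge: average only entries that agree with the elected sign.
    agree = (torch.sign(stacked) == elected_sign) & (stacked != 0)
    counts = agree.sum(dim=0).clamp(min=1)
    merged_tv = (stacked * agree).sum(dim=0) / counts

    # 5. Add the scaled, merged task vector back onto the base parameters.
    return base + lam * merged_tv

# Hypothetical usage over one weight matrix from three checkpoints:
# merged_w = ties_merge(base_sd["w"], [m1_sd["w"], m2_sd["w"], m3_sd["w"]], density=0.2)
```

Trimming before sign election is what lets TIES discard low-magnitude, conflicting updates, so the surviving parameters from each fine-tune interfere less with one another when they are averaged.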

Key Capabilities

  • General-Purpose Intelligence: Offers balanced performance across various tasks.
  • Uncensored Outputs: Designed for open and unrestricted conversational applications.
  • Strong Reasoning: Maintains logical coherence in its responses.
  • TIES Merging: Utilizes an advanced merging technique to minimize parameter interference and preserve valuable learned features from constituent models.

Good For

  • Foundational AI Research: Serves as an experimental base for exploring future MoE architectures.
  • Open-ended Conversational Agents: Suitable for applications requiring unrestricted dialogue.
  • General Text Generation: Can be used for a wide array of tasks where a balanced, non-specialized model is preferred (see the usage sketch after this list).
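
As a usage sketch, the snippet below loads the model with Hugging Face Transformers and generates a reply. The repository id is assumed to match this page's slug and may differ on the Hub; the chat-template call and sampling settings are illustrative defaults rather than recommended values.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id assumed from the page slug; adjust if the actual repo differs.
MODEL_ID = "formulae/mita-v1.0-7b-2-24-2025"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto", device_map="auto")

# Build a chat-style prompt; the Qwen2.5-based merge is expected to ship a chat template.
messages = [{"role": "user", "content": "Summarize the TIES merging method in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Placeholder sampling settings for a balanced, general-purpose reply.
outputs = model.generate(inputs, max_new_tokens=256, temperature=0.7, do_sample=True)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```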

Limitations

While versatile, MITA-V1.0-7B does not specialize in specific domains such as coding or mathematics. Because the model is uncensored, users should exercise caution and verify outputs for accuracy and ethical considerations.