Athenea-4B-Thinking: A General-Purpose Reasoning Model
Athenea-4B-Thinking, developed by Aquiles-ai, is a fine-tuned version of Huihui-Qwen3-4B-Thinking-2507-abliterated. This 4-billion-parameter model is engineered for general-purpose reasoning and handles a wide array of tasks, including mathematical, multilingual, and conversational reasoning.
Key Capabilities
- Step-by-step reasoning: Utilizes explicit `<think>` and `</think>` traces for structured thought processes.
- Multidomain reasoning: Performs effectively across analytical and conversational contexts, covering math, language, and logic.
- Multilingual understanding: Capable of processing and generating responses in multiple languages.
- Uncensored output: Built on an abliterated (refusal-removed) base, providing unrestricted output for research and experimentation.
- Improved logical consistency: Benefits from focused fine-tuning on high-quality reasoning data, specifically the Aquiles-ai/Athenea-40k dataset.
- Open inference compatibility: Works seamlessly with popular frameworks like Transformers and vLLM, supporting `flash_attention_2` for optimized performance.
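Because the model wraps its reasoning in `<think>` and `</think>` tags, downstream code typically wants to separate the trace from the user-facing answer. A minimal sketch of such a parser is below; the `split_reasoning` helper name is ours, and it assumes the tag format described above (some thinking models emit only the closing tag, so the no-match fallback returns the full text as the answer):

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Split a completion into (reasoning, answer).

    Assumes the <think>...</think> trace format described in the model card.
    If no complete tag pair is found, the whole text is treated as the answer.
    """
    m = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if not m:
        return "", text.strip()
    reasoning = m.group(1).strip()      # the step-by-step trace
    answer = text[m.end():].strip()     # everything after the trace
    return reasoning, answer

sample = "<think>17 * 23 = 17 * 20 + 17 * 3 = 340 + 51 = 391</think>The answer is 391."
reasoning, answer = split_reasoning(sample)
# reasoning holds the chain-of-thought; answer holds the user-facing reply
```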
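The Transformers path can be sketched as follows. The repo id `Aquiles-ai/Athenea-4B-Thinking` is assumed from the model name (verify against the actual hub listing), and `attn_implementation="flash_attention_2"` requires the `flash-attn` package to be installed; drop that argument to use the default attention backend:

```python
# Sketch: loading Athenea-4B-Thinking with Transformers.
# MODEL_ID is assumed from the model name; check the hub for the exact repo id.
MODEL_ID = "Aquiles-ai/Athenea-4B-Thinking"

def generate(prompt: str, max_new_tokens: int = 512) -> str:
    """Load the model and generate a completion (downloads weights on first call)."""
    # Imports are kept inside the function so the sketch is cheap to inspect
    # without torch/transformers installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,
        attn_implementation="flash_attention_2",  # requires the flash-attn package
        device_map="auto",
    )
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)

# Hypothetical usage (triggers the model download):
# print(generate("What is 17 * 23? Think step by step."))
```

For serving, the same repo id should work with vLLM's standard entry points, since the base model is a Qwen3 variant that vLLM already supports.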
Good For
- Research and experimentation: Ideal for exploring advanced reasoning capabilities in an uncensored environment.
- Developing specialized reasoning agents: Serves as a foundational generalist model for further fine-tuning into domain-specific variants.
- Applications requiring logical consistency: Suitable for tasks where structured, step-by-step thought processes are beneficial.
- Multilingual AI solutions: Its multilingual understanding makes it versatile for global applications.