The ertghiu256/qwen3-1.7b-mixture-of-thought model is a 1.7 billion parameter Qwen 3 model, fine-tuned on 20,000 conversations from `open-r1/Mixture-of-Thoughts` and 3,000 conversations from `mlabonne/FineTome-100k`. The fine-tune targets stronger reasoning while remaining small enough to run efficiently on resource-constrained devices such as smartphones or older laptops. It features a 40,960-token context length and supports both 'thinking' and 'non-thinking' modes for response generation.
Overview
This model, `ertghiu256/qwen3-1.7b-mixture-of-thought`, is a 1.7 billion parameter variant of the Qwen 3 architecture. It was fine-tuned to strengthen its reasoning capabilities on a combination of 20,000 conversations from the `open-r1/Mixture-of-Thoughts` dataset and 3,000 conversations from `mlabonne/FineTome-100k`. A key design goal is efficiency, making the model suitable for deployment on devices with limited computational resources, such as smartphones or older laptops.
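Since this is a standard Qwen 3 checkpoint, it should load with the usual Hugging Face Transformers workflow. The sketch below is a minimal example, assuming nothing beyond `AutoTokenizer`/`AutoModelForCausalLM` and the model ID from this card; the prompt is illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ertghiu256/qwen3-1.7b-mixture-of-thought"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # use the checkpoint's native precision where supported
    device_map="auto",   # place weights on an available GPU, else CPU
)

messages = [{"role": "user", "content": "Explain why the sky is blue in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```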
Key Capabilities & Features
- Enhanced Reasoning: Fine-tuning on reasoning-focused conversations from 'Mixture-of-Thoughts' improves its ability to work through problems step by step before answering.
- Resource-Efficient: Optimized for deployment on less powerful hardware.
- Flexible Output Modes: Supports both a 'thinking' mode, in which the model emits its intermediate reasoning before the final answer, and a standard 'non-thinking' mode (see the sketch after this list).
- Extensive Context Window: Supports a context length of 40,960 tokens.
- Broad Compatibility: Can be run using various inference frameworks including Hugging Face Transformers, vLLM, SGLang, llama.cpp, Ollama, and LM Studio.
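A minimal sketch of switching between the two output modes, assuming this fine-tune inherits the base Qwen 3 chat template: that template exposes an `enable_thinking` flag, and in thinking mode the reply wraps its reasoning in a `<think>...</think>` block (`</think>` is token id 151668 in the Qwen 3 vocabulary). Both assumptions come from the upstream Qwen 3 models rather than this card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ertghiu256/qwen3-1.7b-mixture-of-thought"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "If 3 pens cost $4.50, what do 7 pens cost?"}]

for thinking in (True, False):
    input_ids = tokenizer.apply_chat_template(
        messages,
        add_generation_prompt=True,
        enable_thinking=thinking,  # Qwen 3 template kwarg; assumed unchanged by this fine-tune
        return_tensors="pt",
    ).to(model.device)
    new_tokens = model.generate(input_ids, max_new_tokens=1024)[0][input_ids.shape[-1]:].tolist()

    # Split the reply at the last </think> token (id 151668, per the Qwen 3 vocabulary).
    try:
        split = len(new_tokens) - new_tokens[::-1].index(151668)
    except ValueError:
        split = 0  # no thinking block was emitted
    reasoning = tokenizer.decode(new_tokens[:split], skip_special_tokens=True).strip()
    answer = tokenizer.decode(new_tokens[split:], skip_special_tokens=True).strip()
    print(f"--- enable_thinking={thinking} ---")
    print("reasoning:", reasoning or "(none)")
    print("answer:", answer)
```

Note that the upstream Qwen 3 documentation recommends different sampling parameters for thinking and non-thinking generation; the defaults above are kept only for brevity.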
Recommended Use Cases
This model is particularly well-suited for applications requiring:
- Reasoning-intensive tasks on edge devices or systems with limited memory/compute.
- Interactive applications where exposing the model's thought process helps with debugging or user understanding.
- Local deployment on personal devices where larger models are impractical.