luepow/thau-7b
THAU 7B is a 7.6-billion-parameter instruction-tuned causal language model developed by luepow, based on Qwen2.5-7B-Instruct. It specializes in cognitive reasoning, code generation across multiple languages, and autonomous agent capabilities such as tool calling. The model supports a context length of 4096 tokens and performs best on tasks that require structured problem-solving, as well as in specialized domains such as financial accounting.
THAU 7B: Cognitive AI Assistant
THAU 7B is a 7.6-billion-parameter model fine-tuned from Qwen2.5-7B-Instruct by luepow. It is designed to function as a cognitive AI assistant, emphasizing reasoning, code generation, and autonomous agent functionality. The model was trained using LoRA (r=16, alpha=32) and supports a context length of 4096 tokens.
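The stated LoRA hyperparameters can be expressed as a PEFT configuration. This is a minimal sketch, not the actual training setup: only r=16 and alpha=32 come from the card; the dropout value and target modules are illustrative assumptions (the projections listed are typical for Qwen2.5-style attention layers).

```python
# Sketch of a LoRA setup matching the reported hyperparameters (r=16, alpha=32).
# lora_dropout and target_modules are assumptions, not taken from the model card.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                # rank reported for THAU 7B
    lora_alpha=32,       # alpha reported for THAU 7B
    lora_dropout=0.05,   # assumed value for illustration
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed targets
    task_type="CAUSAL_LM",
)
```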
Key Capabilities
- Code Generation: Generates code in multiple languages, including Python, JavaScript, Java, Rust, Go, and SQL.
- Cognitive Reasoning: Performs chain-of-thought, step-by-step problem-solving.
- Autonomous Agents: Supports JSON-based tool calling (MCP) and SVG generation.
- Specialized Domains: Proficient in Accounting and Finance, including double-entry bookkeeping and IFRS.
- Multilingual: Supports both English and Spanish.
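The JSON-based tool-calling flow can be sketched as follows. The tool name, its schema, and the model's raw output here are hypothetical illustrations for a typical MCP-style loop, not actual THAU 7B output:

```python
import json

# Hypothetical raw completion from the model: a JSON tool call.
# The exact schema THAU 7B emits may differ; this mirrors common MCP-style calls.
raw_completion = '{"tool": "get_exchange_rate", "arguments": {"base": "USD", "quote": "EUR"}}'

# Illustrative tool registry; get_exchange_rate is a stand-in implementation.
def get_exchange_rate(base: str, quote: str) -> float:
    rates = {("USD", "EUR"): 0.92}  # fixed demo data
    return rates[(base, quote)]

TOOLS = {"get_exchange_rate": get_exchange_rate}

def dispatch(completion: str):
    """Parse the model's JSON tool call and invoke the matching tool."""
    call = json.loads(completion)
    tool = TOOLS[call["tool"]]
    return tool(**call["arguments"])

result = dispatch(raw_completion)  # the tool's return value is then fed back to the model
```

In a full agent loop, `result` would be serialized and appended to the conversation so the model can produce its final answer.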
Training and Limitations
THAU 7B was trained on 677 unique examples across 8 categories, focusing on programming, reasoning, DevOps, and accounting. While strong in its specialized areas, it has no vision or multimodal capabilities, and it relies on prompting to elicit chain-of-thought reasoning rather than dedicated internal thinking tokens. Its performance on complex tasks is sensitive to prompt engineering.
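Because chain-of-thought must be elicited through prompting, a step-by-step instruction belongs in the prompt itself. A minimal sketch (the system message and wording are assumptions, not an official template):

```python
# Build a chat-style prompt that asks the model to reason step by step.
# The system text and the example task are illustrative; adapt them as needed.
def build_cot_messages(task: str) -> list[dict]:
    return [
        {"role": "system", "content": "You are THAU, a cognitive AI assistant."},
        {"role": "user", "content": (
            f"{task}\n\n"
            "Think through the problem step by step before giving the final answer."
        )},
    ]

messages = build_cot_messages(
    "A company buys equipment for $5,000 cash. Record the journal entry."
)
```

These messages can then be rendered with the tokenizer's chat template (e.g. `tokenizer.apply_chat_template` in transformers) before generation.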
Good for
- Developers needing a model for multi-language code generation.
- Applications requiring structured reasoning and problem-solving.
- Building autonomous agents with tool-calling functionality.
- Tasks involving financial analysis or accounting processes.