Emperorizzis/ASTRA-32B-Thinking-v1
Task: Text Generation
Model Size: 32B
Quantization: FP8
Context Length: 32k
Concurrency Cost: 2
Published: Jan 21, 2026
License: apache-2.0
Architecture: Transformer
Open Weights
Emperorizzis/ASTRA-32B-Thinking-v1 is a 32-billion-parameter language model derived from Qwen3-32B and optimized for multi-step, tool-augmented tasks. It offers enhanced agentic capabilities for complex tool use and structured reasoning, with a context length of 32,768 tokens. Its training pipeline combines automated synthesis of agentic trajectories with reinforcement learning, and the model achieves state-of-the-art performance on the BFCL-V3 multi-turn subset.
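Since the model targets tool-augmented, multi-turn use, a typical way to exercise it is through an OpenAI-compatible chat endpoint (e.g. a vLLM server) with function-calling tool schemas, a convention Qwen3-derived models commonly follow. The sketch below only builds the request payload; the `get_weather` tool, its parameters, and the serving setup are illustrative assumptions, not part of this model card.

```python
import json

# Hypothetical tool definition in the OpenAI function-calling schema.
# The tool name and parameters are assumptions for illustration.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [
    {"role": "system", "content": "You are a helpful assistant with tool access."},
    {"role": "user", "content": "What's the weather in Oslo right now?"},
]

# Request body for POST /v1/chat/completions on an OpenAI-compatible server.
payload = {
    "model": "Emperorizzis/ASTRA-32B-Thinking-v1",
    "messages": messages,
    "tools": tools,
    "max_tokens": 1024,
}

print(json.dumps(payload, indent=2))
```

With a server running, the assistant's reply would either contain text or a `tool_calls` entry naming the function to invoke; the client executes the tool and appends the result as a `tool` role message before the next turn.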