CognitiveKernel/Qwen3-8B-CK-Pro is an 8-billion-parameter language model developed by Cognitive Kernel and fine-tuned on self-collected trajectories. It is optimized for complex reasoning, achieving 32.7% Pass@1 on the full GAIA dev set and 40.3% Pass@1 on the text-only subset, and is aimed at deep-research and agentic use cases.
Overview
CognitiveKernel/Qwen3-8B-CK-Pro is an 8-billion-parameter model developed by Cognitive Kernel, as detailed in the paper "Cognitive Kernel-Pro: A Framework for Deep Research Agents and Agent Foundation Models Training." The model is distinguished by its fine-tuning process, which uses self-collected trajectories derived from specific queries to strengthen complex, multi-step reasoning.
Key Capabilities
- Advanced Reasoning: Optimized for deep research and agentic tasks, making it suitable for applications requiring sophisticated problem-solving.
- Benchmark Performance: Achieves a Pass@1 score of 32.7% and a Pass@3 score of 38.2% on the full GAIA development set. On the text-only subsets of GAIA it performs even more strongly, with 40.3% Pass@1 and 49.3% Pass@3.
- Context Length: Supports a 32,768-token context window, enabling it to process and reason over extensive inputs.
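For readers unfamiliar with the metric, Pass@k is the probability that at least one of k sampled attempts solves a task. The sketch below implements the standard unbiased Pass@k estimator commonly used for sampled-evaluation benchmarks; it is an illustrative helper, not code shipped with this model, and the reported GAIA numbers above come from the authors' own evaluation.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@k estimate for one task.

    n: total attempts sampled for the task
    c: number of those attempts that were correct
    k: budget of attempts allowed by the metric

    Returns the probability that at least one of k attempts
    drawn without replacement from the n samples is correct:
    1 - C(n - c, k) / C(n, k).
    """
    if n - c < k:
        # Fewer than k incorrect samples exist, so any draw of k
        # attempts must include a correct one.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 1 correct attempt out of 3 samples.
print(pass_at_k(3, 1, 1))  # Pass@1 = 1/3
print(pass_at_k(3, 1, 3))  # Pass@3 = 1.0 (all samples are drawn)
```

A benchmark-level score such as the 32.7% Pass@1 reported here is the mean of this per-task estimate across all tasks in the dev set.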
Good For
- Deep Research Agents: Ideal for building AI agents that require the ability to perform in-depth research and synthesize information.
- Complex Problem Solving: Suited for use cases demanding high-level reasoning and the ability to navigate intricate problem spaces.
- Text-Based Analysis: Particularly effective for tasks that primarily involve processing and understanding textual information, as indicated by its strong performance on text-only GAIA subsets.