PokeeAI/pokee_research_7b
Task: Text generation
Model size: 7.6B
Quantization: FP8
Context length: 32k
Published: Oct 17, 2025
License: apache-2.0
Architecture: Transformer
Weights: Open

PokeeResearch-7B by Pokee AI is a 7.6-billion-parameter, tool-augmented LLM research agent, fine-tuned from Qwen2.5-7B-Instruct with a 131,072-token context length. It combines Reinforcement Learning from AI Feedback (RLAIF) with a robust reasoning scaffold to carry out complex, multi-step research workflows, including self-correction and synthesis. The model is optimized for deep research automation: it autonomously decomposes queries, retrieves external sources, and synthesizes factual, verifiable answers.
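The decompose-retrieve-synthesize loop described above can be sketched as follows. This is a minimal illustration, not Pokee AI's implementation: the model and search calls are stubbed out (in practice the LLM, e.g. PokeeResearch-7B served behind an inference API, would drive each step), and all class and method names here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ResearchAgent:
    """Illustrative tool-augmented research loop; not an official Pokee AI API."""
    notes: list = field(default_factory=list)

    def decompose(self, query: str) -> list:
        # An LLM would break the query into focused sub-questions; stubbed here.
        return [f"{query} (background)", f"{query} (recent findings)"]

    def retrieve(self, sub_question: str) -> str:
        # A real agent would call a web-search or retrieval tool; stubbed here.
        return f"source text for: {sub_question}"

    def synthesize(self, query: str) -> str:
        # Gather evidence for each sub-question, then compose an answer.
        for sub_q in self.decompose(query):
            self.notes.append(self.retrieve(sub_q))
        # A self-correction pass would verify each claim against the sources.
        return f"Answer to '{query}' based on {len(self.notes)} sources."

agent = ResearchAgent()
print(agent.synthesize("What is RLAIF?"))
```

The stubs mark exactly where the fine-tuned model and its external tools plug in; the surrounding control flow is what the reasoning scaffold automates.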
