camel-ai/seta-rl-qwen3-8b
Text generation · Model size: 8B · Quant: FP8 · Context length: 32k · Concurrency cost: 1 · Architecture: Transformer · Published: Jan 8, 2026

camel-ai/seta-rl-qwen3-8b is an 8-billion-parameter Qwen3 model developed by CAMEL-AI as part of its Scaling Environments for Agents (SETA) project. The model is specifically fine-tuned for reinforcement learning (RL) within scalable terminal environments: it is designed for training and operating agents on terminal-based tasks, and its 32,768-token context length supports complex, long-horizon interactions.
