laion/Kimi-K2T-neulab-agenttuning-webshop-sandboxes-maxeps-32k
Task: Text generation
Concurrency cost: 1
Model size: 8B
Quantization: FP8
Context length: 32k
Published: Jan 20, 2026
License: apache-2.0
Architecture: Transformer
Weights: Open
Status: Cold
laion/Kimi-K2T-neulab-agenttuning-webshop-sandboxes-maxeps-32k is an 8-billion-parameter language model fine-tuned from Qwen/Qwen3-8B. It was adapted on the penfever/Kimi-K2T-neulab-agenttuning-webshop-sandboxes-maxeps-32k dataset, which suggests optimization for agent-style tasks in webshop sandbox environments. With a 32,768-token context window, the model can process the extensive conversational or transactional histories typical of that domain.
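Because the checkpoint exposes a 32,768-token window, callers feeding it long agent trajectories typically need to trim older turns to stay within budget. A minimal sketch of one such trimming strategy follows; the whitespace-based token estimate is a stand-in assumption for the model's real tokenizer, and only the 32,768 limit comes from the model card above.

```python
# Sketch: keep the most recent turns of a conversation that fit the
# model's 32,768-token context window (limit from the model card).
CTX_LIMIT = 32_768


def estimate_tokens(text: str) -> int:
    """Crude whitespace-based stand-in for a real tokenizer's count."""
    return len(text.split())


def trim_history(turns: list[str], budget: int = CTX_LIMIT) -> list[str]:
    """Drop the oldest turns until the estimated total fits the budget."""
    kept: list[str] = []
    used = 0
    # Walk from newest to oldest so recent context survives.
    for turn in reversed(turns):
        cost = estimate_tokens(turn)
        if used + cost > budget:
            break
        kept.append(turn)
        used += cost
    return list(reversed(kept))
```

In production one would replace `estimate_tokens` with the checkpoint's own tokenizer (e.g. via `AutoTokenizer.from_pretrained`), since whitespace counts can diverge substantially from subword token counts.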