laion/Kimi-K2T-neulab-agenttuning-kg-sandboxes-maxeps-32k
Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 32K · Published: Jan 18, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

Kimi-K2T-neulab-agenttuning-kg-sandboxes-maxeps-32k is an 8-billion-parameter language model published by laion, fine-tuned from Qwen/Qwen3-8B. It was trained on the penfever/Kimi-K2T-neulab-agenttuning-kg-sandboxes-maxeps-32k_neulab-agenttuning-kg-sandboxes dataset and supports a 32K-token context window. The fine-tuning data suggests the model is optimized for agent-tuning, knowledge-graph, and sandbox-environment tasks.
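The card does not include a usage snippet; a minimal loading sketch, assuming the checkpoint is hosted on the Hugging Face Hub under the id above and follows the standard `transformers` causal-LM layout inherited from Qwen/Qwen3-8B, might look like:

```python
# Assumed model id, taken from the card title above.
MODEL_ID = "laion/Kimi-K2T-neulab-agenttuning-kg-sandboxes-maxeps-32k"
MAX_CONTEXT = 32_768  # 32K-token context window, per the card metadata


def load(model_id: str = MODEL_ID):
    """Load tokenizer and model; requires `pip install transformers torch`."""
    # Imported lazily so this module stays importable without the heavy deps.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype="auto",   # keep the published weight precision
        device_map="auto",    # place layers across available devices
    )
    return tokenizer, model
```

Generation would then go through the tokenizer's chat template and `model.generate`, as with any Qwen3-derived checkpoint; the exact quantized serving path (FP8) depends on the hosting backend and is not specified on this card.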
