laion/Kimi-K2T-neulab-agenttuning-mind2web-sandboxes-maxeps-32k
Text generation · Concurrency cost: 1 · Model size: 8B · Quantization: FP8 · Context length: 32k · Published: Jan 18, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

The laion/Kimi-K2T-neulab-agenttuning-mind2web-sandboxes-maxeps-32k model is an 8-billion-parameter language model fine-tuned from Qwen/Qwen3-8B. It was trained on the penfever/Kimi-K2T-neulab-agenttuning-mind2web-sandboxes-maxeps-32k_neulab-agenttuning-db-sandboxes dataset and is adapted specifically for agent-tuning tasks within Mind2Web sandboxes, leveraging its 32k-token context length.
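As an illustration of how the model might be invoked, the sketch below assembles a chat-completion request payload, assuming the model is served behind an OpenAI-compatible endpoint. That serving setup, the `build_request` helper, and all parameter values are illustrative assumptions, not details from the model card; only the model ID and the 32k context window come from above.

```python
# Minimal sketch: building a chat-completion payload for this model.
# Assumes (hypothetically) an OpenAI-compatible serving endpoint;
# only MODEL_ID and the 32k context window are taken from the model card.

MODEL_ID = "laion/Kimi-K2T-neulab-agenttuning-mind2web-sandboxes-maxeps-32k"
CONTEXT_LENGTH = 32_768  # 32k-token context window stated on the card


def build_request(prompt: str, max_new_tokens: int = 512) -> dict:
    """Assemble a chat-completion payload.

    The caller is responsible for keeping the prompt plus
    max_new_tokens within the model's 32k-token context window.
    """
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_new_tokens,
    }


# Example agent-style instruction, in the spirit of Mind2Web web-navigation tasks
payload = build_request("Navigate to the checkout page and add one item to the cart.")
print(payload["model"])
```

The payload would then be POSTed to the serving endpoint's chat-completions route; the exact URL and authentication depend on where the model is hosted.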
