DCAgent/a1-agenttuning_os
Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 32k · Published: Mar 25, 2026 · License: other · Architecture: Transformer
DCAgent/a1-agenttuning_os is an 8-billion-parameter language model fine-tuned from Qwen/Qwen3-8B. It was adapted on the dataset snapshot at /e/scratch/jureap59/raoof1/sft_data/hf_hub/datasets--DCAgent--neulab-agenttuning-os-sandboxes_glm_4.7_traces_jupiter/snapshots/35dba1baa2452dce3610c03fc7e8567135ed2fd8_thinking_preprocessed. The model targets agent-tuning tasks in operating-system sandboxes, and its 32768-token context length allows it to process long interaction traces in a single pass.
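A minimal usage sketch with the Hugging Face transformers library is shown below. This assumes the model is available under the repo id DCAgent/a1-agenttuning_os and that a GPU with enough memory for the FP8-quantized 8B weights is available; the `truncate_to_context` helper is an illustrative assumption, not part of the model card, showing one way to keep long agent traces within the 32768-token context window while reserving room for generation.

```python
MODEL_ID = "DCAgent/a1-agenttuning_os"
CTX_LEN = 32768  # context length stated on the model card


def truncate_to_context(token_ids, ctx_len=CTX_LEN, reserve=1024):
    """Keep only the most recent tokens of a long interaction trace,
    reserving `reserve` tokens of headroom for generated output.
    (Hypothetical helper for illustration; not part of the model.)"""
    budget = ctx_len - reserve
    return token_ids[-budget:]


if __name__ == "__main__":
    # Imported here so the helper above can be used without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")

    prompt = "List the files in the current sandbox working directory."
    ids = tokenizer(prompt).input_ids
    ids = truncate_to_context(ids)  # no-op for short prompts like this one

    inputs = tokenizer(tokenizer.decode(ids), return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=256)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

For traces near or beyond the 32k limit, truncating from the front (as above) keeps the most recent sandbox interactions, which is usually what an OS agent needs to decide its next action.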