DCAgent/a1-self_instruct_naive
Task: Text generation
Concurrency cost: 1
Model size: 8B
Quantization: FP8
Context length: 32k
Published: Mar 27, 2026
License: other
Architecture: Transformer

DCAgent/a1-self_instruct_naive is an 8-billion-parameter language model fine-tuned from Qwen/Qwen3-8B, with a 32,768-token context length. It was trained on the DCAgent/selfinstruct-naive-sandboxes-2_10k_glm_4.7_traces_jupiter dataset and is intended for self-instruction and sandbox-trace processing tasks, while relying on its Qwen3-8B base for general language understanding.
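A minimal usage sketch with the Hugging Face `transformers` library, assuming the checkpoint is hosted under the repo id shown on this card (the `generate` helper and its parameters are illustrative, not part of the model's documentation):

```python
# Hypothetical sketch: run a single generation against the checkpoint.
# Repo id and context length are taken from this card; everything else
# is an assumption about a standard transformers workflow.
MODEL_ID = "DCAgent/a1-self_instruct_naive"
MAX_CONTEXT = 32_768  # context length stated on this card


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Lazily load the model and produce one completion for `prompt`."""
    # Deferred import: loading an 8B checkpoint is heavy, so keep it
    # out of module import time.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens and decode only the new completion.
    return tok.decode(
        out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )
```

Because the model is FP8-quantized and 8B parameters, serving it through an inference endpoint rather than loading it locally may be more practical; the sketch above only shows the generic local-loading path.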
