DCAgent/a1-wizardlm_orca
Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 32k · Published: Mar 26, 2026 · License: other · Architecture: Transformer

DCAgent/a1-wizardlm_orca is an 8-billion-parameter language model fine-tuned from Qwen/Qwen3-8B. It was produced by supervised fine-tuning on the wizardlm-orca-sandboxes_glm_4.7_traces_jupiter dataset and is intended for tasks that require nuanced understanding and generation over complex conversational traces, using its 32,768-token context length for long, detailed interactions.
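As a minimal sketch of how such a hosted model is typically queried, the snippet below assembles an OpenAI-compatible chat-completion payload for this model. The endpoint, client, and default parameters here are assumptions for illustration, not details confirmed by this listing; consult your host's API documentation for the actual request format.

```python
import json

# Hypothetical request builder for an OpenAI-compatible chat endpoint.
# The model ID matches this listing; everything else is an assumption.
MODEL_ID = "DCAgent/a1-wizardlm_orca"
MAX_CTX = 32768  # advertised context length of the model


def build_request(messages, max_tokens=512):
    """Assemble a chat-completion payload, keeping the completion budget
    well inside the model's context window."""
    assert 0 < max_tokens < MAX_CTX, "reserve room in the context for the prompt"
    return {
        "model": MODEL_ID,
        "messages": messages,
        "max_tokens": max_tokens,
        "temperature": 0.7,  # illustrative default, tune per task
    }


payload = build_request(
    [{"role": "user", "content": "Summarize this conversational trace."}]
)
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to the provider's chat-completions route with your API key; the exact URL and authentication scheme depend on the hosting platform.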
