waleko/Qwen3-8B-SFT-envbench_gpt5-yellow-green
- Task: Text generation
- Model size: 8B parameters
- Quantization: FP8
- Context length: 32k
- Concurrency cost: 1
- Published: Mar 29, 2026
- License: apache-2.0
- Architecture: Transformer (open weights, cold start)
waleko/Qwen3-8B-SFT-envbench_gpt5-yellow-green is a fine-tuned 8-billion-parameter language model based on the Qwen3-8B architecture. It was specialized through supervised fine-tuning on the envbench_gpt5-yellow-green dataset, reaching a loss of 0.4850 and an accuracy of 0.8569 on its evaluation set. The model is primarily intended for tasks similar to those represented in its fine-tuning dataset.
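Below is a minimal usage sketch with the Hugging Face transformers library. The repository id matches the model name above, but the chat-template handling and example prompt are assumptions based on standard Qwen3 checkpoints and should be checked against the files actually published in the repository.

```python
# Minimal sketch: loading the fine-tuned checkpoint with Hugging Face transformers.
# Assumes the repository ships standard Qwen3 tokenizer and weight files; dtype and
# device placement may need adjusting for your hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "waleko/Qwen3-8B-SFT-envbench_gpt5-yellow-green"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # let transformers pick a dtype supported by the checkpoint
    device_map="auto",    # place layers on available GPU(s)/CPU automatically
)

# Build a chat-style prompt via the tokenizer's chat template (standard for Qwen3).
# The prompt content is a hypothetical example, not taken from the fine-tuning data.
messages = [{"role": "user", "content": "Set up a Python development environment for this project."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```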