waleko/Qwen3-8B-SFT-envbench_qwen-all
Text Generation · Model Size: 8B · Quantization: FP8 · Context Length: 32k · Concurrency Cost: 1 · Published: Mar 29, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights
waleko/Qwen3-8B-SFT-envbench_qwen-all is an 8-billion-parameter language model fine-tuned from Qwen/Qwen3-8B. It was trained on the envbench_qwen-all dataset and achieves a loss of 0.1477 and an accuracy of 0.9511 on its evaluation set. The model is intended for tasks aligned with its specialized training data and supports a 32,768-token context length.
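Below is a minimal usage sketch, assuming the checkpoint is published on the Hugging Face Hub under the ID above and loads with the standard Qwen3 tooling in `transformers`; the prompt text and generation parameters are illustrative, not taken from this model card.

```python
# Hypothetical usage sketch: load the fine-tuned checkpoint and run a single
# chat-style generation. Assumes the model ID resolves on the Hub and that the
# tokenizer ships a chat template, as standard Qwen3 checkpoints do.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "waleko/Qwen3-8B-SFT-envbench_qwen-all"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat-formatted prompt (example prompt is illustrative only).
messages = [{"role": "user", "content": "Set up a Python environment for this repository."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate a completion; the 32k context window bounds prompt + output length.
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```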