waleko/Qwen3-8B-SFT-envbench_qwen-green-yellow
Text generation · Model size: 8B · Quant: FP8 · Context length: 32k · Published: Mar 29, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights · Concurrency cost: 1
waleko/Qwen3-8B-SFT-envbench_qwen-green-yellow is an 8-billion-parameter language model fine-tuned from Qwen/Qwen3-8B. It was adapted on the envbench_qwen-green-yellow dataset and achieves an accuracy of 0.9472 on its evaluation set, so it is best suited to tasks that resemble its fine-tuning data. The model supports a context length of 32,768 tokens, making it suitable for applications that require extensive contextual understanding.
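A minimal usage sketch, assuming the Hugging Face `transformers` library and that the checkpoint loads through the standard `AutoModelForCausalLM` path (not verified against this exact repository). The `generate` helper and the `reserve_for_output` budget are illustrative, not part of the model card; the context-window check simply enforces the 32,768-token limit stated above.

```python
MODEL_ID = "waleko/Qwen3-8B-SFT-envbench_qwen-green-yellow"
MAX_CONTEXT = 32_768  # context length stated on the model card

def fits_in_context(n_prompt_tokens: int,
                    reserve_for_output: int = 512,
                    max_context: int = MAX_CONTEXT) -> bool:
    """Check that a prompt leaves room for generated tokens within the 32k window."""
    return n_prompt_tokens + reserve_for_output <= max_context

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Download the checkpoint and generate a completion (heavy: needs a GPU
    and the `transformers` + `torch` packages installed)."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    n_prompt = inputs["input_ids"].shape[1]
    assert fits_in_context(n_prompt, reserve_for_output=max_new_tokens)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Return only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(out[0][n_prompt:], skip_special_tokens=True)
```

The model download is kept inside `generate` so the lightweight context check can be used on its own, e.g. to validate prompt sizes before committing to a load.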