LorenaYannnnn/general_reward-Qwen3-0.6B_7168-OURS_self-seed_0
Text generation · Concurrency cost: 1 · Model size: 0.8B · Quantization: BF16 · Context length: 32k · Published: Apr 9, 2026 · Architecture: Transformer

LorenaYannnnn/general_reward-Qwen3-0.6B_7168-OURS_self-seed_0 is a 0.8 billion parameter causal language model based on the Qwen3 architecture. Developed by LorenaYannnnn, it is designed as a general reward model; its primary function is to evaluate and score responses rather than to generate free-form text. With a context length of 32768 tokens, it is likely suited to tasks that require assessing longer text sequences.
