LorenaYannnnn/general_reward-Qwen3-0.6B-OURS_llama-seed_1
Text generation · Concurrency cost: 1 · Model size: 0.8B · Quant: BF16 · Context length: 32k · Published: Mar 18, 2026 · Architecture: Transformer
LorenaYannnnn/general_reward-Qwen3-0.6B-OURS_llama-seed_1 is a 0.8-billion-parameter language model, likely based on the Qwen3 architecture, with a context length of 32768 tokens. It is a general reward model: rather than generating content directly, its primary function is to evaluate generated text and produce a scalar quality score, typically used to rank or filter candidate responses. Its specific differentiators and optimal use cases are not detailed in the provided information.
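A common way to use a reward model like this is best-of-n selection: generate several candidate responses, score each with the reward model, and keep the highest-scoring one. The sketch below illustrates the pattern; `score` is a hypothetical stand-in for the real model (in practice you would load the checkpoint and score prompt/response pairs), and the heuristic inside it is purely illustrative.

```python
# Sketch of best-of-n selection with a reward model (hypothetical).
# `score` stands in for the actual reward model's scalar output; it is
# NOT the real scoring logic of this checkpoint.

def score(prompt: str, response: str) -> float:
    """Placeholder reward: favors longer responses. Replace with a real
    forward pass through the reward model in practice."""
    return float(len(response.split()))

def best_of_n(prompt: str, candidates: list[str]) -> str:
    """Return the candidate the reward model scores highest."""
    return max(candidates, key=lambda r: score(prompt, r))

candidates = [
    "Yes.",
    "Yes, because the premise entails the conclusion.",
    "Maybe.",
]
best = best_of_n("Does A imply B?", candidates)
```

With a real reward model, `score` would tokenize the prompt and response together and return the model's scalar head output; the selection logic stays the same.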