laion/r2egym-unified-316__Qwen3-8B
TEXT GENERATION
Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 32k · Published: Mar 25, 2026 · License: other · Architecture: Transformer · Cold
laion/r2egym-unified-316__Qwen3-8B is an 8-billion-parameter language model fine-tuned from Qwen/Qwen3-8B. It was trained on the /e/data1/datasets/playground/ot/hf_hub/datasets--laion--r2egym-unified-316/snapshots/7ca94a2abcbab7f0c392f62ec288691cdea20260_thinking_preprocessed dataset. The model targets general language tasks and supports a 32,768-token (32k) context length for processing long inputs.
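The listed size and quantization give a rough sense of the memory needed just for the weights. A back-of-the-envelope sketch, assuming "8B" means eight billion parameters and that FP8 stores one byte per parameter (these figures are assumptions, not taken from the listing):

```python
# Rough weight-memory estimate from the listed model size and quantization.
# Assumption (not from the listing): FP8 stores one byte per parameter,
# and "8B" is taken as exactly eight billion parameters.
PARAMS = 8_000_000_000
BYTES_PER_PARAM_FP8 = 1

weight_bytes = PARAMS * BYTES_PER_PARAM_FP8
weight_gib = weight_bytes / 2**30  # bytes -> GiB

print(f"~{weight_gib:.1f} GiB of weights")  # ~7.5 GiB
```

Actual serving memory is higher once the KV cache for a 32k context and runtime overhead are included; this covers only the parameter tensors.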