laion/r2egym-316-opt1k__Qwen3-8B
Text Generation | Concurrency Cost: 1 | Model Size: 8B | Quant: FP8 | Ctx Length: 32k | Published: Mar 27, 2026 | License: other | Architecture: Transformer

laion/r2egym-316-opt1k__Qwen3-8B is an 8-billion-parameter language model fine-tuned from Qwen/Qwen3-8B. It was trained on the dataset referenced by the local cache path /e/data1/datasets/playground/ot/hf_hub/datasets--laion--r2egym-unified-316 (the Hugging Face Hub cache naming for laion/r2egym-unified-316), suggesting a specialization in tasks from the 'r2egym-unified-316' domain. Within the scope of its fine-tuning data, the model offers a focused alternative to general-purpose LLMs.
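As a minimal sketch of how such a hosted model might be queried, the following assumes the provider exposes an OpenAI-compatible chat-completions endpoint; the `BASE_URL` and API key are placeholders (assumptions), and only the model ID comes from this page.

```python
# Hedged sketch: querying laion/r2egym-316-opt1k__Qwen3-8B through a
# hypothetical OpenAI-compatible chat-completions endpoint.
# BASE_URL is a placeholder -- substitute your provider's actual URL.
import json
import urllib.request

BASE_URL = "https://example-provider.invalid/v1/chat/completions"  # placeholder
MODEL_ID = "laion/r2egym-316-opt1k__Qwen3-8B"


def build_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build the JSON request body for a single-turn chat completion."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


def complete(prompt: str, api_key: str) -> str:
    """Send the request and return the assistant's reply text."""
    req = urllib.request.Request(
        BASE_URL,
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Given the FP8 quantization and 32k context listed above, long prompts up to roughly 32k tokens should be accepted, though exact limits depend on the provider.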
