laion/sera-316-opt1k__Qwen3-8B
Task: Text generation · Concurrency cost: 1 · Model size: 8B · Quantization: FP8 · Context length: 32k · Published: Mar 27, 2026 · License: other · Architecture: Transformer
laion/sera-316-opt1k__Qwen3-8B is an 8-billion-parameter language model fine-tuned from Qwen/Qwen3-8B. It was trained on the dataset at /e/data1/datasets/playground/ot/hf_hub/datasets--laion--allenai-sera-unified-316/snapshots/ef551d7ec9bb11780e15657490451a6fc6842c46_thinking_preprocessed. Because the fine-tuning targets that dataset's domain, the model is best suited to tasks within it rather than general-purpose use.
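A minimal usage sketch, assuming the model is published on the Hugging Face Hub under the id above and loads through the standard transformers causal-LM API (the prompt text and generation settings here are illustrative, not from the model card):

```python
# Hypothetical usage sketch for laion/sera-316-opt1k__Qwen3-8B.
# Assumes: the repo id resolves on the Hugging Face Hub and the model
# works with the standard AutoTokenizer / AutoModelForCausalLM classes.

MODEL_ID = "laion/sera-316-opt1k__Qwen3-8B"

def run_demo(user_message: str, max_new_tokens: int = 256) -> str:
    """Load the model and generate a reply to a single user message.

    Imports are kept inside the function so merely defining this sketch
    does not require transformers/torch to be installed or trigger an
    8B-parameter download.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype="auto",   # respect the checkpoint's dtype (FP8 per the card)
        device_map="auto",    # place weights on available GPU(s) or CPU
    )

    # Qwen3-family models expect a chat template; build the prompt with it.
    messages = [{"role": "user", "content": user_message}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)

    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = outputs[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

Calling `run_demo("Summarize transfer learning in one sentence.")` would download the weights on first use; an inference server exposing an OpenAI-compatible endpoint is an alternative for a model of this size.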