laion/allenai-sera-unified-100000-opt100k__Qwen3-8B
Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 32k · Published: Apr 1, 2026 · License: other · Architecture: Transformer

laion/allenai-sera-unified-100000-opt100k__Qwen3-8B is an 8-billion-parameter language model fine-tuned from Qwen/Qwen3-8B. It was trained on the laion/allenai-sera-unified-100000 dataset, which suggests a specialization in areas related to the SERA project's data. The model is likely strongest on tasks that match the distribution of that fine-tuning dataset, with enhanced performance in those domains.
