laion/allenai-sera-unified-3160__Qwen3-8B
Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Context Length: 32k · Published: Mar 25, 2026 · License: other · Architecture: Transformer
laion/allenai-sera-unified-3160__Qwen3-8B is an 8-billion-parameter language model fine-tuned from the base model Qwen/Qwen3-8B. It was trained on the /e/data1/datasets/playground/ot/hf_hub/datasets--laion--allenai-sera-unified-3160/snapshots/099497cdf98a9c3da57ca8873d9d734da4be1361_thinking_preprocessed dataset. With a 32,768-token context length, it is tuned toward the domain of that dataset and is best suited to applications that benefit from this specialized fine-tuning.
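For loading and prompting the model, a minimal Hugging Face Transformers sketch is shown below. It assumes details not confirmed by this card: that the checkpoint is downloadable from the Hugging Face Hub under the repo ID above, and that it ships a chat template like the base Qwen3-8B model.

```python
# Minimal inference sketch (assumptions: the repo ID resolves on the
# Hugging Face Hub and the tokenizer includes a Qwen3-style chat template).
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "laion/allenai-sera-unified-3160__Qwen3-8B"


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Run one chat-style completion and return only the new text."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens so only the generated continuation is decoded.
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("Summarize the Qwen3 architecture in one sentence."))
```

Note that prompts plus expected output must fit within the 32k-token context window, and an FP8 quant of an 8B model still requires a GPU with roughly 10 GB or more of memory.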