laion/nemotron-100000-opt100k__Qwen3-8B
Text generation
Concurrency cost: 1 · Model size: 8B · Quantization: FP8 · Context length: 32k
Published: Mar 30, 2026 · License: other · Architecture: Transformer

laion/nemotron-100000-opt100k__Qwen3-8B is a fine-tuned variant of Qwen3-8B, with 8 billion parameters and a 32K-token context length. It was adapted on the laion/nemotron-terminal-corpus-unified-100000 dataset and is intended for general language understanding and generation tasks.
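The card does not include a usage snippet. The sketch below shows how a causal language model checkpoint like this is typically loaded with Hugging Face `transformers`, assuming the repository id matches the name above; the helper names (`build_generation_kwargs`, `generate_text`) are illustrative, not part of any library API.

```python
# Minimal usage sketch, assuming the checkpoint is published on the
# Hugging Face Hub under the id shown on this card.

MODEL_ID = "laion/nemotron-100000-opt100k__Qwen3-8B"
MAX_CTX = 32_768  # 32k context window from the card metadata


def build_generation_kwargs(prompt_tokens: int, max_new_tokens: int = 512) -> dict:
    """Clamp max_new_tokens so prompt + completion fit within the 32k window."""
    budget = max(MAX_CTX - prompt_tokens, 0)
    return {"max_new_tokens": min(max_new_tokens, budget), "do_sample": False}


def generate_text(prompt: str) -> str:
    """Load the model with transformers and run one greedy completion."""
    from transformers import AutoModelForCausalLM, AutoTokenizer  # heavy import kept local

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    kwargs = build_generation_kwargs(inputs["input_ids"].shape[1])
    output = model.generate(**inputs, **kwargs)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

Downloading an 8B checkpoint requires substantial memory even at FP8; serving stacks such as vLLM are a common alternative to the raw `transformers` path for this model size.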
