laion/nemotron-31600-opt100k__Qwen3-8B
Text generation · Concurrency cost: 1 · Model size: 8B · Quantization: FP8 · Context length: 32k · Published: Mar 29, 2026 · License: other · Architecture: Transformer

laion/nemotron-31600-opt100k__Qwen3-8B is an 8-billion-parameter causal language model fine-tuned from Qwen/Qwen3-8B by LAION. It was trained on the /e/data1/datasets/playground/ot/hf_hub/datasets--laion--nemotron-terminal-corpus-unified-31600 dataset, which suggests a specialization in processing and generating text from terminal or code-like environments. With a 32768-token context window, it is suited to tasks that require extensive contextual understanding and generation within that domain.
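The card does not include usage instructions, but as a standard Qwen3-based checkpoint the model can presumably be loaded with Hugging Face `transformers`. The sketch below is an illustrative assumption, not documented behavior: only the repo id comes from this card, while the dtype, device placement, prompt, and sampling settings are placeholders. The small helper shows how to budget generation length against the 32768-token context window.

```python
# Hedged sketch: loading the fine-tune with Hugging Face transformers.
# Only the repo id is taken from the model card; all other settings
# (dtype, device_map, prompt, token cap) are illustrative assumptions.

def generation_budget(prompt_tokens: int, ctx_len: int = 32768) -> int:
    """Tokens left for generation once the prompt occupies part of the context."""
    return max(ctx_len - prompt_tokens, 0)

if __name__ == "__main__":
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo = "laion/nemotron-31600-opt100k__Qwen3-8B"
    tokenizer = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForCausalLM.from_pretrained(
        repo, torch_dtype="auto", device_map="auto"
    )

    # Terminal-style prompt, in keeping with the dataset's apparent domain.
    prompt = "$ ls -la\n"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

    # Cap generation so prompt + output stays inside the 32k context window.
    outputs = model.generate(
        **inputs,
        max_new_tokens=min(256, generation_budget(inputs["input_ids"].shape[1])),
    )
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The helper is the only part of the sketch that is independent of the (assumed) API: it simply subtracts the prompt length from the fixed 32768-token window and clamps at zero once the prompt alone fills the context.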
