omrisap/nemotron-7B-12K
Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Mar 29, 2026 · Architecture: Transformer · Status: Cold

omrisap/nemotron-7B-12K is a 7.6-billion-parameter language model with a 32,768-token context window. The model is shared by omrisap and is based on the Nemotron architecture. Specific training details and differentiators are not provided in the available information, but its parameter count and context window suggest it is designed for general language understanding and generation tasks that require processing longer inputs.
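As a rough illustration of working within the 32k context window mentioned above, the sketch below estimates whether a prompt plus a generation budget fits in the context. The helper name and the 4-characters-per-token heuristic are assumptions for illustration only; they are not part of this model's tooling, and an accurate count would require the model's actual tokenizer.

```python
# Hypothetical helper: estimate whether a prompt fits within the model's
# 32,768-token context window, reserving room for generated output.
# The 4-chars-per-token ratio is a rough English-text heuristic, not
# the model's real tokenizer.

CTX_LENGTH = 32768  # context length stated on the model page


def fits_in_context(prompt: str, max_new_tokens: int = 512,
                    chars_per_token: float = 4.0) -> bool:
    """Return True if the estimated prompt tokens plus the generation
    budget stay within the context window."""
    est_prompt_tokens = len(prompt) / chars_per_token
    return est_prompt_tokens + max_new_tokens <= CTX_LENGTH


print(fits_in_context("Summarize this paragraph."))  # short prompt: True
print(fits_in_context("x" * 200_000))  # ~50k estimated tokens: False
```

In practice, replace the character heuristic with a real token count from the model's tokenizer before truncating or chunking long inputs.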
