sarthakrastogi/narasimha-b-0.6b
Text generation · Concurrency cost: 1 · Model size: 0.8B · Quant: BF16 · Ctx length: 32k · License: apache-2.0 · Architecture: Transformer · Open weights

sarthakrastogi/narasimha-b-0.6b is a 0.8 billion parameter causal language model fine-tuned by sarthakrastogi from the Qwen3-0.6B base model. It was trained for one epoch on the Narasimha dataset, specializing its responses to that data. With a context length of 40,960 tokens, the model is intended for applications requiring focused generation within its fine-tuned domain.
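As a minimal sketch of how a causal language model like this one is typically used, the snippet below loads it with the Hugging Face `transformers` library and defines a small generation helper. The model id comes from the card above; the generation parameters and the example prompt are assumptions for illustration, not documented settings.

```python
# Hedged sketch: text generation with a Hugging Face causal LM.
# MODEL_ID is taken from the model card; max_new_tokens is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "sarthakrastogi/narasimha-b-0.6b"

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Load the model and return the continuation for `prompt`."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

A prompt can then be passed with `generate("Explain what a causal language model is.")`; loading the weights on first call downloads roughly 1–2 GB in BF16.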
