catchshubham/qwen3-8b-ncert-finetuned
Text generation · Concurrency cost: 1 · Model size: 8B · Quant: FP8 · Context length: 32k · Published: Feb 1, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights · Cold

The catchshubham/qwen3-8b-ncert-finetuned model is an 8-billion-parameter Qwen3-based causal language model developed by catchshubham. It was fine-tuned from unsloth/qwen3-8b-unsloth-bnb-4bit and trained 2x faster using Unsloth. The model is intended for general language tasks, combining the Qwen3 architecture with Unsloth's efficient training methodology.
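Since the weights are open and published under apache-2.0, the model can presumably be loaded like any Hugging Face causal LM. The sketch below assumes the standard `transformers` Auto classes work for this checkpoint; the prompt and generation parameters are illustrative, not from the model card.

```python
MODEL_ID = "catchshubham/qwen3-8b-ncert-finetuned"

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Load the checkpoint and return a completion for `prompt`.

    Note: this downloads ~8B parameters on first call, so the heavy
    imports are kept local to the function.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # device_map="auto" places layers on available GPUs/CPU via accelerate.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Summarize the water cycle in two sentences."))
```

With a 32k context window and FP8 quantization on the hosted side, long-document prompts should fit, though local memory requirements depend on the precision you load the weights in.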
