dipta007/GanitLLM-1.7B-SFT is a 1.7-billion-parameter causal language model developed by dipta007, based on Qwen3-1.7B. It is fine-tuned with supervised fine-tuning (SFT) on the GANIT dataset for Bengali mathematical reasoning. Compared to its base model, it achieves higher accuracy on Bengali math benchmarks and generates more concise solutions.
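A minimal sketch of querying the model with the Hugging Face `transformers` library. The chat-style prompt format, the sample Bengali question, and the generation parameters are assumptions for illustration; they are not specified by the model author.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "dipta007/GanitLLM-1.7B-SFT"


def build_messages(question: str) -> list[dict]:
    # Wrap a Bengali math question as a single user turn;
    # this chat format is an assumption, not documented by the author.
    return [{"role": "user", "content": question}]


if __name__ == "__main__":
    # Downloads ~1.7B parameters; requires network access, and a GPU is recommended.
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    # "What is the GCD of 12 and 18?" in Bengali (example question, not from the card)
    messages = build_messages("১২ এবং ১৮ এর গসাগু কত?")
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=512)
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Since the model is SFT-only (no documented preference tuning), greedy or low-temperature decoding is a reasonable default for math questions, where determinism matters more than diversity.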