KissanAI/ThinkingDhenu1-CRSA-India-preview
Text Generation · Concurrency Cost: 1 · Model Size: 4B · Quant: BF16 · Ctx Length: 32k · Published: May 14, 2025 · Architecture: Transformer

KissanAI/ThinkingDhenu1-CRSA-India-preview is an experimental research preview model based on the Qwen3 decoder-only causal-LM architecture with a 32k context length. Developed by KissanAI, it is specifically fine-tuned for Climate-Resilient and Sustainable Agriculture (CRSA) recommendations tailored to Indian conditions. This model excels at providing agronomic advice, particularly for organic practices, climate-smart cropping, and pest/soil management, utilizing a unique "chain-of-thought + answer" training format.


Overview

KissanAI/ThinkingDhenu1-CRSA-India-preview is an experimental research preview model developed by KissanAI, based on the Qwen/Qwen3-4B architecture. It is a decoder-only causal-LM with a 32k context window, fine-tuned using Supervised Fine-Tuning (SFT) via llama-factory.
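Since the model follows the standard Qwen3 decoder-only causal-LM layout, it should load through the usual Hugging Face transformers chat workflow. The sketch below is illustrative only: the chat-template details and generation settings are assumptions to verify against the model card, and the sample question is hypothetical.

```python
def build_messages(question: str) -> list[dict]:
    """Wrap a single agronomic query in the chat-message format
    expected by Hugging Face chat templates."""
    return [{"role": "user", "content": question}]

if __name__ == "__main__":
    # transformers is imported inside the guard so the prompt helper above
    # can be used without the heavy model dependencies installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_ID = "KissanAI/ThinkingDhenu1-CRSA-India-preview"
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # BF16 matches the quantization listed on this page.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")

    messages = build_messages(
        "Suggest climate-smart kharif crops for sandy loam soil in Rajasthan."
    )
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )
    output_ids = model.generate(input_ids, max_new_tokens=512)
    # Decode only the newly generated tokens.
    print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:],
                           skip_special_tokens=True))
```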

Key Capabilities

  • Climate-Resilient and Sustainable Agriculture (CRSA) Recommendations: Provides advice tailored to Indian agricultural conditions, including organic practices (APCNF), climate-smart cropping, integrated pest management (IPM), and soil/nutrient management.
  • Agronomic Query Answering: Functions as a decision-support micro-service for agronomic questions.
  • Content Generation: Can generate material for agricultural extension services.
  • Specialized Training: Utilizes the KissanAI/Thinking-climate-100k dataset, consisting of 101k multi-turn dialogues with a unique "chain-of-thought + answer" format to separate private reasoning from the final answer.
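Because the training format separates private reasoning from the final answer, downstream applications typically want to strip the reasoning before showing output to a farmer-facing UI. The helper below assumes the Qwen3-style convention of wrapping reasoning in `<think>...</think>` tags; that tag convention is an assumption, not something stated on this page, and the sample text is hypothetical.

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Split generated text into (reasoning, answer).

    Assumes reasoning is wrapped in <think>...</think> tags, as in
    Qwen3-style thinking models; returns an empty reasoning string
    if no such block is present.
    """
    match = re.search(r"<think>(.*?)</think>", text, re.DOTALL)
    if match:
        return match.group(1).strip(), text[match.end():].strip()
    return "", text.strip()

# Hypothetical model output for illustration:
raw = ("<think>Sandy loam drains quickly; mulching retains moisture.</think>"
       "Apply organic mulch after sowing to conserve soil moisture.")
reasoning, answer = split_reasoning(raw)
```

In a decision-support service, `answer` would be surfaced to the user while `reasoning` is logged for auditing.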

Intended Use Cases

This model is primarily intended to assist farmers, agronomists, and ag-tech developers with CRSA recommendations for Indian agriculture, and is suitable for decision-support systems and for generating educational content. Note that the model may embed an agronomic bias toward Indian Natural Farming practices, and its climate data is static (2024), so recommendations should be checked against current advisories. Because LLMs can hallucinate, high-stakes advice should always be validated by qualified professionals.