qingy2024/GRMR-V3-Q4B

Text generation · Model size: 4B · Quant: BF16 · Context length: 32k · Published: Jun 3, 2025 · License: apache-2.0 · Architecture: Transformer

GRMR-V3-Q4B by qingy2024 is a 4 billion parameter Qwen3-based language model specifically fine-tuned for grammar correction tasks. It excels at fixing grammatical errors, punctuation, spelling, and improving text clarity in English. The model utilizes a specialized chat template to distinguish between original and corrected content, making it ideal for applications requiring precise text refinement.


GRMR-V3-Q4B: A Specialized Grammar Correction Model

GRMR-V3-Q4B, developed by qingy2024, is a 4 billion parameter model built upon the unsloth/Qwen3-4B-Base architecture. It has undergone full parameter fine-tuning on the qingy2024/grmr-v4-60k dataset, which comprises 60,000 grammar correction examples, to deliver robust performance in text refinement.

Key Capabilities

  • Grammar Correction: Identifies and rectifies grammatical errors.
  • Punctuation & Spelling: Corrects punctuation and spelling mistakes.
  • Clarity Improvement: Enhances overall sentence structure and text clarity.
  • Specialized Chat Template: Uses a unique template (<|text_start|> for input, <|corrected_start|> for output) for clear distinction between original and corrected text.
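The special-token template above can be sketched as a small prompt-building helper. This is a hedged illustration based only on the two delimiters named in this card (`<|text_start|>` and `<|corrected_start|>`); the model's actual chat template may insert additional tokens, so verify against the tokenizer's configured template before relying on it:

```python
def build_grmr_prompt(text: str) -> str:
    """Wrap raw input text in GRMR-V3's delimiters.

    Assumption: the original text follows <|text_start|> and the model's
    completion (the corrected text) begins after <|corrected_start|>.
    Check tokenizer.chat_template on the actual model for the exact format.
    """
    return f"<|text_start|>{text}<|corrected_start|>"


# Example: the model would be expected to complete this prompt with
# a corrected version of the input sentence.
prompt = build_grmr_prompt("He go to school yesterday.")
```

In practice, prefer `tokenizer.apply_chat_template(...)` if the model ships a chat template, since that guarantees the delimiters match what the model saw during fine-tuning.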

Good for

  • Automated proofreading and editing tools.
  • Improving the quality of written English content.
  • Applications requiring precise and consistent grammar correction.

Important Notes

For optimal performance, use the recommended sampler settings: temperature = 0.7, top_p = 0.95, and top_k = 40. Training used the Unsloth framework for efficiency and was conducted with a maximum sequence length of 16,384 tokens. While highly effective for general grammar correction, the model may struggle with highly technical or domain-specific content and with non-standard English.
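The recommended sampler settings can be passed directly to a standard `transformers` generation call. The settings below come from this card; the loading code is a sketch (it assumes the `transformers` library and downloads the model weights), not an official snippet:

```python
# Sampler settings recommended on the model card.
SAMPLER_KWARGS = dict(
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    do_sample=True,  # sampling must be enabled for temperature/top_p/top_k to apply
)

# Sketch of usage (requires `transformers` and ~8 GB for BF16 weights):
#
# from transformers import AutoModelForCausalLM, AutoTokenizer
#
# tok = AutoTokenizer.from_pretrained("qingy2024/GRMR-V3-Q4B")
# model = AutoModelForCausalLM.from_pretrained("qingy2024/GRMR-V3-Q4B")
#
# inputs = tok("<|text_start|>their going to the store<|corrected_start|>",
#              return_tensors="pt")
# out = model.generate(**inputs, max_new_tokens=256, **SAMPLER_KWARGS)
# print(tok.decode(out[0], skip_special_tokens=True))
```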