qingy2024/GRMR-V3-L1B

  • Task: Text Generation
  • Model Size: 1B
  • Quantization: BF16
  • Context Length: 32k
  • Published: Jun 3, 2025
  • License: apache-2.0
  • Architecture: Transformer

GRMR-V3-L1B by qingy2024 is a 1-billion-parameter, Llama 3.2-based causal language model fine-tuned specifically for grammar correction. It fixes grammatical errors, punctuation mistakes, and spelling in English text, and uses a specialized chat template to clearly separate input from output. The model was trained on the qingy2024/grmr-v4-60k dataset.


GRMR-V3-L1B: Grammar Correction Model

GRMR-V3-L1B is a 1 billion parameter model developed by qingy2024, fine-tuned from unsloth/Llama-3.2-1B for dedicated grammar correction. It excels at identifying and rectifying common language issues in English text.

Key Capabilities

  • Grammar Correction: Fixes grammatical errors, including verb tense, subject-verb agreement, and sentence structure.
  • Punctuation & Spelling: Corrects punctuation mistakes and spelling errors.
  • Text Quality Improvement: Enhances overall clarity and correctness of written content.
  • Specialized Chat Template: Uses a chat template with `text` and `corrected` headers to structure input and output, keeping the original and revised content clearly separated.
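To illustrate the `text`/`corrected` structure described above, here is a minimal sketch of what the prompt might look like. The header names come from the model card, but the exact delimiter tokens shown are an assumption (Llama-3-style markers); in practice, load the tokenizer and use its bundled chat template rather than hand-building strings.

```python
# Sketch of the prompt structure implied by the card's "text" / "corrected"
# headers. The delimiter tokens below are assumed Llama-3-style markers,
# not confirmed details of this model's template.

def build_grammar_prompt(text: str) -> str:
    """Wrap raw input under a 'text' header and open a 'corrected'
    header for the model to complete."""
    return (
        "<|start_header_id|>text<|end_header_id|>\n\n"
        f"{text}<|eot_id|>"
        "<|start_header_id|>corrected<|end_header_id|>\n\n"
    )

prompt = build_grammar_prompt("she go to school every days.")
print(prompt)
```

The key point is simply that the original sentence and the model's revision live under distinct headers, so downstream code can extract the corrected text unambiguously.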

Training and Usage

The model was fine-tuned with full-parameter training on the qingy2024/grmr-v4-60k dataset of 60,000 grammar-correction examples, and it supports a maximum sequence length of 16,384 tokens. For best results, the recommended sampler settings include a temperature of 0.7. While effective for general grammar correction, the model may struggle with highly technical or domain-specific content and with non-standard English.
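The usage notes above can be sketched as a small inference helper using the `transformers` library. This is a hypothetical sketch, not the author's reference code: only the temperature of 0.7 comes from the card, while the other sampler values and the `"user"` role name are assumptions (the tokenizer's bundled chat template handles the actual `text`/`corrected` formatting).

```python
# Minimal inference sketch for qingy2024/GRMR-V3-L1B using transformers.
# Only temperature=0.7 is from the model card; other sampler values are
# placeholders, and the "user" role name is an assumption.

SAMPLER_SETTINGS = {
    "do_sample": True,
    "temperature": 0.7,    # recommended by the model card
    "max_new_tokens": 512, # placeholder budget for the corrected text
}

def correct_grammar(text: str, model_id: str = "qingy2024/GRMR-V3-L1B") -> str:
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

    # The bundled chat template wraps the input and prompts the model
    # for the corrected output.
    inputs = tokenizer.apply_chat_template(
        [{"role": "user", "content": text}],
        add_generation_prompt=True,
        return_tensors="pt",
    )
    output = model.generate(inputs, **SAMPLER_SETTINGS)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(output[0, inputs.shape[-1]:], skip_special_tokens=True)
```

Loading the model lazily inside the function keeps the import cost out of module load; for repeated calls you would cache the model and tokenizer instead.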