qingy2024/GRMR-2B-Instruct-old

Text Generation · Model size: 2.6B · Quant: BF16 · Context length: 8k · Published: Dec 11, 2024 · License: apache-2.0 · Architecture: Transformer

qingy2024/GRMR-2B-Instruct-old is a 2.6 billion parameter instruction-tuned language model developed by qingy2024, fine-tuned from unsloth/gemma-2-2b-bnb-4bit. The model specializes in grammar correction: it takes input text and returns it with grammatical errors corrected. Its 8192-token context length makes it suitable for moderately sized text inputs in grammar refinement tasks.


GRMR-2B-Instruct-old: Grammar Correction Model

The GRMR-2B-Instruct-old model, developed by qingy2024, is a 2.6 billion parameter language model specifically fine-tuned for grammar correction. It was fine-tuned from the unsloth/gemma-2-2b-bnb-4bit checkpoint (a 4-bit quantized Gemma 2 2B) for 300 steps to enhance its ability to identify and correct grammatical errors in text.

Key Capabilities

  • Grammar Correction: The primary function of this model is to receive input text and output a grammatically corrected version.
  • Instruction-Tuned: It is designed to follow instructions for text rephrasing with a focus on grammatical accuracy.
  • Base Model: Fine-tuned from Gemma 2 2B, leveraging its foundational language understanding.
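As a sketch, the model could be called through the Hugging Face `transformers` library. The `build_prompt` helper below, and the assumption that this fine-tune follows the standard Gemma 2 chat turn markers, are illustrative rather than taken from the model card; check the model's tokenizer configuration before relying on them:

```python
def build_prompt(text: str) -> str:
    """Wrap raw input text in Gemma 2 style chat turn markers
    (assumed format; verify against the model's chat template)."""
    return (
        "<start_of_turn>user\n"
        f"{text}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )


def correct_grammar(text: str,
                    model_id: str = "qingy2024/GRMR-2B-Instruct-old") -> str:
    """Load the model and generate a grammatically corrected version of `text`."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    inputs = tokenizer(build_prompt(text), return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=256)

    # Decode only the newly generated tokens, i.e. the correction itself.
    return tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:],
        skip_special_tokens=True,
    )
```

Since the text stays within the 8192-token context window, a single `correct_grammar(...)` call suffices for most inputs; longer documents would need to be chunked first.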

Example Use Case

Consider the following example demonstrating its grammar correction capability:

User Input: "Find a clip from a professional production of any musical within the past 50 years. The Tony awards have a lot of great options of performances of Tony nominated performances in the archives on their websites."

Model Output: "Find a clip from a professional production of any musical within the past 50 years. The Tony Awards have a lot of great options of performances of Tony-nominated performances in their archives on their websites."
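The edits the model made in this example can be surfaced programmatically. The snippet below, purely for illustration, uses Python's standard-library `difflib` to compare the corrected sentence against the original word by word:

```python
import difflib

# The second sentence from the example above, before and after correction.
before = ("The Tony awards have a lot of great options of performances "
          "of Tony nominated performances in the archives on their websites.")
after = ("The Tony Awards have a lot of great options of performances "
         "of Tony-nominated performances in their archives on their websites.")

# Word-level diff: keep only the spans the model changed.
changes = [
    (" ".join(before.split()[i1:i2]), " ".join(after.split()[j1:j2]))
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(
        None, before.split(), after.split()
    ).get_opcodes()
    if tag != "equal"
]

for old, new in changes:
    print(f"{old!r} -> {new!r}")
```

This surfaces the three corrections: the capitalization of "Awards", the hyphenation of "Tony-nominated", and the substitution of "their" for "the".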

This model is particularly useful for applications requiring automated text refinement and grammatical accuracy.