qingy2024/GRMR-2B-Instruct
GRMR-2B-Instruct is a 2.6-billion-parameter instruction-tuned language model by qingy2024, fine-tuned from unsloth/gemma-2-2b-bnb-4bit. It rewrites input text with corrected grammar, improved clarity, and enhanced readability. With an 8192-token context length, it is well suited to grammar-correction tasks and applications that require polished text output.
Overview
GRMR-2B-Instruct acts as a grammar and clarity enhancer: given a passage of text, it returns a grammatically corrected and more readable version. Built on the gemma-2-2b architecture, its 8192-token context length allows it to process substantial inputs in a single pass.
Key Capabilities
- Grammar Correction: Identifies and rectifies grammatical errors in input text.
- Clarity Improvement: Enhances the overall clarity and coherence of sentences.
- Readability Enhancement: Adjusts text to improve its flow and ease of understanding.
- Instruction-Tuned: Optimized for specific text rewriting tasks based on a custom chat template.
Usage Recommendations
For optimal results, use a temperature of 0.0 and a repeat_penalty of 1.0 when interacting with this model. The model expects its custom chat template, which wraps the original text and instructs the model to output a corrected version. Typical corrections include fixes to capitalization and hyphenation alongside broader grammatical repairs.
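The recommended settings can be sketched as a request payload for a locally served copy of the model. This is a minimal sketch assuming a llama.cpp-style server exposing a /completion endpoint on localhost; the endpoint URL, the output cap, and the helper names are illustrative assumptions, while the sampling values (temperature 0.0, repeat_penalty 1.0) come from the recommendations above. In practice the prompt should be wrapped in the model's own chat template before sending.

```python
import json
import urllib.request


def build_request(text: str) -> dict:
    """Build a completion payload using the model card's recommended sampling settings."""
    return {
        # Assumption: in real use, wrap `text` with the model's custom chat template.
        "prompt": text,
        "temperature": 0.0,     # deterministic decoding, as recommended
        "repeat_penalty": 1.0,  # no repetition penalty, as recommended
        "n_predict": 512,       # illustrative output cap, well under the 8192-token context
    }


def correct_text(text: str, url: str = "http://localhost:8080/completion") -> str:
    """Send the payload to a hypothetical local server and return the corrected text."""
    req = urllib.request.Request(
        url,
        data=json.dumps(build_request(text)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["content"]
```

Pinning temperature to 0.0 makes the rewrite deterministic, which is usually what you want for grammar correction: the same input should always yield the same corrected output.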