tartuNLP/Llammas-base-p1-GPT-4o-human-error-mix-paragraph-GEC is a 7-billion-parameter language model with a 4096-token context window, developed by tartuNLP. It is fine-tuned specifically for Grammatical Error Correction (GEC), using a training mix of GPT-4o-generated and human-annotated error data. Its primary use case is identifying and correcting grammatical errors at the paragraph level, making it suitable for applications that require high-quality text refinement.
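A minimal sketch of how the model could be loaded for paragraph-level correction with the Hugging Face transformers library. The instruction template in `build_prompt` is an assumption for illustration; check the model card for the exact prompt format used during fine-tuning.

```python
MODEL_ID = "tartuNLP/Llammas-base-p1-GPT-4o-human-error-mix-paragraph-GEC"

def build_prompt(paragraph: str) -> str:
    # Hypothetical instruction wrapper -- the template the model was
    # actually fine-tuned with may differ.
    return (
        "Correct the grammatical errors in the following paragraph:\n"
        f"{paragraph}\n"
    )

def correct(paragraph: str, max_new_tokens: int = 512) -> str:
    # Imported here so the prompt helper above works without transformers.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(build_prompt(paragraph), return_tensors="pt")
    # Prompt plus generated tokens must fit the 4096-token context window.
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the echoed prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```

For example, `correct("He go to school every days.")` would return the model's corrected version of the paragraph.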