Quill-v1: A Model for Human-like Creative Writing
Quill-v1 is a 9-billion-parameter language model developed by sam-paech, fine-tuned from the gemma-2-9b-it base model. Its primary distinction is its ability to produce human-like writing with a natural cadence, specifically designed to avoid the "gpt-slop" common in AI-generated text. The model was fine-tuned using a combination of the ORPO and SimPO training methods.
Key Capabilities
- Classic Literary Style: Quill-v1 was trained extensively on the Gutenberg3 dataset, a corpus of late-19th- and early-20th-century fiction. This training imbues the model with a distinct, spare prose style reminiscent of classic literature.
- High Creative Writing Performance: The model achieved a score of 79.75 on the EQ-Bench creative writing benchmark, indicating strong performance in generating engaging and nuanced creative content.
- Reduced AI Artifacts: Through its specialized training on human-authored texts and specific fine-tuning techniques, Quill-v1 aims to minimize typical AI-generated linguistic patterns.
Good For
- Creative Writing: Ideal for generating fiction, short stories, or narrative passages that require a more traditional, human-authored feel.
- Historical Fiction & Period Pieces: Its training data makes it particularly well-suited for content that benefits from a style evocative of earlier literary periods.
- Content Requiring Natural Cadence: Use cases where the natural flow and rhythm of language are paramount, rather than just factual accuracy or conciseness.
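As a minimal usage sketch: since Quill-v1 is a fine-tune of gemma-2-9b-it, it presumably inherits Gemma-2's chat turn format. The snippet below builds a creative-writing prompt in that format by hand; the Hub repo id `sam-paech/Quill-v1` and the commented generation call are assumptions, not confirmed details from this card.

```python
# Sketch: prompting Quill-v1 for a short piece of fiction.
# Assumption: the model is published on the Hugging Face Hub as
# "sam-paech/Quill-v1" and, as a gemma-2-9b-it fine-tune, uses
# Gemma-2's <start_of_turn>/<end_of_turn> chat markers.

def format_gemma2_prompt(user_message: str) -> str:
    """Wrap a single user turn in Gemma-2's chat template."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = format_gemma2_prompt(
    "Write the opening paragraph of a short story set in a "
    "fishing village in 1905, in a spare, understated style."
)

# A typical generation call would look like this (requires the
# `transformers` library, the model weights, and enough GPU memory
# for a 9B model; in practice you would let the tokenizer's own
# chat template do the formatting above):
#
#   from transformers import pipeline
#   generator = pipeline(
#       "text-generation", model="sam-paech/Quill-v1",
#       device_map="auto", torch_dtype="auto",
#   )
#   out = generator(prompt, max_new_tokens=300,
#                   do_sample=True, temperature=0.9)
#   print(out[0]["generated_text"])
```

Sampling (rather than greedy decoding) is the usual choice for creative writing, since a little temperature helps the model exercise the varied cadence the card describes.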