XtraGPT-1.5B: Context-Aware Academic Paper Revision
XtraGPT-1.5B is a 1.5-billion-parameter model in the XtraGPT family developed by Xtra-Computing, designed for human-AI collaborative academic paper revision. Unlike general-purpose LLMs, it is specifically fine-tuned to understand the full context of a research paper and execute precise, criteria-guided revision instructions. It is built on the Qwen/Qwen2.5-1.5B-Instruct architecture and trained on 140,000 high-quality instruction-revision pairs derived from top-tier conference papers.
Key Capabilities
- Context-Aware Revision: Processes the entire paper to ensure revisions maintain consistency with the global narrative and context.
- Controllable Output: Follows specific user instructions aligned with 20 academic writing criteria across 6 paper sections (e.g., Abstract, Introduction).
- Human-AI Collaboration: Supports an iterative workflow where authors retain creative control over the revision process.
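A minimal sketch of how such a criteria-guided revision request might be assembled and sent to the model with Hugging Face `transformers`. The prompt wording, field names, and helper function below are illustrative assumptions, not XtraGPT's documented input format:

```python
from typing import Dict, List


def build_revision_messages(paper_text: str, selected_text: str,
                            section: str, criterion: str) -> List[Dict[str, str]]:
    """Assemble a chat-style request: the full paper as context plus a
    criteria-guided instruction targeting one selected passage.

    NOTE: the prompt template here is a hypothetical example, not the
    model's official format.
    """
    instruction = (
        f"Here is the full paper for context:\n{paper_text}\n\n"
        f"Revise the following {section} passage to improve its "
        f"'{criterion}' quality, keeping it consistent with the rest "
        f"of the paper:\n{selected_text}"
    )
    return [{"role": "user", "content": instruction}]


# The messages can then be fed through the usual transformers
# chat-template workflow (inherited from the Qwen2.5 base model):
#
#   from transformers import AutoModelForCausalLM, AutoTokenizer
#   tok = AutoTokenizer.from_pretrained("Xtra-Computing/XtraGPT-1.5B")
#   model = AutoModelForCausalLM.from_pretrained("Xtra-Computing/XtraGPT-1.5B")
#   ids = tok.apply_chat_template(messages, add_generation_prompt=True,
#                                 return_tensors="pt")
#   out = model.generate(ids, max_new_tokens=512)
#   print(tok.decode(out[0][ids.shape[-1]:], skip_special_tokens=True))
```

Keeping the whole paper in the prompt is what lets the model align a local revision with the global narrative; in an iterative workflow, the author reviews each suggestion and re-issues the request with a different criterion as needed.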
Good For
- Academic researchers and authors seeking AI assistance for refining their papers.
- Automating and standardizing academic writing improvements based on specific criteria.
- Integrating AI into a human-in-the-loop paper revision pipeline.
This model is released under the permissive ModelGo Zero License 2.0 (MG0-2.0), which permits unrestricted use, reproduction, distribution, and creation of derivative works, including commercial use, without attribution or copyleft requirements.