wphuirtp/paper_helper
Text Generation
- Concurrency Cost: 1
- Model Size: 14.8B
- Quant: FP8
- Ctx Length: 32k
- Published: Feb 5, 2026
- License: apache-2.0
- Architecture: Transformer
- Open Weights
- Status: Cold
wphuirtp/paper_helper is a 14.8 billion parameter Qwen2-based language model, fine-tuned by wphuirtp from unsloth/DeepSeek-R1-Distill-Qwen-14B-unsloth-bnb-4bit. It was trained with Unsloth and Hugging Face's TRL library, achieving roughly 2x faster training. The training data includes content from books on harmonic maps and geometric analysis, alongside the unsloth/OpenMathReasoning-mini dataset, suggesting a specialization in mathematical reasoning and complex analytical topics. The model supports a substantial context length of 131072 tokens, making it suitable for processing extensive documents.
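For readers who want to try the model directly, the sketch below shows one way to load it with the Hugging Face transformers library. It assumes the weights are available on the Hugging Face Hub under the repo id wphuirtp/paper_helper and that the model inherits a standard chat template from its DeepSeek-R1-Distill-Qwen-14B base; the dtype and generation settings are illustrative choices, not documented defaults.

```python
# Minimal usage sketch. Assumptions: the repo id below exists on the
# Hugging Face Hub, and the tokenizer ships a chat template inherited
# from the DeepSeek-R1-Distill-Qwen-14B base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "wphuirtp/paper_helper"  # assumed Hub repo id, taken from the page title

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # assumed dtype; the page lists an FP8 quant for serving
    device_map="auto",
)

# Build a chat-formatted prompt that matches the card's stated specialization.
messages = [
    {"role": "user", "content": "Summarize the key idea behind harmonic maps."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate and decode only the newly produced tokens.
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Since the base model is an R1-distilled reasoning model, generations may begin with an extended chain-of-thought section before the final answer; budget max_new_tokens accordingly.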