NFT-32B is a 32.5-billion-parameter math reasoning model developed by NVIDIA, Tsinghua University, and Stanford University. Fine-tuned from Qwen2.5-32B with the Negative-aware Fine-Tuning (NFT) algorithm, it learns from both its correct and its incorrect answers, improving its performance autonomously. The model excels at competition-level mathematics and general mathematical reasoning and supports a context length of up to 131,072 tokens.
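Below is a minimal usage sketch with the Hugging Face `transformers` library, assuming the model follows the standard causal-LM interface of its Qwen2.5-32B base. The repo id `NVlabs/NFT-32B` is a placeholder assumption, not a confirmed Hub location; substitute the actual id when loading.

```python
# Minimal inference sketch (unofficial). "NVlabs/NFT-32B" is a
# hypothetical repo id; replace it with the model's actual Hub id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NVlabs/NFT-32B"  # assumption, see note above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # ~65 GB of weights at bf16 for 32.5B params
    device_map="auto",           # shard across available GPUs
)

# A competition-style math prompt, formatted with the model's chat template
messages = [{"role": "user", "content": "Find all real x with x^2 - 5x + 6 = 0."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

For long-context use, the same code applies; inputs can extend toward the 131,072-token limit, memory permitting.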