nvidia/NFT-32B
Text generation · Model size: 32.8B · Quant: FP8 · Context length: 32k · Published: Jun 17, 2025 · License: nvidia-non-commercial-license · Architecture: Transformer · Concurrency cost: 2

NFT-32B is a 32.5 billion parameter math reasoning model developed by NVIDIA, Tsinghua University, and Stanford University. Fine-tuned from Qwen2.5-32B using the Negative-aware Fine-Tuning (NFT) algorithm, it learns from both correct and incorrect answers to autonomously improve performance. This model excels at competition-level mathematics and general mathematical reasoning, supporting a context length of up to 131,072 tokens.
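The core idea of Negative-aware Fine-Tuning is that incorrect rollouts carry training signal too, rather than being discarded as in plain rejection-sampling SFT. The snippet below is a toy, unlikelihood-style sketch of that idea only, not NVIDIA's actual NFT objective (which constructs an implicit negative policy); the function name, weighting scheme, and inputs are all illustrative assumptions.

```python
def negative_aware_loss(logps, is_correct, neg_weight=0.5):
    """Toy negative-aware loss over a batch of sampled answers.

    logps      -- per-sample total log-probability of the sampled answer
    is_correct -- per-sample boolean verdict from an answer checker
    neg_weight -- how strongly to push down on incorrect answers
                  (hypothetical knob; the real NFT algorithm differs)

    Correct answers contribute a standard negative-log-likelihood term;
    incorrect answers contribute an unlikelihood-style penalty that
    discourages the model from assigning them high probability.
    """
    pos_terms = [-lp for lp, ok in zip(logps, is_correct) if ok]
    neg_terms = [lp for lp, ok in zip(logps, is_correct) if not ok]
    return (sum(pos_terms) + neg_weight * sum(neg_terms)) / len(logps)


# One correct answer (logp -1.0) and one confident wrong answer (logp -3.0):
loss = negative_aware_loss([-1.0, -3.0], [True, False])
```

With plain SFT the incorrect sample would simply be dropped; here it still moves the loss, which is the property the NFT paper exploits at scale.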
