teolm30/fox1.4
TEXT GENERATION
Model Size: 0.5B · Quant: BF16 · Ctx Length: 32k · Concurrency Cost: 1
Published: Apr 4, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

teolm30/fox1.4 is a 0.5-billion-parameter language model built on the Qwen2.5-0.5B architecture and merged with a LoRA adapter. It is trained on combined data from math, logic, knowledge, and code reasoning tasks, making it a specialist in those domains. It scores 100% on a custom 10-question reasoning benchmark and is optimized for tasks requiring logical deduction and problem-solving.
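The description says the base model was merged with a LoRA adapter. As a minimal numerical sketch of what such a merge does (assuming a standard LoRA formulation; this model's actual adapter rank and scaling are not stated, and the toy sizes below are illustrative only):

```python
import numpy as np

# Standard LoRA stores two low-rank matrices: A (r x d_in) and
# B (d_out x r). Merging folds scale * B @ A into the frozen base
# weight, so inference afterwards needs no extra adapter matmul.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 4   # toy sizes; real values unknown

W = rng.standard_normal((d_out, d_in))   # frozen base weight
A = rng.standard_normal((r, d_in))       # LoRA down-projection
B = rng.standard_normal((d_out, r))      # LoRA up-projection (post-training)

W_merged = W + (alpha / r) * (B @ A)     # the "merge" step

# The merged weight gives the same output as base + adapter path.
x = rng.standard_normal(d_in)
y_adapter = W @ x + (alpha / r) * (B @ (A @ x))
y_merged = W_merged @ x
print(np.allclose(y_adapter, y_merged))  # → True
```

Merging trades the ability to hot-swap adapters for a single weight matrix with zero inference overhead, which suits a small model like this one.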
