ParetoQaft/8B-Tulu-full
Text generation · Concurrency cost: 1 · Model size: 8B · Quantization: FP8 · Context length: 32k · Published: Jan 10, 2026 · License: llama3.1 · Architecture: Transformer

ParetoQaft/8B-Tulu-full is an 8-billion-parameter instruction-following model from the Allen Institute for AI, fine-tuned from Llama-3.1-8B. It belongs to the Tülu 3 family, known for its fully open post-training data and recipes. The model targets strong performance across diverse tasks, including mathematical reasoning (MATH, GSM8K) and instruction following (IFEval), and supports a 32768-token context length.
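As a rough sketch, a model like this is typically served behind an OpenAI-compatible chat endpoint. The helper below builds a request payload that respects the 32768-token context window; the parameter choices and the 4-characters-per-token estimate are illustrative assumptions, not part of the model card.

```python
def build_chat_request(prompt: str,
                       model: str = "ParetoQaft/8B-Tulu-full",
                       max_context: int = 32768,
                       max_new_tokens: int = 512) -> dict:
    """Build a chat-completion payload for an OpenAI-compatible server.

    max_context reflects the model's 32768-token window; the requested
    completion is capped so that prompt + completion stays within it,
    using a crude 4-chars-per-token estimate (not a real tokenizer).
    """
    est_prompt_tokens = max(1, len(prompt) // 4)
    budget = max(0, max_context - est_prompt_tokens)
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": min(max_new_tokens, budget),
        "temperature": 0.7,
    }

req = build_chat_request("Solve: what is 17 * 23?")
```

The resulting dict can be posted as JSON to any server exposing the standard `/v1/chat/completions` route; only the `model` string above comes from this page, everything else is a placeholder.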
