allenai/Llama-3.1-Tulu-3-8B-SFT is an 8-billion-parameter instruction-following model developed by the Allen Institute for AI, fine-tuned from Meta's Llama 3.1 8B base model. It is part of the Tülu 3 family, which releases fully open data, code, and recipes for its post-training pipeline. The model is designed for strong performance across diverse tasks, including chat, mathematical reasoning (MATH, GSM8K), and instruction following (IFEval), with a context length of 32,768 tokens.
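For reference, here is a minimal sketch of loading the model and running chat-style generation with the Hugging Face transformers library; the dtype, device placement, and generation settings are illustrative assumptions, not recommendations from the model card.

```python
# Minimal usage sketch (assumes transformers, torch, and enough GPU
# memory for an 8B model; settings below are illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/Llama-3.1-Tulu-3-8B-SFT"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to fit the 8B weights
    device_map="auto",
)

# The model is instruction-tuned, so format the prompt with its chat template.
messages = [{"role": "user", "content": "What is 17 * 24?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```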