allenai/llama-3.1-tulu-2-dpo-70b
Text Generation · Open Weights
Concurrency Cost: 4
Model Size: 70B
Quantization: FP8
Context Length: 32k
Published: Aug 9, 2024
License: apache-2.0
Architecture: Transformer

allenai/llama-3.1-tulu-2-dpo-70b is a 70 billion parameter language model developed by AllenAI, fine-tuned from Meta-Llama-3.1-70B. It is designed as a helpful assistant: it was trained on a diverse mix of public, synthetic, and human-written datasets, then further aligned with DPO (Direct Preference Optimization) on the UltraFeedback dataset. The model is intended for instruction following and general conversational tasks, with improved truthfulness and safety compared to its base model.
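Hosted chat models like this one are typically queried through an OpenAI-compatible chat completions API. The sketch below builds such a request payload for this model; the exact endpoint URL, authentication, and supported parameters depend on the hosting provider and are assumptions, not taken from this page:

```python
import json

def build_chat_request(user_message: str,
                       model: str = "allenai/llama-3.1-tulu-2-dpo-70b",
                       max_tokens: int = 256) -> dict:
    """Assemble an OpenAI-style chat completions payload.

    The request shape follows the common chat-completions convention;
    the system prompt and parameter choices here are illustrative.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("Explain DPO alignment in one sentence.")
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to the provider's chat completions endpoint with an appropriate API key; keeping the model's 32k context length in mind when sizing prompts.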