allenai/tulu-v2.5-dpo-13b-nectar
Text Generation
Concurrency Cost: 1 | Model Size: 13B | Quant: FP8 | Context Length: 4k
Published: Jun 11, 2024 | License: apache-2.0 | Architecture: Transformer | Open Weights | Cold

allenai/tulu-v2.5-dpo-13b-nectar is a 13-billion-parameter language model from AllenAI, fine-tuned from Llama-2-13b-hf. It is part of the Tulu V2.5 series and was trained with Direct Preference Optimization (DPO) on the Nectar preference dataset to serve as a helpful assistant. The model is optimized for generating aligned, preference-tuned responses in English, building on a mix of publicly available, synthetic, and human-created datasets.
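As a rough sketch of how a prompt for this model might be constructed, the snippet below builds a single-turn prompt using the `<|user|>` / `<|assistant|>` turn markers documented for the Tulu V2 series; the helper function name is illustrative, not part of any official API, and the exact template should be confirmed against the model's tokenizer configuration.

```python
# Illustrative sketch: building a prompt in the Tulu-style chat format
# ("<|user|>" / "<|assistant|>" turn markers). The function name is an
# assumption for this example, not an official API.

def format_tulu_prompt(user_message: str) -> str:
    """Wrap a user message so the model generates the assistant turn next."""
    return f"<|user|>\n{user_message}\n<|assistant|>\n"

prompt = format_tulu_prompt("What is Direct Preference Optimization?")
print(prompt)
```

In practice, such a prompt string would be tokenized and passed to the model (for example via Hugging Face `transformers`, loading `allenai/tulu-v2.5-dpo-13b-nectar` with `AutoModelForCausalLM`), with generation stopping at the end of the assistant turn.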
