allenai/tulu-v2.5-dpo-13b-argilla-orca-pairs
Text Generation · Concurrency Cost: 1 · Model Size: 13B · Quant: FP8 · Ctx Length: 4K · Published: Jun 11, 2024 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

The allenai/tulu-v2.5-dpo-13b-argilla-orca-pairs model is a 13 billion parameter language model from the Allen Institute for AI (AI2), fine-tuned from Llama-2-13b-hf. It belongs to the Tulu V2.5 series and was trained with DPO (Direct Preference Optimization) on the Argilla-cleaned version of the Intel Orca DPO pairs dataset. The model is intended to act as a helpful assistant for instruction-following tasks.
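
Below is a minimal sketch of how one might load the model with the Hugging Face transformers library and run a single instruction-following prompt. It assumes enough GPU memory for a 13B model; the `<|user|>` / `<|assistant|>` prompt format follows the convention documented for the Tulu series and should be verified against the model card.

```python
# Minimal usage sketch (assumptions: standard transformers API,
# Tulu-style chat template, sufficient GPU memory for a 13B model).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/tulu-v2.5-dpo-13b-argilla-orca-pairs"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Tulu-style prompt format (assumption: confirm the exact template on the model card).
prompt = (
    "<|user|>\n"
    "Explain Direct Preference Optimization in one paragraph.\n"
    "<|assistant|>\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Print only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```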
