Danielbrdz/Barcenas-Tiny-1.1b-DPO
Text Generation · Concurrency Cost: 1 · Model Size: 1.1B · Quant: BF16 · Context Length: 2k · Published: Jan 20, 2024 · License: apache-2.0 · Architecture: Transformer · Open Weights

Danielbrdz/Barcenas-Tiny-1.1b-DPO is a 1.1 billion parameter causal language model built on TinyLlama/TinyLlama-1.1B-Chat-v1.0 and fine-tuned with Direct Preference Optimization (DPO) on the Intel/orca_dpo_pairs dataset. The goal is to improve response quality while keeping the model compact, making it suitable for applications that need efficient, small-scale language processing.
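To make the DPO fine-tuning step concrete, here is a minimal sketch of the standard DPO loss for one preference pair. This is an illustration of the general DPO objective, not code taken from this model's actual training run; the function name, the example log-probabilities, and the default `beta=0.1` are assumptions for demonstration.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for a single (chosen, rejected) preference pair.

    Each argument is the summed log-probability of a full response under
    the trainable policy or the frozen reference model. beta (hypothetical
    default here) scales the implicit reward.
    """
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    margin = chosen_reward - rejected_reward
    # -log(sigmoid(margin)): the loss drops below log(2) once the policy
    # prefers the chosen response more than the reference does.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Illustrative values: a policy that favors the chosen response
# (margin > 0) incurs a loss below log(2) ≈ 0.693.
print(dpo_loss(-10.0, -12.0, -11.0, -11.0))
print(dpo_loss(-12.0, -10.0, -11.0, -11.0))
```

Minimizing this loss over a dataset such as Intel/orca_dpo_pairs pushes the policy toward the preferred responses without training a separate reward model, which is what makes DPO attractive for small models like this one.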
