Kukedlc/NeuralAlgo-7B-DPO
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Context Length: 4k · Published: Mar 31, 2024 · License: apache-2.0 · Architecture: Transformer · Open Weights

Kukedlc/NeuralAlgo-7B-DPO is a 7-billion-parameter language model developed by Kukedlc and fine-tuned with Direct Preference Optimization (DPO). It targets general language understanding and generation tasks and supports a 4,096-token context window. The DPO fine-tuning is intended to align its outputs more closely with human preferences, making it suited to conversational AI and instruction-following applications.
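For running the model locally, a minimal sketch with the Hugging Face transformers library might look like the following. The prompt format is an assumption (many DPO fine-tunes ship a chat template in the tokenizer, but check the model card), and the FP8 quantization listed above refers to the hosted endpoint, so the sketch loads weights in fp16 instead.

```python
# Minimal sketch: loading Kukedlc/NeuralAlgo-7B-DPO with Hugging Face transformers.
# Assumes a GPU with enough memory for a 7B model in fp16; the chat template is an
# assumption -- fall back to a plain prompt if the tokenizer does not define one.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Kukedlc/NeuralAlgo-7B-DPO"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # hosted endpoint uses FP8; fp16 is a safe local default
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain Direct Preference Optimization in two sentences."}]
if tokenizer.chat_template is not None:
    # Use the tokenizer's chat template if one is provided.
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
else:
    # Otherwise fall back to a plain text prompt.
    input_ids = tokenizer(messages[0]["content"], return_tensors="pt").input_ids.to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```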
