Kukedlc/NeuralMaxime-7B-DPO
Text Generation
Concurrency Cost: 1
Model Size: 7B
Quant: FP8
Ctx Length: 4k
Published: Feb 19, 2024
License: apache-2.0
Architecture: Transformer
Open Weights

Kukedlc/NeuralMaxime-7B-DPO is a 7-billion-parameter language model developed by Kukedlc and fine-tuned with Direct Preference Optimization (DPO). The model is a merge of NeuralMonarch and AlphaMonarch, further aligned with DPO on Intel's Orca DPO pairs dataset. It is designed for general text generation tasks and offers a 4096-token context window.
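A minimal usage sketch, not taken from the model card: it assumes the checkpoint loads like a standard Mistral-7B-derived model through Hugging Face Transformers, and that fp16 weights fit on a single 24 GB GPU. The generation settings are illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Kukedlc/NeuralMaxime-7B-DPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumption: fp16 weights on a single GPU
    device_map="auto",
)

prompt = "Explain Direct Preference Optimization in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Keep prompt plus completion within the 4096-token context window.
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```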
