chargoddard/loyal-piano-m7-cdpo
chargoddard/loyal-piano-m7-cdpo is a 7-billion-parameter language model trained for one epoch with the cDPO method on the ultrafeedback_binarized dataset. It targets general language understanding and generation, with reported results on HellaSwag, ARC Challenge, Winogrande, and GSM8K. Its training leverages direct preference optimization to improve response quality.
Model Overview
chargoddard/loyal-piano-m7-cdpo is a 7-billion-parameter language model developed by chargoddard. It was trained for a single epoch using cDPO (conservative DPO, a label-smoothed variant of Direct Preference Optimization) on the ultrafeedback_binarized dataset. This training approach aims to align the model's outputs more closely with human preferences.
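To make the training objective concrete, here is a minimal sketch of the cDPO loss for a single preference pair. This is an illustrative implementation of the label-smoothed DPO objective, not code from this model's actual training run; the `beta` and `eps` values are placeholder assumptions.

```python
import math

def log_sigmoid(x: float) -> float:
    """Numerically stable log(sigmoid(x))."""
    if x >= 0:
        return -math.log1p(math.exp(-x))
    return x - math.log1p(math.exp(x))

def cdpo_loss(pi_logp_w: float, pi_logp_l: float,
              ref_logp_w: float, ref_logp_l: float,
              beta: float = 0.1, eps: float = 0.1) -> float:
    """Conservative DPO loss for one (chosen, rejected) pair.

    pi_logp_* : policy log-probability of the chosen (w) / rejected (l) response
    ref_logp_*: reference-model log-probability of the same responses
    beta      : DPO temperature (assumed value)
    eps       : label-smoothing weight; eps = 0 recovers standard DPO
    """
    # Implicit reward margin between chosen and rejected responses.
    margin = beta * ((pi_logp_w - ref_logp_w) - (pi_logp_l - ref_logp_l))
    # Smooth the preference label: treat it as flipped with probability eps.
    return -(1 - eps) * log_sigmoid(margin) - eps * log_sigmoid(-margin)
```

With `eps = 0` this is the ordinary DPO loss; the smoothing term keeps the gradient bounded when a preference label may be noisy, which is the motivation behind the conservative variant.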
Key Capabilities & Performance
Initial benchmark results indicate its performance across various reasoning and common sense tasks:
- HellaSwag: Achieved an accuracy of 0.6621 (acc_norm: 0.8525), demonstrating its ability in common sense reasoning.
- ARC Challenge: Scored an accuracy of 0.6348 (acc_norm: 0.6698), indicating its capacity for scientific question answering.
- Winogrande: Reached an accuracy of 0.7861, showcasing its proficiency in resolving pronoun ambiguity.
- GSM8K: Recorded an accuracy of 0.5694 on grade school math problems, suggesting foundational mathematical reasoning skills.
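The scores above report both `acc` and `acc_norm` because multiple-choice harnesses typically score answers two ways: by raw summed log-likelihood, and by length-normalized log-likelihood (which removes the bias toward shorter answer strings). A small sketch of that distinction, with toy numbers rather than real benchmark outputs:

```python
def pick_answer(choice_logprobs, choice_lengths, normalize=False):
    """Return the index of the best answer choice.

    choice_logprobs: summed log-likelihood of each candidate answer
    choice_lengths : token (or character) length of each candidate
    normalize=False -> raw score, as used for 'acc'
    normalize=True  -> per-token score, as used for 'acc_norm'
    """
    scores = [
        lp / n if normalize else lp
        for lp, n in zip(choice_logprobs, choice_lengths)
    ]
    return max(range(len(scores)), key=scores.__getitem__)
```

A longer correct answer can have a worse total log-likelihood but a better per-token one, which is why the two metrics can disagree on the same predictions.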
Use Cases
This model is suitable for applications requiring general language understanding and generation, particularly where its cDPO training makes preference-aligned outputs beneficial. Its 8192-token context length supports moderately long inputs for tasks like summarization, question answering, and content creation.
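When an input risks exceeding the 8192-token window, a common workaround for summarization-style tasks is to keep the head and tail of the document and drop the middle. A minimal sketch of that budgeting step; the `reserve_for_output` value is an assumption, and real token counts would come from the model's tokenizer:

```python
def truncate_middle(tokens, context_length=8192, reserve_for_output=512):
    """Fit a token sequence into the context window, reserving room for generation.

    Keeps the first and last portions of the input and drops the middle,
    since the opening and closing of a document usually carry the most signal.
    """
    budget = context_length - reserve_for_output
    if len(tokens) <= budget:
        return list(tokens)
    head = budget // 2
    tail = budget - head
    return list(tokens[:head]) + list(tokens[-tail:])
```

For question answering, a retrieval step that selects the most relevant passages is usually a better fit than blind truncation, but the budget arithmetic is the same.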