W-61/llama-3-8b-base-beta-dpo-ultrafeedback-8xh200
W-61/llama-3-8b-base-beta-dpo-ultrafeedback-8xh200 is an 8 billion parameter language model fine-tuned by W-61. It is based on the Llama 3 architecture and further optimized with Direct Preference Optimization (DPO) on the HuggingFaceH4/ultrafeedback_binarized dataset. The model targets conversational and instruction-following tasks and supports an 8192-token context length, making it suitable for applications that require robust response generation and an understanding of nuanced prompts.
Model Overview
W-61/llama-3-8b-base-beta-dpo-ultrafeedback-8xh200 is an 8 billion parameter language model developed by W-61. This model is a fine-tuned iteration of the W-61/llama-3-8b-base-sft-ultrachat-8xh200 base, specifically optimized using Direct Preference Optimization (DPO).
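A minimal loading sketch using the Hugging Face transformers library follows; the model id is taken from this card, while the prompt and generation settings are illustrative placeholders rather than recommended values:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "W-61/llama-3-8b-base-beta-dpo-ultrafeedback-8xh200"

# Load tokenizer and model; bfloat16 keeps the 8B weights within a single modern GPU.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Explain Direct Preference Optimization in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```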
Key Characteristics
- Architecture: Based on the Llama 3 family, providing a strong foundation for general language understanding and generation.
- Fine-tuning: Utilizes Direct Preference Optimization (DPO) on the HuggingFaceH4/ultrafeedback_binarized dataset, which typically enhances alignment with human preferences and improves instruction-following capabilities (see the loss sketch after this list).
- Context Length: Supports an 8192-token context window, allowing for processing and generating longer sequences of text.
- Performance Metrics: On its evaluation set, the model reached a loss of 0.7668 and a beta DPO gap mean of 15.9231 after DPO training.
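For reference, the DPO objective contrasts the policy's log-probabilities on chosen versus rejected responses against a frozen reference model. The sketch below is a generic PyTorch formulation of the standard DPO loss, not the exact training code behind this checkpoint; the beta value is illustrative, and the connection between the gap quantity here and the card's "beta DPO gap mean" metric is an assumption:

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO loss: -log sigmoid(beta * (chosen margin - rejected margin)).

    Each argument is a tensor of summed per-sequence log-probabilities.
    """
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # The reward gap (chosen - rejected) is the quantity tracked by gap-style metrics.
    gap = chosen_rewards - rejected_rewards
    loss = -F.logsigmoid(gap).mean()
    return loss, gap.mean()
```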
Training Details
The model was trained for 1 epoch with a learning rate of 5e-07 and an effective batch size of 128 (8 per device across 8 GPUs with 2 gradient accumulation steps). The optimizer was AdamW (adamw_torch) with a cosine learning rate schedule and a 0.1 warmup ratio.
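These hyperparameters map directly onto TRL's DPO training configuration. The snippet below is a hedged reconstruction, not the published training script: only the values reported in this card are set explicitly, everything else falls back to TRL defaults, and the bf16 flag is an assumption about mixed precision on the H200 hardware:

```python
from trl import DPOConfig

# Hyperparameters as reported in this card; all other settings are TRL defaults.
training_args = DPOConfig(
    output_dir="llama-3-8b-base-beta-dpo-ultrafeedback-8xh200",
    learning_rate=5e-7,
    per_device_train_batch_size=8,  # 8 per device x 8 GPUs x 2 accumulation = 128 total
    gradient_accumulation_steps=2,
    num_train_epochs=1,
    optim="adamw_torch",
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,                      # assumption: mixed precision on H200s
)
```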
Intended Use Cases
Benefiting from its DPO fine-tuning, this model is well-suited to applications that require high-quality, preference-aligned text generation, such as advanced chatbots, content creation, and complex instruction following.
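If the checkpoint's tokenizer ships a chat template (an assumption; the SFT base was trained on UltraChat, so one is likely present), conversational use follows the standard transformers pattern. This sketch reuses the model and tokenizer from the loading example above:

```python
# Reusing `model` and `tokenizer` from the loading example; the message is illustrative.
messages = [
    {"role": "user", "content": "Draft a friendly reminder email about tomorrow's meeting."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=300, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```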