W-61/llama-3-8b-base-new-dpo-ultrafeedback-4xh200-batch-128-q_t-0.43-s_star-0.4-20260429-230725
W-61/llama-3-8b-base-new-dpo-ultrafeedback-4xh200-batch-128-q_t-0.43-s_star-0.4-20260429-230725 is an 8 billion parameter language model developed by W-61, built on the Llama 3 architecture. Starting from an SFT checkpoint, it was further trained with DPO (Direct Preference Optimization) on the HuggingFaceH4/ultrafeedback_binarized dataset to better align its responses with human preferences and produce more helpful, harmless output. With an 8192-token context length, it is suited to conversational AI and instruction-following tasks where nuanced response generation matters.
Overview
This model, developed by W-61, is an 8 billion parameter language model based on the Llama 3 architecture. It is a fine-tuned version of W-61/llama-3-8b-base-sft-ultrachat-8xh200, specifically optimized using Direct Preference Optimization (DPO).
Key Capabilities
- Preference Alignment: Fine-tuned on the HuggingFaceH4/ultrafeedback_binarized dataset, so the model is explicitly optimized to generate responses that align with human preferences.
- Instruction Following: The DPO training process typically enhances a model's ability to follow complex instructions and produce desired output formats.
- Context Handling: Supports an 8192-token context length, allowing it to process and generate longer, more coherent interactions (see the usage sketch below).
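As an illustration of these capabilities, here is a minimal inference sketch using the standard Hugging Face transformers API. It assumes the checkpoint ships a Llama 3 chat template and that bf16 weights fit your hardware; the generation settings are illustrative, not published defaults for this model.

```python
# Minimal inference sketch. Assumptions: the checkpoint includes a chat
# template, and bf16 weights fit on the available device(s).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "W-61/llama-3-8b-base-new-dpo-ultrafeedback-4xh200-batch-128-q_t-0.43-s_star-0.4-20260429-230725"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 is appropriate here
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain DPO in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Illustrative sampling settings; tune for your use case.
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```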
Training Details
The model was trained with a learning rate of 5e-07 for 1 epoch at an effective batch size of 128 across 4 GPUs, using the AdamW optimizer and a cosine learning-rate scheduler with a 0.1 warmup ratio. Evaluation reports a validation loss of 0.6016 and DPO-specific metrics such as a margin mean of 110.8150, consistent with effective preference learning. A hedged configuration sketch follows.
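For orientation, the hyperparameters above map onto TRL's DPOConfig roughly as sketched below. This is not the published training code: the per-device batch size / gradient-accumulation split, the bf16 flag, and the dataset split name are assumptions; only the learning rate, epoch count, scheduler, warmup ratio, and the effective batch size of 128 come from the card.

```python
# Hedged training-config sketch using TRL's DPOTrainer. Values marked
# "from the card" are reported above; everything else is an assumption.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "W-61/llama-3-8b-base-sft-ultrachat-8xh200"  # SFT checkpoint named in the Overview
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# ultrafeedback_binarized provides chosen/rejected preference pairs.
dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

config = DPOConfig(
    output_dir="llama-3-8b-dpo-ultrafeedback",
    learning_rate=5e-7,             # from the card
    num_train_epochs=1,             # from the card
    per_device_train_batch_size=8,  # assumption: 8 x 4 accum x 4 GPUs = 128 effective
    gradient_accumulation_steps=4,  # assumption (only the 128 total is reported)
    lr_scheduler_type="cosine",     # from the card
    warmup_ratio=0.1,               # from the card
    bf16=True,                      # assumption
)

trainer = DPOTrainer(
    model=model,  # ref_model is omitted; TRL builds a frozen reference copy by default
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,  # recent TRL; older releases take tokenizer= instead
)
trainer.train()
```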
Intended Uses
This model is suited to applications that need robust conversational AI, strong instruction following, and human-aligned text generation, particularly in scenarios where feedback-driven optimization is beneficial.