jackf857/qwen3-8b-base-new-dpo-hh-harmless-4xh200-batch-64-q_t-0.45-s_star-0.4
The jackf857/qwen3-8b-base-new-dpo-hh-harmless-4xh200-batch-64-q_t-0.45-s_star-0.4 model is an 8-billion-parameter language model, fine-tuned from jackf857/qwen3-8b-base-sft-hh-harmless-4xh200-batch-64-20260417-214452 using Direct Preference Optimization (DPO) on the Anthropic/hh-rlhf dataset. The model specializes in generating harmless and helpful responses and supports a 32,768-token context length, making it suitable for applications that require safe, aligned conversational AI outputs.
Model Overview
This model, jackf857/qwen3-8b-base-new-dpo-hh-harmless-4xh200-batch-64-q_t-0.45-s_star-0.4, is an 8-billion-parameter language model. It is a fine-tuned variant of jackf857/qwen3-8b-base-sft-hh-harmless-4xh200-batch-64-20260417-214452, further aligned through Direct Preference Optimization (DPO).
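The model can be loaded like any causal language model on the Hugging Face Hub. The snippet below is a minimal usage sketch, assuming the standard transformers API and that the repository ships the usual config and tokenizer files; the prompt is illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jackf857/qwen3-8b-base-new-dpo-hh-harmless-4xh200-batch-64-q_t-0.45-s_star-0.4"

# Load the tokenizer and weights from the Hub; bfloat16 keeps the
# 8B parameters within a single modern GPU's memory.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "How can I politely decline an invitation?"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```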
Key Capabilities
- Harmless Response Generation: The model has been fine-tuned on the Anthropic/hh-rlhf dataset, which is designed to improve the harmlessness and helpfulness of AI outputs.
- Preference Alignment: DPO training aligns model behavior with human preferences for safety and non-toxicity (a minimal sketch of the DPO objective follows this list).
- Large Context Window: Supports a context length of 32,768 tokens, enabling the model to process and generate longer, more coherent responses.
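For reference, the sketch below writes out the standard DPO objective that this kind of training optimizes: the policy is pushed to widen the log-probability margin between chosen and rejected completions relative to a frozen reference (SFT) model. The function and tensor names are illustrative, not taken from the actual training code.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard DPO loss over a batch of preference pairs.

    Each argument is a tensor of per-sequence log-probabilities
    (summed over completion tokens) under the trained policy or the
    frozen reference model. beta limits drift from the reference.
    """
    # Log-ratios of policy to reference for each completion.
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps

    # The margin between chosen and rejected; DPO maximizes the
    # log-sigmoid of this quantity (the Bradley-Terry likelihood).
    margin = beta * (chosen_logratio - rejected_logratio)
    return -F.logsigmoid(margin).mean()
```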
Training Details
The model was trained for a single epoch with a learning rate of 5e-07 and a total batch size of 64 across 4 GPUs. Training optimized the DPO objective, reaching a final loss of 0.5698 and a mean DPO margin of 51.5411. The run used Transformers 4.51.0 and PyTorch 2.3.1+cu121.
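The exact training script is not published here. As an illustration only, a comparable run could be configured with the trl library roughly as follows (assuming a recent trl release where DPOTrainer takes a processing_class argument). Only the epoch count, learning rate, and per-GPU batch size reflect the numbers reported above; beta, precision, and the prompt/response preprocessing of hh-rlhf are assumptions.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "jackf857/qwen3-8b-base-sft-hh-harmless-4xh200-batch-64-20260417-214452"
model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Anthropic/hh-rlhf provides (chosen, rejected) conversation pairs;
# splitting them into prompt/chosen/rejected fields for DPOTrainer
# is omitted here for brevity.
dataset = load_dataset("Anthropic/hh-rlhf", split="train")

config = DPOConfig(
    output_dir="qwen3-8b-dpo-hh-harmless",
    num_train_epochs=1,              # reported: single epoch
    learning_rate=5e-7,              # reported learning rate
    per_device_train_batch_size=16,  # 16 x 4 GPUs = total batch size of 64
    beta=0.1,                        # assumed; not stated in the card
    bf16=True,                       # assumed precision
)

trainer = DPOTrainer(
    model=model,                     # a frozen reference copy is created
    args=config,                     # internally when ref_model is omitted
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```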