W-61/qwen3-8b-base-new-dpo-ultrafeedback-4xh200-batch-128-q_t-0.43-s_star-0.3-20260430-192039

Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 32k · Published: Apr 30, 2026 · Architecture: Transformer

W-61/qwen3-8b-base-new-dpo-ultrafeedback-4xh200-batch-128-q_t-0.43-s_star-0.3-20260430-192039 is an 8 billion parameter Qwen3-based language model, fine-tuned using Direct Preference Optimization (DPO) on the HuggingFaceH4/ultrafeedback_binarized dataset. The model is optimized to generate responses aligned with human preferences: during training it learned to assign markedly higher log-probabilities to chosen responses than to rejected ones. It is suitable for applications requiring high-quality, preference-aligned text generation.


Model Overview

This model, W-61/qwen3-8b-base-new-dpo-ultrafeedback-4xh200-batch-128-q_t-0.43-s_star-0.3-20260430-192039, is an 8 billion parameter language model built upon a Qwen3 base architecture. It has been fine-tuned using Direct Preference Optimization (DPO) on the HuggingFaceH4/ultrafeedback_binarized dataset, which is designed to align model outputs with human preferences.
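The model can be loaded with the standard Hugging Face transformers text-generation API. The snippet below is a minimal sketch, assuming the repository follows the usual Qwen3 causal-LM layout; the prompt and sampling parameters are illustrative and are not taken from this card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository ID from this model card.
model_id = "W-61/qwen3-8b-base-new-dpo-ultrafeedback-4xh200-batch-128-q_t-0.43-s_star-0.3-20260430-192039"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Illustrative prompt; the card does not specify a chat template,
# so plain text completion is used here.
prompt = "Explain Direct Preference Optimization in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```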

Key Characteristics

  • Base Model: Fine-tuned from jackf857/qwen3-8b-base-sft-ultrachat-4xh200-batch-128.
  • Optimization Method: Utilizes Direct Preference Optimization (DPO) for preference alignment.
  • Training Data: Trained on the HuggingFaceH4/ultrafeedback_binarized dataset.
  • Performance Metrics: Achieved a final validation loss of 0.6224; the model assigned a higher mean log-probability to chosen responses (-335.0146) than to rejected responses (-375.6061), as the DPO objective encourages (see the loss sketch after this list).
  • Context Length: Supports a context length of 32768 tokens.
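For reference, the DPO objective that produces the chosen/rejected log-probability gap reported above can be written directly in PyTorch. This is a minimal sketch of the standard DPO loss, not the run's actual training code, and the beta value is a placeholder since the card does not report it.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO loss over summed per-sequence log-probabilities.

    beta=0.1 is a placeholder; this card does not state the value
    used for this run.
    """
    # Log-ratio of the policy vs. the frozen reference model, per response.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the margin between chosen and rejected responses.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Illustrative dummy values only.
loss = dpo_loss(torch.tensor([-335.0]), torch.tensor([-375.6]),
                torch.tensor([-340.0]), torch.tensor([-370.0]))
print(loss)
```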

Training Details

The model was trained for 1 epoch with a learning rate of 5e-07 and a total training batch size of 128, using the AdamW optimizer with a cosine learning-rate schedule and a warmup ratio of 0.1. Training ran on 4 GPUs with gradient accumulation over 8 steps, which implies a per-device batch size of 4 (4 GPUs × 4 × 8 = 128).
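A training setup with these hyperparameters could look roughly like the TRL sketch below. This is an assumption-laden reconstruction, not the run's actual script: the per-device batch size of 4 is derived from the stated totals, the base model and dataset names come from this card, the dataset split is the conventional one for this dataset, and beta is a placeholder.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "jackf857/qwen3-8b-base-sft-ultrachat-4xh200-batch-128"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)
dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

config = DPOConfig(
    output_dir="qwen3-8b-dpo-ultrafeedback",
    learning_rate=5e-7,                 # from this card
    num_train_epochs=1,                 # from this card
    per_device_train_batch_size=4,      # derived: 4 GPUs x 4 x 8 accum = 128
    gradient_accumulation_steps=8,      # from this card
    lr_scheduler_type="cosine",         # from this card
    warmup_ratio=0.1,                   # from this card
    beta=0.1,                           # placeholder; not reported in the card
)

trainer = DPOTrainer(model=model, args=config, train_dataset=dataset,
                     processing_class=tokenizer)
trainer.train()
```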

Use Cases

This model is particularly well-suited for applications where generating text that aligns with human preferences and quality standards is crucial. Its DPO fine-tuning makes it effective for tasks where response quality and tone matter, such as assistant-style chatbots, content generation, and summarization where user satisfaction is a priority.