jackf857/qwen3-8b-base-orpo-ultrafeedback-4xh200-batch-128

Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Context Length: 32k · Published: Apr 29, 2026 · Architecture: Transformer

jackf857/qwen3-8b-base-orpo-ultrafeedback-4xh200-batch-128 is an 8-billion-parameter Qwen3-based language model, fine-tuned with ORPO (Odds Ratio Preference Optimization) on the HuggingFaceH4/ultrafeedback_binarized dataset. It is an instruction-tuned variant built on a previously supervised fine-tuned base model and is optimized to generate responses aligned with human preferences.


Model Overview

This model, jackf857/qwen3-8b-base-orpo-ultrafeedback-4xh200-batch-128, is an 8 billion parameter language model based on the Qwen3 architecture. It represents a further fine-tuned version of jackf857/qwen3-8b-base-sft-ultrachat-4xh200-batch-128.
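As a brief illustration, the model should load with the standard Hugging Face transformers API. This is a minimal sketch: the chat-template call and generation settings below are assumptions, not documented usage from the card.

```python
# Minimal inference sketch using Hugging Face transformers.
# The chat-template usage and generation settings are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jackf857/qwen3-8b-base-orpo-ultrafeedback-4xh200-batch-128"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [
    {"role": "user", "content": "Explain preference optimization in one paragraph."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```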

Key Capabilities

  • Preference Alignment: Fine-tuned with ORPO (Odds Ratio Preference Optimization) on the HuggingFaceH4/ultrafeedback_binarized dataset, so it is optimized to generate responses that align with human preferences (the ORPO objective is sketched after this list).
  • Instruction Following: As an instruction-tuned model, it is designed to follow user instructions effectively, building on its supervised fine-tuned predecessor.
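For context, the ORPO objective (Hong et al., 2024) augments the standard supervised fine-tuning loss with an odds-ratio penalty on preference pairs; the weight λ used for this particular run is not stated in the card:

$$
\mathcal{L}_{\mathrm{ORPO}} = \mathbb{E}_{(x,\,y_w,\,y_l)}\big[\mathcal{L}_{\mathrm{SFT}} + \lambda \cdot \mathcal{L}_{\mathrm{OR}}\big],
\qquad
\mathcal{L}_{\mathrm{OR}} = -\log \sigma\!\left(\log \frac{\mathrm{odds}_\theta(y_w \mid x)}{\mathrm{odds}_\theta(y_l \mid x)}\right)
$$

where $\mathrm{odds}_\theta(y \mid x) = \frac{P_\theta(y \mid x)}{1 - P_\theta(y \mid x)}$, $y_w$ is the chosen response, and $y_l$ the rejected one. Unlike DPO, this formulation needs no separate reference model.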

Training Details

The model was trained with a learning rate of 5e-07 for 1 epoch, using a total batch size of 128 across 4 GPUs. Evaluation reports a rewards accuracy of 0.6060, meaning the model's implicit reward ranks the chosen response above the rejected one in roughly 60.6% of evaluation pairs. Training used Transformers 4.51.0 and PyTorch 2.3.1+cu121.
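A run like this could be reproduced with TRL's ORPOTrainer. The sketch below is hypothetical: only the learning rate, epoch count, dataset, base model, and total batch size (128 across 4 GPUs) come from the card; the per-device/gradient-accumulation split and the beta (λ) value are assumptions.

```python
# Hypothetical reconstruction of the training setup with TRL's ORPOTrainer.
# Only lr, epochs, dataset, base model, and total batch size come from the
# model card; everything else below is an assumption.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base_model = "jackf857/qwen3-8b-base-sft-ultrachat-4xh200-batch-128"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# ultrafeedback_binarized provides prompt/chosen/rejected preference pairs.
dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

config = ORPOConfig(
    output_dir="qwen3-8b-base-orpo",
    learning_rate=5e-7,             # from the model card
    num_train_epochs=1,             # from the model card
    per_device_train_batch_size=4,  # assumed: 4 GPUs x 4 x 8 accum steps = 128 total
    gradient_accumulation_steps=8,  # assumed split of the 128 total batch size
    beta=0.1,                       # lambda weight on the odds-ratio term; assumed
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```

When launched with a distributed runner (e.g. `accelerate launch --num_processes 4`), the per-device batch size and accumulation steps above multiply out to the reported effective batch size of 128.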