jackf857/llama-3-8b-base-margin-dpo-hh-harmless-batch-size-64

Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 8k · Published: Apr 17, 2026 · Architecture: Transformer

The jackf857/llama-3-8b-base-margin-dpo-hh-harmless-batch-size-64 model is an 8-billion-parameter language model fine-tuned from W-61/llama-3-8b-base-sft-hh-harmless-4xh200 using Direct Preference Optimization (DPO) on the Anthropic/hh-rlhf dataset, with a focus on harmlessness. It is optimized for generating responses that align with human preferences for safety and non-toxicity.


Model Overview

This model is derived from the SFT checkpoint W-61/llama-3-8b-base-sft-hh-harmless-4xh200 and has undergone further fine-tuning with a Direct Preference Optimization (DPO) objective.

Key Characteristics

  • Base Model: Fine-tuned from a Llama 3 8B base model.
  • Training Data: Utilizes the Anthropic/hh-rlhf dataset, which is designed to improve helpfulness and harmlessness through human feedback.
  • Optimization Method: Employs a margin-based Direct Preference Optimization (DPO) technique, aiming to align model outputs with human preferences for safety and non-toxicity (see the loss sketch after this list).
  • Performance: Reached a final validation loss of 0.5259 and a mean DPO margin of 9.3649 on the evaluation set, i.e., the average gap between the implicit rewards assigned to chosen versus rejected responses, indicating preference alignment.
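
To make the objective concrete, below is a minimal PyTorch sketch of a margin-variant DPO loss. The exact formulation used for this model is not published; the `beta` and `margin` values are illustrative assumptions, and the reported "margin" metric corresponds to the mean reward gap computed at the end.

```python
import torch
import torch.nn.functional as F

def margin_dpo_loss(policy_chosen_logps, policy_rejected_logps,
                    ref_chosen_logps, ref_rejected_logps,
                    beta=0.1, margin=0.0):
    # Implicit rewards are the beta-scaled log-prob ratios of the
    # policy against the frozen reference model.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)

    # Standard DPO minimizes -log sigmoid(chosen - rejected); a margin
    # variant subtracts an offset so the chosen response must win by
    # at least `margin` before the loss saturates.
    logits = chosen_rewards - rejected_rewards - margin
    loss = -F.logsigmoid(logits).mean()

    # Mean reward gap between chosen and rejected responses; this is
    # the kind of quantity reported as the evaluation margin above.
    reward_margin = (chosen_rewards - rejected_rewards).mean()
    return loss, reward_margin
```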

Training Details

The model was trained for 1 epoch with a learning rate of 5e-07, a total batch size of 64, and the AdamW optimizer. Training ran on 4 GPUs with 2 gradient accumulation steps, implying a per-device batch size of 8 (4 × 8 × 2 = 64).
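
For reference, a minimal sketch of how such a run might be configured with Hugging Face TRL's DPOTrainer. The card does not name the training framework, so this is an assumption; it also assumes the hh-rlhf pairs have been preprocessed into prompt/chosen/rejected columns.

```python
# Hypothetical TRL configuration matching the reported hyperparameters.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "W-61/llama-3-8b-base-sft-hh-harmless-4xh200"  # SFT checkpoint named in the card
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# hh-rlhf examples must first be split into prompt/chosen/rejected
# columns for DPOTrainer (preprocessing omitted here for brevity).
dataset = load_dataset("Anthropic/hh-rlhf", data_dir="harmless-base")

config = DPOConfig(
    output_dir="llama-3-8b-margin-dpo-hh-harmless",
    num_train_epochs=1,
    learning_rate=5e-7,
    per_device_train_batch_size=8,   # 4 GPUs x 8 x 2 accumulation steps = 64 total
    gradient_accumulation_steps=2,
    optim="adamw_torch",
)

trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset["train"],
    processing_class=tokenizer,  # older TRL versions take tokenizer= instead
)
trainer.train()
```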

Intended Use Cases

This model is particularly suited for applications where generating harmless and ethically aligned text is a priority. Its DPO fine-tuning on the hh-rlhf dataset makes it a strong candidate for tasks requiring safe and non-toxic conversational AI or content generation.
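
As a starting point, here is a minimal text-generation sketch using the transformers library. The `Human:`/`Assistant:` prompt format is an assumption based on the hh-rlhf dialogue style the model was trained on, and the sampling settings are illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jackf857/llama-3-8b-base-margin-dpo-hh-harmless-batch-size-64"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the accelerate package.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "Human: How do I stay safe when hiking alone?\n\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```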