W-61/llama-3-8b-base-new-dpo-hh-harmless-4xh200-batch-64-q_t-0.45-s_star-0.4-eta-0.05

Text generation · Concurrency cost: 1 · Model size: 8B · Quantization: FP8 · Context length: 8k · Published: Apr 28, 2026 · Architecture: Transformer

W-61/llama-3-8b-base-new-dpo-hh-harmless-4xh200-batch-64-q_t-0.45-s_star-0.4-eta-0.05 is an 8-billion-parameter Llama 3 base model fine-tuned by W-61. It was fine-tuned with Direct Preference Optimization (DPO) on the Anthropic/hh-rlhf dataset to improve harmlessness and alignment with human preferences. It is intended for applications that need a robust, safety-aligned language model, particularly conversational AI where harmless responses are critical.


Model Overview

This model, W-61/llama-3-8b-base-new-dpo-hh-harmless-4xh200-batch-64-q_t-0.45-s_star-0.4-eta-0.05, is an 8-billion-parameter variant of the Llama 3 base architecture, fine-tuned with DPO to improve its safety and alignment.

Key Capabilities

  • Harmlessness Alignment: The model was fine-tuned using Direct Preference Optimization (DPO) on the Anthropic/hh-rlhf dataset, which is known for its focus on human feedback for helpfulness and harmlessness. This training aims to reduce the generation of harmful or undesirable content.
  • Base Model Enhancement: It builds upon W-61/llama-3-8b-base-sft-hh-harmless-4xh200, suggesting an iterative improvement in safety and performance characteristics.
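DPO trains directly on preference pairs from the dataset: for each prompt, the loss pushes the policy to prefer the chosen (harmless) response over the rejected one, relative to a frozen reference model. A minimal sketch of the per-pair loss, in pure Python for illustration (the actual training would use a full implementation such as TRL's `DPOTrainer`; the `beta` value here is a generic placeholder, not this model's setting):

```python
import math

def dpo_loss(policy_chosen_logp: float, policy_rejected_logp: float,
             ref_chosen_logp: float, ref_rejected_logp: float,
             beta: float = 0.1) -> float:
    """Per-pair DPO loss: -log sigmoid(beta * margin), where the margin is
    the policy's log-ratio advantage over the reference model on the
    chosen vs. rejected response."""
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    margin = beta * (chosen_ratio - rejected_ratio)
    # Numerically stable -log(sigmoid(margin)) = log(1 + exp(-margin))
    if margin >= 0:
        return math.log1p(math.exp(-margin))
    return -margin + math.log1p(math.exp(margin))

# The loss shrinks as the policy favors the chosen response more than
# the reference does, and grows when it favors the rejected one:
print(dpo_loss(-10.0, -12.0, -11.0, -11.0))  # policy favors chosen
print(dpo_loss(-12.0, -10.0, -11.0, -11.0))  # policy favors rejected
```

Minimizing this loss over the hh-rlhf pairs is what steers generations toward the harmless responses human raters preferred.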

Training Details

The fine-tuning process involved:

  • Learning Rate: 5e-07
  • Batch Size: Total training batch size of 64 (train_batch_size: 8 × gradient_accumulation_steps: 2 × num_devices: 4).
  • Optimizer: ADAMW_TORCH with default betas and epsilon.
  • Scheduler: Cosine learning rate scheduler with a 0.1 warmup ratio.
  • Epochs: Trained for 1 epoch.
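The hyperparameters above fit together arithmetically; a small sketch showing the effective batch size and the shape of the warmup-then-cosine schedule (the step counts and peak learning rate come from the list above; the schedule function is the standard formulation, not this run's exact implementation):

```python
import math

# Effective batch size: per-device batch * gradient accumulation * devices
train_batch_size = 8
gradient_accumulation_steps = 2
num_devices = 4
effective_batch = train_batch_size * gradient_accumulation_steps * num_devices
print(effective_batch)  # 64, matching the reported total batch size

def cosine_lr_with_warmup(step: int, total_steps: int,
                          peak_lr: float = 5e-7,
                          warmup_ratio: float = 0.1) -> float:
    """Linear warmup to peak_lr over warmup_ratio * total_steps,
    then cosine decay toward zero over the remaining steps."""
    warmup_steps = int(warmup_ratio * total_steps)
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

With a single epoch, the cosine phase spans the whole pass over the hh-rlhf pairs after the initial 10% warmup.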

Good For

  • Applications requiring a language model with enhanced safety and reduced harmful outputs.
  • Use cases where alignment with human preferences for harmlessness is a priority, such as chatbots or content-moderation tools.