W-61/llama-3-8b-base-new-dpo-hh-harmless-4xh200-batch-64-q_t-0.45-s_star-0.4-eta-0.3

Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quantization: FP8 · Context Length: 8K · Published: Apr 28, 2026 · Architecture: Transformer

W-61/llama-3-8b-base-new-dpo-hh-harmless-4xh200-batch-64-q_t-0.45-s_star-0.4-eta-0.3 is an 8-billion-parameter language model fine-tuned by W-61. It builds on a Llama-3-8B base variant, further optimized with Direct Preference Optimization (DPO) on the Anthropic/hh-rlhf dataset. The model is designed to generate helpful, harmless responses, making it suitable for applications that require safe, aligned AI interactions.


Overview

This model is an 8-billion-parameter language model developed by W-61. It is a fine-tuned iteration of W-61/llama-3-8b-base-sft-hh-harmless-4xh200, further aligned through Direct Preference Optimization (DPO).
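For context, standard DPO (Rafailov et al., 2023) tunes the policy $\pi_\theta$ against a frozen reference model $\pi_{\mathrm{ref}}$ (here, the SFT checkpoint) so that preferred responses gain probability mass over dispreferred ones:

$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}} \left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right) \right]
$$

where $(x, y_w, y_l)$ are a prompt with its chosen and rejected responses from hh-rlhf, and $\beta$ is the KL-penalty temperature. The q_t, s_star, and eta suffixes in the model name appear to be run-specific parameters of a modified objective that this card does not document.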

Key Characteristics

  • Base Model: Built on the Llama-3-8B base architecture.
  • Fine-tuning: Utilizes Direct Preference Optimization (DPO) for alignment.
  • Dataset: Fine-tuned on the Anthropic/hh-rlhf dataset, which focuses on helpful and harmless AI responses.
  • Context Length: Supports an 8192-token context window.
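
Below is a minimal usage sketch, assuming the repo id resolves on the Hugging Face Hub and a standard transformers/PyTorch install; the hh-rlhf-style prompt format is an assumption based on the fine-tuning data, not a documented template.

```python
# Minimal usage sketch (not an official example). Assumes the repo id below
# resolves on the Hugging Face Hub and that bf16 weights fit on one GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "W-61/llama-3-8b-base-new-dpo-hh-harmless-4xh200-batch-64-q_t-0.45-s_star-0.4-eta-0.3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# hh-rlhf transcripts use "\n\nHuman: ...\n\nAssistant:" turns, so the same
# layout is a reasonable (assumed) prompt template here.
prompt = "\n\nHuman: How can I politely decline an invitation?\n\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```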

Training Details

The model was trained with the following hyperparameters:

  • Learning Rate: 5e-07
  • Per-Device Batch Size: 8 (train), 8 (eval)
  • Gradient Accumulation Steps: 2
  • Total Train Batch Size: 64 (8 per device × 2 accumulation steps × 4 GPUs)
  • Optimizer: AdamW (adamw_torch)
  • LR Scheduler: Cosine with 0.1 warmup ratio
  • Epochs: 1
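
The card does not include the training script; the following is a hedged reconstruction of how these hyperparameters map onto a recent trl release (DPOConfig/DPOTrainer). The SFT starting checkpoint is named in the Overview; the DPO temperature beta is not reported, so trl's default is left in place.

```python
# Hedged reconstruction, not the authors' script. Assumes a recent trl
# release (DPOConfig + DPOTrainer) that accepts hh-rlhf's implicit-prompt
# chosen/rejected format. Hyperparameters mirror the list above.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

sft_id = "W-61/llama-3-8b-base-sft-hh-harmless-4xh200"  # DPO starts from the SFT checkpoint
model = AutoModelForCausalLM.from_pretrained(sft_id)
tokenizer = AutoTokenizer.from_pretrained(sft_id)

args = DPOConfig(
    output_dir="llama-3-8b-dpo-hh-harmless",
    learning_rate=5e-7,
    per_device_train_batch_size=8,   # x 2 accumulation x 4 GPUs = 64 effective
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    optim="adamw_torch",
    # beta (the DPO temperature) is not reported in this card; trl's default applies.
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=load_dataset("Anthropic/hh-rlhf", split="train"),
    processing_class=tokenizer,  # older trl versions take tokenizer= instead
)
trainer.train()
```

Launched data-parallel across 4 GPUs (e.g. `accelerate launch --num_processes 4 train.py`), the per-device batch of 8 with 2 accumulation steps reproduces the effective batch size of 64.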

Intended Use

This model is primarily intended for applications where generating harmless and helpful text is crucial; its DPO fine-tuning on the Anthropic/hh-rlhf dataset is meant to promote aligned outputs.
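
Since the model's alignment behavior traces back to the preference data, inspecting the pairs directly can help when evaluating fit for a given application; a short sketch, assuming the public Anthropic/hh-rlhf dataset:

```python
# Peek at the DPO preference pairs; assumes the public Anthropic/hh-rlhf
# dataset on the Hugging Face Hub (string columns: "chosen" and "rejected").
from datasets import load_dataset

ds = load_dataset("Anthropic/hh-rlhf", split="train")
pair = ds[0]
# Each column holds a full "\n\nHuman: ...\n\nAssistant: ..." transcript;
# the two transcripts share the prompt and differ in the final response.
print(pair["chosen"][:300])
print(pair["rejected"][:300])
```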