W-61/llama-3-8b-base-new-dpo-hh-harmless-4xh200-batch-64-q_t-0.45-s_star-0.4-eta-5
W-61/llama-3-8b-base-new-dpo-hh-harmless-4xh200-batch-64-q_t-0.45-s_star-0.4-eta-5 is an 8-billion-parameter Llama 3 language model fine-tuned by W-61. It is optimized for harmlessness via DPO training on the Anthropic/hh-rlhf dataset and supports an 8192-token context window, making it suited to applications that require safe, harmless text generation.
Model Overview
This model, developed by W-61, is an 8-billion-parameter Llama 3-based language model. It is a fine-tuned iteration of W-61/llama-3-8b-base-sft-hh-harmless-4xh200, optimized specifically for generating harmless content.
Key Characteristics
- Base Model: Llama 3, 8 billion parameters.
- Fine-tuning: Utilizes Direct Preference Optimization (DPO) on the Anthropic/hh-rlhf dataset.
- Context Length: Supports an 8192 token context window.
- Training Configuration: Trained with a learning rate of 5e-07, a total batch size of 64, and a cosine learning rate scheduler over 1 epoch.
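The DPO objective used in fine-tuning can be illustrated with a minimal sketch. The card does not state the DPO temperature, so `beta=0.1` (a common default) is an assumption here, as are the function and argument names; the inputs are per-sequence log-probabilities from the policy and the frozen reference (SFT) model:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one preference pair: -log sigmoid(beta * margin).

    beta=0.1 is an assumed default; the card does not specify it.
    """
    # Implicit reward = beta * log(pi_policy / pi_ref) for each response
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    margin = beta * (chosen_ratio - rejected_ratio)
    # Numerically plain logistic loss on the reward margin
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

The loss shrinks as the policy raises the likelihood of the chosen (harmless) response relative to the rejected one, anchored to the SFT reference so the model does not drift arbitrarily far from its starting distribution.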
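The cosine schedule in the training configuration can be sketched as a plain decay from the peak learning rate of 5e-07 over the single epoch; the function name and the assumptions of zero warmup and a zero floor are mine, not stated on the card:

```python
import math

def cosine_lr(step, total_steps, peak_lr=5e-07, min_lr=0.0):
    """Cosine learning-rate decay from peak_lr to min_lr over total_steps.

    Assumes no warmup phase and a floor of 0, which the card does not specify.
    """
    progress = step / total_steps
    return min_lr + 0.5 * (peak_lr - min_lr) * (1.0 + math.cos(math.pi * progress))
```

With a total batch size of 64, `total_steps` would be the number of preference pairs in the training split divided by 64, and the rate falls smoothly from 5e-07 at step 0 to the floor at the final step.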
Intended Use Cases
This model is suited to applications where the primary requirement is generating text that adheres to harmlessness guidelines, for example:
- Content moderation systems.
- Chatbots requiring safe and non-toxic responses.
- Applications focused on ethical AI interactions.