CharlesLi/llama_2_rlhf_safe_llama_3_70B_reflect_500_full

Text Generation · Model Size: 7B · Quantization: FP8 · Context Length: 4k · Published: Jan 13, 2025 · License: llama2 · Architecture: Transformer · Open Weights

CharlesLi/llama_2_rlhf_safe_llama_3_70B_reflect_500_full is a 7-billion-parameter language model fine-tuned from Meta's Llama-2-7b-chat-hf. It was trained on a dataset identified as "generator" and reached a loss of 0.7526 on its evaluation set. Built on the Llama 2 architecture, it is intended for general language generation tasks.


Model Overview

This model, llama_2_rlhf_safe_llama_3_70B_reflect_500_full, is a 7-billion-parameter language model derived from Meta's Llama-2-7b-chat-hf. It was fine-tuned on the "generator" dataset, which points to an optimization for text generation tasks.
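
Since the model is a Llama-2-7b-chat-hf derivative, it should load with the standard transformers APIs. The snippet below is a minimal sketch: the repository ID comes from this card, but the dtype, prompt, and generation settings are illustrative assumptions.

```python
# Minimal inference sketch (assumes the checkpoint loads like any
# Llama-2-7b-chat-hf derivative; settings below are illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CharlesLi/llama_2_rlhf_safe_llama_3_70B_reflect_500_full"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumption: half precision for a 7B model on one GPU
    device_map="auto",
)

prompt = "Write a short product description for a reusable water bottle."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```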

Key Training Details

The model was trained for a single epoch with the following hyperparameters:

  • Learning Rate: 2e-05
  • Batch Size: 4 (train), 4 (eval)
  • Total Batch Size: 32 (train, reached via gradient accumulation), 16 (eval)
  • Optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
  • Scheduler: Cosine with 0.1 warmup ratio
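
For readers who want to reproduce a comparable setup, the sketch below expresses these hyperparameters as Hugging Face TrainingArguments. The gradient-accumulation value is an assumption (8 steps on a single device yields the reported total train batch of 32; a multi-GPU run would split it differently), and the output path is hypothetical.

```python
# Sketch of the reported hyperparameters as TrainingArguments.
# Assumes a single-device run: 4 per-device x 8 accumulation = 32 total;
# a multi-GPU run would reach the same total with fewer accumulation steps.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama_2_rlhf_reflect_500",  # hypothetical path
    num_train_epochs=1,
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=8,  # assumption; see note above
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
)
```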

Performance

The model achieved a loss of 0.7526 on its evaluation set. Assuming the standard token-level cross-entropy objective, this corresponds to a perplexity of roughly exp(0.7526) ≈ 2.12.

Framework Versions

The training environment included:

  • Transformers 4.44.2
  • PyTorch 2.4.1+cu121
  • Datasets 3.0.0
  • Tokenizers 0.19.1
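
A quick way to confirm that a local environment matches these pins is to print the installed versions; a minimal sketch:

```python
# Print installed versions to compare against the pins above.
import datasets
import tokenizers
import torch
import transformers

print("Transformers:", transformers.__version__)  # expected 4.44.2
print("PyTorch:", torch.__version__)              # expected 2.4.1+cu121
print("Datasets:", datasets.__version__)          # expected 3.0.0
print("Tokenizers:", tokenizers.__version__)      # expected 0.19.1
```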

Potential Use Cases

Given its fine-tuning on a generator dataset, this model is likely suited to a range of text generation applications (a dialogue sketch follows the list), including but not limited to:

  • Content creation
  • Dialogue generation
  • Creative writing assistance
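
For the dialogue use case, a Llama-2-chat derivative typically expects the Llama 2 [INST] prompt format, which transformers can apply through the tokenizer's chat template. The sketch below assumes this model inherits that template from its base checkpoint; `tokenizer` and `model` are the objects loaded in the overview example.

```python
# Dialogue sketch using the chat template (assumes the Llama-2-chat
# template is inherited from the base checkpoint).
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Suggest three titles for a blog post about model fine-tuning."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=150)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```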