CharlesLi/llama_2_sky_o1_5_full

Text Generation · Model Size: 7B · Quantization: FP8 · Context Length: 4k · Concurrency Cost: 1 · Published: Jan 13, 2025 · License: llama2 · Architecture: Transformer (open weights)

CharlesLi/llama_2_sky_o1_5_full is a 7-billion-parameter language model fine-tuned from Meta's Llama-2-7b-chat-hf. It was fine-tuned on a generator dataset and reached a validation loss of 0.5805. The model is designed for generative tasks, building on the conversational capabilities of its Llama 2 base.


Model Overview

CharlesLi/llama_2_sky_o1_5_full is derived from the meta-llama/Llama-2-7b-chat-hf base model. Its fine-tuning on a generator dataset indicates that text generation is the primary intended use. Training concluded with a final validation loss of 0.5805.
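As a quick orientation, here is a minimal generation sketch using the Hugging Face transformers API. Only the model ID comes from this card; the prompt, device placement, and generation settings are illustrative assumptions, and loading in FP8 or another quantized form would need additional tooling not shown here.

```python
# Minimal generation sketch; prompt and settings are illustrative, not from the card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CharlesLi/llama_2_sky_o1_5_full"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain what fine-tuning a language model means."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Since the base model is Llama-2-7b-chat-hf, wrapping prompts in the Llama 2 chat template will generally give better conversational results than raw text prompts.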

Training Details

The model was trained using the following key hyperparameters:

  • Learning Rate: 2e-05
  • Batch Size: 4 per device (train and eval), with 2 gradient accumulation steps across 4 GPUs, for an effective train batch size of 32.
  • Optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08.
  • Scheduler: Cosine learning rate scheduler with a warmup ratio of 0.1.
  • Epochs: 1

The training utilized a multi-GPU setup with 4 devices. Validation loss decreased consistently over the training steps, reaching 0.5809 at the last logged step (the reported final value is 0.5805). A configuration sketch follows below.
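For readers who want to reproduce this configuration, the following is a minimal sketch of equivalent transformers TrainingArguments. Only the hyperparameters listed above come from the card; the output directory is hypothetical, and the dataset, model loading, and Trainer wiring are omitted because the card does not document them.

```python
# Sketch of TrainingArguments mirroring the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama_2_sky_o1_5_full",  # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,       # 4 per device x 2 steps x 4 GPUs = 32
    num_train_epochs=1,
    lr_scheduler_type="cosine",          # cosine schedule with 10% warmup
    warmup_ratio=0.1,
    adam_beta1=0.9,                      # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```

Launching this under torchrun or accelerate with 4 processes would match the multi-GPU setup described above.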

Framework Versions

This model was developed using:

  • Transformers 4.44.2
  • PyTorch 2.4.1+cu121
  • Datasets 3.0.0
  • Tokenizers 0.19.1
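To match this environment, the libraries can be pinned to the versions listed above. The short check below is an illustrative addition, not part of the card; it prints the installed versions so they can be compared against that list.

```python
# Print installed versions to compare against the pinned list above, e.g. after:
#   pip install transformers==4.44.2 torch==2.4.1 datasets==3.0.0 tokenizers==0.19.1
from importlib.metadata import version

for pkg in ("transformers", "torch", "datasets", "tokenizers"):
    print(f"{pkg}=={version(pkg)}")
```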