CharlesLi/llama_2_sky_safe_o1_4o_default_4000_1000_full

Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quantization: FP8 · Context Length: 4k · Published: Jan 13, 2025 · License: llama2 · Architecture: Transformer · Open Weights · Cold

CharlesLi/llama_2_sky_safe_o1_4o_default_4000_1000_full is a 7-billion-parameter variant of Llama-2-7b-chat-hf fine-tuned by CharlesLi. It uses the Llama 2 architecture with a context length of 4096 tokens. The model was fine-tuned on an unspecified dataset and reached a validation loss of 0.5409; the card does not describe its specific differentiators or primary use cases.


Model Overview

This model, llama_2_sky_safe_o1_4o_default_4000_1000_full, is a fine-tuned version of Meta's Llama-2-7b-chat-hf model. Developed by CharlesLi, it leverages the 7 billion parameter Llama 2 architecture.
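Since the card identifies the checkpoint only by its Hub id, a minimal loading sketch with the Hugging Face transformers library might look like the following. The dtype, device placement, and prompt format are assumptions (standard Llama 2 chat conventions), not documented settings.

```python
# Minimal sketch: loading the fine-tuned checkpoint with Hugging Face
# transformers. The repo id comes from the model name above; dtype and
# generation settings are illustrative assumptions, not documented values.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "CharlesLi/llama_2_sky_safe_o1_4o_default_4000_1000_full"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # assumption; the hosted variant is listed as FP8
    device_map="auto",
)

# Llama 2 chat models expect the [INST] ... [/INST] prompt format.
prompt = "[INST] Explain what a context window is. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```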

Training Details

The model was trained using the following hyperparameters:

  • Learning Rate: 2e-05
  • Batch Size: 4 (train), 4 (eval)
  • Gradient Accumulation Steps: 2
  • Optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
  • LR Scheduler: Cosine with 0.1 warmup ratio
  • Epochs: 1

During training, the model achieved a validation loss of 0.5409. Training ran on a multi-GPU setup with 4 devices, giving an effective train batch size of 32 (4 devices × 4 per device × 2 accumulation steps) and an effective eval batch size of 16 (4 devices × 4 per device).
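For reference, the reported hyperparameters map onto Hugging Face TrainingArguments roughly as sketched below. The output directory is a placeholder, and the dataset and Trainer wiring are omitted because the card does not specify them.

```python
# Sketch: the reported hyperparameters expressed as Hugging Face
# TrainingArguments. Dataset and Trainer setup are not shown because the
# model card does not specify them; the output_dir name is illustrative.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama_2_sky_safe_o1_4o_default_4000_1000_full",
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
# With 4 GPUs: effective train batch = 4 devices * 4 per device * 2 accumulation = 32,
# and effective eval batch = 4 devices * 4 per device = 16, matching the card.
```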

Limitations

The model card does not specify the fine-tuning dataset, nor does it detail the intended uses, specific capabilities, or known limitations of this fine-tuned variant. Further documentation would be needed to assess its strengths or ideal applications.