CharlesLi/llama_2_cot_simplest_code_math_1_full

Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Context Length: 4k · Published: Jan 20, 2025 · License: llama2 · Architecture: Transformer · Open Weights

CharlesLi/llama_2_cot_simplest_code_math_1_full is a 7-billion-parameter variant of meta-llama/Llama-2-7b-chat-hf, fine-tuned by CharlesLi for 1 epoch at a learning rate of 2e-05. It reached a loss of 0.7902 on its evaluation set; note that this figure reflects performance only on the unspecified "generator" dataset used for fine-tuning.


Model Overview

This model, llama_2_cot_simplest_code_math_1_full, is a fine-tuned version of the meta-llama/Llama-2-7b-chat-hf base model, developed by CharlesLi. It retains the Llama 2 architecture and its 7 billion parameters.
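Since this is a standard Llama-2-chat derivative, it should load with the usual transformers causal-LM API. Below is a minimal inference sketch; the repository ID is taken from the card title, and the dtype, device, and generation settings are assumptions rather than documented requirements.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository ID taken from the model card title.
repo_id = "CharlesLi/llama_2_cot_simplest_code_math_1_full"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # assumption: half precision to fit a 7B model on one GPU
    device_map="auto",
)

# Chain-of-thought style prompt, matching the "cot" hint in the model name.
prompt = "Solve step by step: what is 17 * 23?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```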

Training Details

The model was fine-tuned on a dataset identified only as "generator". Key training hyperparameters included:

  • Learning Rate: 2e-05
  • Batch Size: 4 (train), 4 (eval)
  • Epochs: 1
  • Optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
  • LR Scheduler: Cosine type with a warmup ratio of 0.1

During evaluation, the model achieved a loss of 0.7902.
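For reference, these hyperparameters map onto transformers.TrainingArguments roughly as follows. This is a plausible reconstruction, not the author's actual training script; output_dir and any setting not listed above are assumptions.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama_2_cot_simplest_code_math_1_full",  # assumed output name
    learning_rate=2e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    num_train_epochs=1,
    adam_beta1=0.9,           # Adam betas as reported on the card
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
)
```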

Framework Versions

Training was performed with:

  • Transformers 4.44.2
  • PyTorch 2.4.1+cu121
  • Datasets 3.0.0
  • Tokenizers 0.19.1
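Behavior was only verified under these versions. A quick runtime check (a convenience sketch, not part of the original card) can confirm the environment matches the pins:

```python
import transformers, torch, datasets, tokenizers

# Versions reported on the model card; mismatches may still work but are untested.
expected = {
    transformers: "4.44.2",
    torch: "2.4.1+cu121",
    datasets: "3.0.0",
    tokenizers: "0.19.1",
}
for module, version in expected.items():
    status = "OK" if module.__version__ == version else f"got {module.__version__}"
    print(f"{module.__name__} {version}: {status}")
```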

Intended Uses & Limitations

The model card does not document intended uses or limitations. Because the fine-tuning data is identified only as a "generator" dataset, users should evaluate the model on their own tasks before relying on it.