CharlesLi/llama_2_sky_safe_o1_4o_reflect_4000_500_full
Text generation · 7B parameters · FP8 quantization · 4k context length · Published: Jan 13, 2025 · License: llama2 · Architecture: Transformer · Open weights · Concurrency cost: 1
CharlesLi/llama_2_sky_safe_o1_4o_reflect_4000_500_full is a 7-billion-parameter language model fine-tuned from Meta's Llama-2-7b-chat-hf. Fine-tuning used a learning rate of 2e-05 with a cosine scheduler over one epoch, and the model reached a loss of 0.6698 on its evaluation set. Specific intended uses and limitations have not been documented, but the Llama 2 base suggests general conversational and text-generation capabilities.
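To illustrate the training configuration reported above, here is a minimal sketch of a cosine learning-rate decay starting from the card's peak rate of 2e-05. This is an assumption-laden illustration, not the actual training code: the real run may have used warmup or a non-zero floor, neither of which is stated in the card.

```python
import math

def cosine_lr(step, total_steps, peak_lr=2e-05):
    """Cosine learning-rate decay from peak_lr toward 0 over total_steps.

    Mirrors the schedule described in the card (lr=2e-05, cosine
    scheduler, one epoch); warmup is omitted for simplicity.
    """
    progress = step / total_steps
    return 0.5 * peak_lr * (1.0 + math.cos(math.pi * progress))

# The rate starts at the peak and decays smoothly to ~0 by the last step.
print(cosine_lr(0, 1000))     # 2e-05 at step 0
print(cosine_lr(500, 1000))   # ~1e-05 at the midpoint
print(cosine_lr(1000, 1000))  # ~0.0 at the final step
```

With one epoch of training, `total_steps` would simply be the number of optimizer steps in that single pass over the fine-tuning data.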