CharlesLi/llama_2_cot_simplest_alpaca_4_full
TEXT GENERATION
Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Published: Jan 20, 2025 · License: llama2 · Architecture: Transformer · Open Weights
CharlesLi/llama_2_cot_simplest_alpaca_4_full is a 7-billion-parameter language model fine-tuned from Meta's Llama-2-7b-chat-hf. It is adapted for tasks related to the 'generator dataset', suggesting a specialization in content generation or response formulation. Fine-tuning used a learning rate of 2e-05 over one epoch and reached a loss of 0.9264 on the evaluation set. Its most likely applications are conversational AI and text-generation systems where this targeted fine-tuning is useful.
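As a rough guide to what the 7B / FP8 figures above imply for deployment, the sketch below estimates weight-only memory at a few common precisions. This is back-of-the-envelope arithmetic, not a measured footprint: it counts model weights only and ignores activations, KV cache, and framework overhead.

```python
# Rough memory-footprint estimate for a 7B-parameter model at several
# weight precisions. Approximation only: counts weights, not activations,
# KV cache, or runtime overhead.
PARAMS = 7_000_000_000

BYTES_PER_PARAM = {
    "fp32": 4,
    "fp16": 2,
    "fp8": 1,   # the quantization listed for this checkpoint
}

def weight_gib(precision: str, params: int = PARAMS) -> float:
    """Approximate weight memory in GiB for the given precision."""
    return params * BYTES_PER_PARAM[precision] / 1024**3

for p in BYTES_PER_PARAM:
    print(f"{p}: ~{weight_gib(p):.1f} GiB")
# fp32: ~26.1 GiB, fp16: ~13.0 GiB, fp8: ~6.5 GiB
```

In other words, FP8 quantization brings the weight memory of a 7B model down to roughly 6.5 GiB, about half of FP16, which is what makes single-GPU serving of this checkpoint practical.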