CharlesLi/llama_2_alpaca_helpful
Text generation · Concurrency cost: 1 · Model size: 7B · Quantization: FP8 · Context length: 4k · Published: Dec 31, 2024 · License: llama2 · Architecture: Transformer · Open weights
The CharlesLi/llama_2_alpaca_helpful model is a 7 billion parameter language model fine-tuned from Meta's Llama-2-7b-chat-hf. It was fine-tuned for 50 steps with a learning rate of 0.0002 and a context length of 4096 tokens. It is a specialized variant of the Llama 2 architecture, intended for helpful, assistant-style conversation.
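The snippet below is a minimal sketch of loading the model locally with the Hugging Face transformers library, assuming the checkpoint is published on the Hub under the model ID above in the standard format. The FP8 quantization listed in the metadata refers to the hosted serving setup; FP16 is used here as a safe local default.

```python
# Minimal sketch: load CharlesLi/llama_2_alpaca_helpful with transformers.
# Assumes the weights are available on the Hugging Face Hub in standard format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CharlesLi/llama_2_alpaca_helpful"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # FP8 applies to the hosted endpoint; FP16 is a reasonable local default
    device_map="auto",
)

prompt = "Explain what fine-tuning a language model means in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```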