CharlesLi/llama_3_alpaca_helpful
TEXT GENERATION
Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 32k · Published: Dec 31, 2024 · License: llama3.1 · Architecture: Transformer

The CharlesLi/llama_3_alpaca_helpful model is an 8-billion-parameter language model fine-tuned from Meta's Llama-3.1-8B-Instruct. It was trained with a context length of 32,768 tokens and reached a final validation loss of 0.8488. It is intended for general helpful-assistant tasks.
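Since the model derives from Llama-3.1-8B-Instruct, prompts are expected to follow Llama 3.1's chat template. A minimal sketch of building such a prompt by hand (the special-token strings follow Meta's Llama 3.1 prompt format; in practice you would normally let the tokenizer's `apply_chat_template` do this):

```python
def format_llama3_prompt(system: str, user: str) -> str:
    """Build a single-turn prompt in the Llama 3.1 chat format.

    Each message is wrapped in role headers and terminated with
    <|eot_id|>; the trailing assistant header cues the model to
    generate its reply.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = format_llama3_prompt(
    "You are a helpful assistant.",
    "Summarize the Llama 3.1 architecture in one sentence.",
)
print(prompt)
```

The same string could then be passed to any inference backend serving this checkpoint; with the Hugging Face `transformers` library, `AutoTokenizer.from_pretrained("CharlesLi/llama_3_alpaca_helpful")` followed by `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` produces an equivalent prompt without hand-writing the special tokens.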
