yeonghwan123/Llama3-alpaca-tuned-and-merged
Text generation · Concurrency cost: 1 · Model size: 8B · Quantization: FP8 · Context length: 8k · Published: Feb 12, 2026 · Architecture: Transformer

The yeonghwan123/Llama3-alpaca-tuned-and-merged model is an 8-billion-parameter language model fine-tuned from the Llama 3 architecture. It has been instruction-tuned on an Alpaca-style dataset, which strengthens its ability to follow instructions and handle general-purpose conversational tasks. With a context length of 8,192 tokens, it is suited to applications that require robust instruction following.
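
The sketch below shows one plausible way to run the model locally with the Hugging Face `transformers` library. It assumes the repository ships standard `transformers`-compatible weights and tokenizer files, and the Alpaca-style prompt template and dtype choice are illustrative assumptions rather than documented facts about this checkpoint.

```python
# Minimal usage sketch, assuming the repo loads with standard `transformers` APIs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yeonghwan123/Llama3-alpaca-tuned-and-merged"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed local dtype; the hosted variant is served in FP8
    device_map="auto",
)

# Alpaca-style instruction prompt; the exact template used during tuning is an assumption.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a context window is in one sentence.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

Keep prompts and generated output within the 8k-token context window; longer inputs must be truncated or chunked before generation.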
