Seungyoun/llama-2-7b-alpaca-gpt4
Text Generation · Open Weights
Concurrency Cost: 1
Model Size: 7B
Quant: FP8
Ctx Length: 4k
Published: Apr 10, 2024
License: MIT
Architecture: Transformer

Seungyoun/llama-2-7b-alpaca-gpt4 is a 7-billion-parameter language model based on the LLaMA 2 architecture, fine-tuned on the Alpaca-GPT-4 dataset. The LLaMA 2-7B base model was fine-tuned with LoRA on the response portions of the data, and the adapter weights were then merged back into the base model. It is designed primarily for instruction-following tasks, leveraging the high-quality GPT-4-generated Alpaca-style instruction data to produce human-like responses.
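
Below is a minimal inference sketch using Hugging Face transformers. It assumes the merged checkpoint is hosted on the Hub under the repository id above and that it expects the standard Alpaca prompt template (a reasonable guess given the training data, but not confirmed by this card):

```python
# Minimal inference sketch. Assumptions: the merged checkpoint is hosted
# on the Hub as "Seungyoun/llama-2-7b-alpaca-gpt4" and follows the
# standard Alpaca prompt template; neither is confirmed by this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Seungyoun/llama-2-7b-alpaca-gpt4"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # a 7B model in fp16 fits a single ~16 GB GPU
    device_map="auto",
)

# Standard Alpaca instruction template (assumed, since the model was
# fine-tuned on Alpaca-GPT-4 style instruction/response pairs).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what LoRA fine-tuning is in one paragraph.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Decode only the tokens generated after the prompt.
response = tokenizer.decode(
    output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(response)
```

Because the LoRA adapter was merged into the base weights, no peft dependency is needed at inference time; the checkpoint loads like a plain LLaMA 2 model.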
