philschmid/llama-2-7b-instruction-generator
Text generation · Concurrency cost: 1 · Model size: 7B · Quantization: FP8 · Context length: 4k · Published: Jul 25, 2023 · License: OpenRAIL · Architecture: Transformer

The philschmid/llama-2-7b-instruction-generator is a 7 billion parameter Llama 2-based causal language model with a context length of 4096 tokens, fine-tuned by philschmid on a modified Dolly dataset in the Alpaca format. The model specializes in generating instructions from a given input text: its primary purpose is to synthesize instruction data from unsupervised text, which can then be used to fine-tune and personalize large language models.
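A minimal sketch of how such an instruction generator might be prompted: the text whose instruction we want to recover goes in the `### Input:` slot of an Alpaca-style template, and the model completes the `### Response:` section with a candidate instruction. The exact template wording below is an assumption based on the Alpaca format the model card mentions, not a verbatim copy of the model's training prompt.

```python
# Sketch: build an Alpaca-style prompt asking the model for an instruction
# that could have produced a given piece of text (instruction backtranslation).
# The template text is an assumption; adjust it to match the model's training.

def build_prompt(text: str) -> str:
    """Wrap a piece of unsupervised text in an Alpaca-style prompt."""
    return (
        "### Instruction:\n"
        "Use the Input below to create an instruction, which could have been "
        "used to generate the input using an LLM.\n\n"
        f"### Input:\n{text}\n\n"
        "### Response:\n"
    )

prompt = build_prompt("Paris is the capital of France and its largest city.")
print(prompt)
```

To actually generate an instruction, the prompt could be passed to the model, e.g. via the Hugging Face `transformers` text-generation pipeline (`pipeline("text-generation", model="philschmid/llama-2-7b-instruction-generator")`), keeping only the text the model appends after `### Response:`.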
