cmagganas/instruct-tuned-llama-7b-hf-alpaca_gpt4_5_000_samples
Text Generation · Concurrency cost: 1 · Model size: 7B · Quantization: FP8 · Context length: 4k · License: llama2 · Architecture: Transformer · Open weights · Cold

The cmagganas/instruct-tuned-llama-7b-hf-alpaca_gpt4_5_000_samples model is a 7-billion-parameter, LLaMA-2-based language model fine-tuned by cmagganas for instruction following on a 5,000-sample subset of the Alpaca-GPT-4 dataset. It uses 4-bit quantization and Flash Attention for efficient processing, and it generates coherent, contextually relevant responses to complex prompts, making it suitable for tasks such as text completion and question answering.
