HuggingFaceH4/zephyr-7b-gemma-sft-v0.1
Text generation · Concurrency cost: 1 · Model size: 8.5B · Quant: FP8 · Context length: 8k · Published: Mar 1, 2024 · License: other · Architecture: Transformer

HuggingFaceH4/zephyr-7b-gemma-sft-v0.1 is an 8.5-billion-parameter instruction-tuned causal language model, fine-tuned from Google's Gemma-7B. With an 8192-token context length, it is optimized for conversational AI and instruction-following tasks through supervised fine-tuning on the HuggingFaceH4/deita-10k-v0-sft dataset, and it produces coherent, contextually relevant responses to given instructions.
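For conversational use, the model expects its turns wrapped in Gemma's chat markers. The sketch below builds such a prompt by hand; the exact turn tokens are an assumption based on Gemma's published chat format, and in practice you would prefer `tokenizer.apply_chat_template` from the `transformers` library, which reads the template shipped with the model.

```python
# Sketch: assembling a Gemma-style chat prompt for
# HuggingFaceH4/zephyr-7b-gemma-sft-v0.1. The <start_of_turn> /
# <end_of_turn> markers are assumed from Gemma's public chat format;
# the tokenizer's own chat template is the authoritative source.

def build_prompt(messages):
    """Format a list of {"role", "content"} dicts into one prompt string."""
    parts = []
    for msg in messages:
        parts.append(
            f"<start_of_turn>{msg['role']}\n{msg['content']}<end_of_turn>\n"
        )
    # Open the model's turn so generation continues as the assistant.
    parts.append("<start_of_turn>model\n")
    return "".join(parts)

prompt = build_prompt(
    [{"role": "user", "content": "Summarize Gemma in one sentence."}]
)
print(prompt)
```

Feeding this string to the model (e.g. via a `transformers` text-generation pipeline) yields the assistant's reply; stop generation on `<end_of_turn>` to end the model's turn cleanly.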
