alignment-handbook/zephyr-7b-sft-full
Text Generation | Concurrency Cost: 1 | Model Size: 7B | Quant: FP8 | Ctx Length: 8k | Published: Nov 9, 2023 | License: apache-2.0 | Architecture: Transformer | Open Weights

alignment-handbook/zephyr-7b-sft-full is a 7-billion-parameter language model fine-tuned from Mistral-7B-v0.1. Developed by the alignment-handbook team, it was trained via supervised fine-tuning (SFT) on the HuggingFaceH4/ultrachat_200k dataset, reaching a validation loss of 0.9353. The model is suitable for applications that need a robust base with enhanced conversational capabilities.
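For reference, here is a minimal sketch of loading the model with Hugging Face transformers. The model ID comes from this page; the dtype, device placement, prompt, and sampling values are illustrative assumptions, not a recommended configuration.

```python
# Minimal sketch: loading zephyr-7b-sft-full with Hugging Face transformers.
# dtype/device and sampling values below are illustrative, not prescribed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "alignment-handbook/zephyr-7b-sft-full"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# The model is conversational, so format the prompt with the tokenizer's
# chat template rather than passing raw text.
messages = [{"role": "user", "content": "Explain supervised fine-tuning in one paragraph."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.9
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Because the model was fine-tuned on chat data, applying the chat template is the key step; sending unformatted text tends to degrade response quality.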


Popular Sampler Settings

The three most popular parameter combinations used by Featherless users for this model tune the sampler settings listed below (a sketch of passing these settings in a request follows the list).

temperature
top_p
top_k
frequency_penalty
presence_penalty
repetition_penalty
min_p
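
As an illustrative sketch (not official Featherless documentation) of how these settings map onto a request, assuming an OpenAI-compatible Chat Completions endpoint: temperature, top_p, frequency_penalty, and presence_penalty are standard parameters, while top_k, repetition_penalty, and min_p must be passed as extra fields whose support varies by server. The base URL, API key, and all values below are placeholders.

```python
# Illustrative sketch: sending these sampler settings to an assumed
# OpenAI-compatible endpoint. All values are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumption: OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="alignment-handbook/zephyr-7b-sft-full",
    messages=[{"role": "user", "content": "Write a haiku about fine-tuning."}],
    # Standard OpenAI-style sampler parameters:
    temperature=0.7,
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    # Non-standard parameters passed as extra fields; support varies by server:
    extra_body={"top_k": 40, "repetition_penalty": 1.1, "min_p": 0.05},
)
print(response.choices[0].message.content)
```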