CorticalStack/mistral-7b-openhermes-sft
Type: Text Generation
Concurrency Cost: 1
Model Size: 7B
Quant: FP8
Ctx Length: 8k
Published: Feb 17, 2024
License: apache-2.0
Architecture: Transformer
Availability: Open Weights, Warm
CorticalStack/mistral-7b-openhermes-sft is a 7-billion-parameter language model fine-tuned from unsloth/mistral-7b-bnb-4bit on the teknium/openhermes dataset. It was trained with supervised fine-tuning (SFT), which makes it suitable for general conversational and instruction-following tasks. Training used a maximum sequence length of 2048 tokens and a 4-bit-quantized base model for memory-efficient fine-tuning.
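For local use, a minimal loading sketch via the standard Hugging Face transformers API is shown below. This assumes the model weights are available on the Hugging Face Hub under this ID; the plain-text prompt format is also an assumption, since the card does not document a chat template.

```python
# A minimal usage sketch, assuming the model loads through the standard
# transformers API. The prompt format is an assumption; the card does
# not document a chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CorticalStack/mistral-7b-openhermes-sft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain supervised fine-tuning in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```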
Popular Sampler Settings
The top 3 parameter combinations used by Featherless users for this model each configure the following sampler settings: temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, and min_p. A sketch of how such settings can be passed to the API follows.
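The sketch below shows how these sampler parameters might be sent through an OpenAI-compatible client. The endpoint URL and the extra_body pass-through for non-standard samplers (top_k, repetition_penalty, min_p) are assumptions, and the values are illustrative placeholders, not the actual popular configurations from this page.

```python
# A hedged sketch, assuming an OpenAI-compatible Featherless endpoint.
# All sampler values below are placeholders, not the measured "top 3"
# configurations; top_k, repetition_penalty, and min_p are passed via
# extra_body, which only works if the server accepts them.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="CorticalStack/mistral-7b-openhermes-sft",
    messages=[{"role": "user", "content": "Write a haiku about autumn."}],
    temperature=0.7,           # placeholder value
    top_p=0.9,                 # placeholder value
    frequency_penalty=0.0,     # placeholder value
    presence_penalty=0.0,      # placeholder value
    extra_body={               # non-standard samplers, if supported
        "top_k": 40,
        "repetition_penalty": 1.1,
        "min_p": 0.05,
    },
)
print(response.choices[0].message.content)
```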