lodrick-the-lafted/Hermes-Instruct-7B-217K
Task: Text Generation
Concurrency Cost: 1
Model Size: 7B
Quant: FP8
Ctx Length: 8K
Published: Feb 20, 2024
License: apache-2.0
Architecture: Transformer

Hermes-Instruct-7B-217K is a 7 billion parameter instruction-tuned causal language model developed by lodrick-the-lafted, based on Mistral-7B-Instruct-v0.2. It was fine-tuned using 217K rows of the OpenHermes dataset in Alpaca format, leveraging Mistral's native 32K context and 1M rope theta. This model is optimized for following instructions and generating responses in both Mistral-Instruct and Alpaca prompt formats.
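Since the model accepts both prompt formats, here is a minimal sketch of how each one is assembled. The templates follow the widely used Mistral-Instruct and Alpaca conventions; the exact strings are assumptions, so verify them against the model's tokenizer configuration before relying on them.

```python
# Build prompts in the two formats the model card says are supported.
# These templates follow the common Mistral-Instruct and Alpaca conventions;
# confirm against the model's chat template before production use.

def mistral_instruct_prompt(user_message: str) -> str:
    """Mistral-Instruct format: the instruction is wrapped in [INST] tags."""
    return f"<s>[INST] {user_message} [/INST]"

def alpaca_prompt(instruction: str, user_input: str = "") -> str:
    """Alpaca format: labeled Instruction / Input / Response sections."""
    prompt = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
    )
    if user_input:
        prompt += f"### Input:\n{user_input}\n\n"
    return prompt + "### Response:\n"

print(mistral_instruct_prompt("Summarize the plot of Hamlet."))
print(alpaca_prompt("Summarize the plot of Hamlet."))
```

Either string can then be sent to the model as the raw prompt; the Alpaca variant adds an optional Input section for tasks that take separate context.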


Popular Sampler Settings

The top 3 parameter combinations used by Featherless users for this model draw on the following sampler settings:

temperature
top_p
top_k
frequency_penalty
presence_penalty
repetition_penalty
min_p
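The sampler settings above map directly onto the request body of an OpenAI-compatible completions API. A minimal sketch follows; the numeric values are illustrative placeholders, not the actual Featherless user configurations, and the endpoint payload shape is an assumption based on the common OpenAI-style schema.

```python
# Illustrative sampler configuration for an OpenAI-compatible completions
# request. The keys match the parameters listed above; the values are
# placeholders, not real user configs from Featherless.

sampler_config = {
    "temperature": 0.7,         # randomness of token sampling
    "top_p": 0.9,               # nucleus sampling probability cutoff
    "top_k": 40,                # restrict sampling to the k most likely tokens
    "frequency_penalty": 0.0,   # penalize tokens by how often they occurred
    "presence_penalty": 0.0,    # penalize tokens that occurred at all
    "repetition_penalty": 1.1,  # multiplicative penalty on repeated tokens
    "min_p": 0.05,              # drop tokens below this fraction of the top prob
}

payload = {
    "model": "lodrick-the-lafted/Hermes-Instruct-7B-217K",
    "prompt": "<s>[INST] Write a haiku about autumn. [/INST]",
    "max_tokens": 128,
    **sampler_config,
}
print(sorted(payload))
```

In practice, lower `temperature` and `min_p` values favor deterministic instruction-following, while higher values increase output diversity.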