Weyaxi/HelpSteer-filtered-neural-chat-7b-v3-1-7B
TEXT GENERATION

Concurrency Cost: 1
Model Size: 7B
Quantization: FP8
Context Length: 8k
Published: Nov 28, 2023
License: apache-2.0
Architecture: Transformer
Open Weights · Cold

Weyaxi/HelpSteer-filtered-neural-chat-7b-v3-1-7B is a 7-billion-parameter language model created by Weyaxi by merging the Weyaxi/HelpSteer-filtered-7B-Lora adapter into Intel/neural-chat-7b-v3-1. The model targets general conversational AI tasks, combining neural-chat's instruction tuning with the HelpSteer-filtered adapter to improve response quality. It supports a context length of 8192 tokens, suiting applications that need moderate conversational depth.
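For local experimentation, a minimal loading sketch with Hugging Face transformers is shown below. The prompt format is an assumption carried over from the Intel/neural-chat base model's documented style, and the precision and device settings are illustrative, not prescribed by this model card.

```python
# Minimal local-usage sketch (assumption: the repo may not ship a chat
# template, so the prompt follows the Intel/neural-chat "### User:/###
# Assistant:" convention instead of tokenizer.apply_chat_template).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Weyaxi/HelpSteer-filtered-neural-chat-7b-v3-1-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "### User:\nExplain model merging in one paragraph.\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# The context window is 8192 tokens; keep prompt + max_new_tokens within it.
output = model.generate(**inputs, max_new_tokens=256, do_sample=True)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:],
                       skip_special_tokens=True))
```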


Popular Sampler Settings

Top 3 parameter combinations used by Featherless users for this model. Each config specifies the following sampler parameters; a request sketch using them follows the list.

temperature
top_p
top_k
frequency_penalty
presence_penalty
repetition_penalty
min_p
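The sketch below shows where each of these parameters goes in a request. It is illustrative only: the OpenAI-compatible base URL is an assumption about the Featherless endpoint, server-side support for the non-standard parameters is not guaranteed, and every sampler value is a placeholder rather than one of the actual top-3 configs.

```python
# Illustrative request sketch. Assumptions: the base URL below, support for
# the non-standard sampler parameters, and all values (placeholders only).
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed Featherless endpoint
    api_key="YOUR_FEATHERLESS_API_KEY",
)

response = client.chat.completions.create(
    model="Weyaxi/HelpSteer-filtered-neural-chat-7b-v3-1-7B",
    messages=[{"role": "user", "content": "Write a haiku about merging models."}],
    # Standard OpenAI-schema sampler parameters (placeholder values):
    temperature=0.7,
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    # Non-standard parameters go through extra_body so the request stays
    # schema-valid; an OpenAI-compatible server may or may not honor them:
    extra_body={"top_k": 40, "repetition_penalty": 1.1, "min_p": 0.05},
)
print(response.choices[0].message.content)
```

Routing top_k, repetition_penalty, and min_p through extra_body keeps the request valid against the standard OpenAI schema while still letting a compatible server pick them up if it supports them.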