DrNicefellow/Mistral-7-from-Mixtral-8x7B-v0.1
Text generation · Concurrency cost: 1 · Model size: 7B · Quant: FP8 · Context length: 8k · License: apache-2.0 · Architecture: Transformer · Open weights · Cold
DrNicefellow/Mistral-7-from-Mixtral-8x7B-v0.1 is an experimental 7 billion parameter language model derived from mistralai/Mixtral-8x7B-v0.1. It is constructed by extracting a single expert (the 7th) from each Mixture of Experts (MoE) layer of the base Mixtral model and assembling those experts into a dense Mistral-style network. The model is intended for general language understanding and generation tasks, though its performance is expected to fall short of the original Mistral-7B because of this experimental extraction method.
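The sketch below illustrates the general idea under stated assumptions: copy the embedding, attention, and normalization weights from Mixtral-8x7B-v0.1 and, for each decoder layer, keep only one expert's feed-forward weights in a dense Mistral-style MLP. The expert index, the w1/w2/w3 → gate_proj/down_proj/up_proj mapping, and the Hugging Face transformers module names are assumptions for illustration, not the author's published procedure.

```python
# Minimal sketch: collapse Mixtral-8x7B into a dense Mistral-7B-shaped model by
# keeping one expert per MoE layer. Requires enough memory to hold both models.
import torch
from transformers import AutoModelForCausalLM, MistralConfig, MistralForCausalLM

EXPERT_INDEX = 7  # which expert to keep from each layer (illustrative choice)

moe = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-v0.1", torch_dtype=torch.bfloat16
)

# Empty dense model with matching dimensions.
dense_cfg = MistralConfig(
    hidden_size=moe.config.hidden_size,
    intermediate_size=moe.config.intermediate_size,
    num_hidden_layers=moe.config.num_hidden_layers,
    num_attention_heads=moe.config.num_attention_heads,
    num_key_value_heads=moe.config.num_key_value_heads,
    vocab_size=moe.config.vocab_size,
    rms_norm_eps=moe.config.rms_norm_eps,
)
dense = MistralForCausalLM(dense_cfg).to(torch.bfloat16)

with torch.no_grad():
    # Shared (non-MoE) weights copy over directly.
    dense.model.embed_tokens.load_state_dict(moe.model.embed_tokens.state_dict())
    dense.model.norm.load_state_dict(moe.model.norm.state_dict())
    dense.lm_head.load_state_dict(moe.lm_head.state_dict())
    for src, dst in zip(moe.model.layers, dense.model.layers):
        dst.self_attn.load_state_dict(src.self_attn.state_dict())
        dst.input_layernorm.load_state_dict(src.input_layernorm.state_dict())
        dst.post_attention_layernorm.load_state_dict(
            src.post_attention_layernorm.state_dict()
        )
        # Keep only the chosen expert's feed-forward weights.
        expert = src.block_sparse_moe.experts[EXPERT_INDEX]
        dst.mlp.gate_proj.weight.copy_(expert.w1.weight)  # gate projection
        dst.mlp.down_proj.weight.copy_(expert.w2.weight)  # down projection
        dst.mlp.up_proj.weight.copy_(expert.w3.weight)    # up projection

dense.save_pretrained("Mistral-7B-from-Mixtral-expert-7")
```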
Popular Sampler Settings
The three parameter combinations most commonly used by Featherless users for this model. Each configuration specifies the following sampler parameters; an example of passing such settings through the API is sketched after the list.
temperature
top_p
top_k
frequency_penalty
presence_penalty
repetition_penalty
min_p
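A hedged example of supplying sampler settings like those listed above when querying the model through an OpenAI-compatible completions endpoint (Featherless exposes one at https://api.featherless.ai/v1). The values shown are placeholders, not the actual top configurations, and the non-standard fields (top_k, repetition_penalty, min_p) are passed via extra_body on the assumption that the backend accepts them.

```python
# Illustrative call with explicit sampler settings via the openai Python SDK.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",
    api_key="YOUR_FEATHERLESS_API_KEY",  # placeholder key
)

response = client.completions.create(
    model="DrNicefellow/Mistral-7-from-Mixtral-8x7B-v0.1",
    prompt="A mixture of experts (MoE) layer works by",
    max_tokens=128,
    temperature=0.7,          # placeholder values, not a recommended config
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    extra_body={              # fields outside the OpenAI spec, if supported
        "top_k": 40,
        "repetition_penalty": 1.1,
        "min_p": 0.05,
    },
)
print(response.choices[0].text)
```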