CorticalStack/mistral-7b-tak-stack-dpo
TEXT GENERATION | Concurrency Cost: 1 | Model Size: 7B | Quant: FP8 | Ctx Length: 8k | Published: Feb 28, 2024 | License: apache-2.0 | Architecture: Transformer | Open Weights | Cold
CorticalStack/mistral-7b-tak-stack-dpo is a 7-billion-parameter language model fine-tuned from Mistral-7B-v0.1 with Direct Preference Optimization (DPO) on the CorticalStack/tak-stack-dpo dataset. The model supports a context length of 8192 tokens. DPO fine-tuning on preference pairs is intended to improve response quality and alignment with human preferences.
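For local inference, the checkpoint can be loaded like any other Mistral-7B model. The sketch below uses the Hugging Face transformers API; the repo id comes from this page, while the dtype, device placement, and prompt are illustrative assumptions.

```python
# Minimal sketch: loading the model with Hugging Face transformers.
# Assumes transformers, torch, and accelerate are installed and the
# checkpoint is available on the Hub under this repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CorticalStack/mistral-7b-tak-stack-dpo"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision fits a 7B model on a single 24 GB GPU
    device_map="auto",          # requires accelerate; places layers automatically
)

prompt = "Explain Direct Preference Optimization in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```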
Popular Sampler Settings
The three parameter combinations most commonly used by Featherless users for this model each set the sampler parameters listed below; a usage sketch follows the list.
temperature: scales the next-token distribution; lower values make output more deterministic
top_p: nucleus sampling; samples only from the smallest token set whose cumulative probability exceeds p
top_k: restricts sampling to the k most likely tokens
frequency_penalty: penalizes tokens in proportion to how often they have already appeared
presence_penalty: penalizes tokens that have appeared at all, encouraging new topics
repetition_penalty: multiplicative penalty on previously generated tokens
min_p: discards tokens whose probability falls below min_p times that of the most likely token
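The actual top-3 value combinations are shown interactively on the Featherless page and are not reproduced here. As a usage sketch, the snippet below shows how such samplers can be passed through an OpenAI-compatible endpoint; the base URL, the extra_body extension fields, and all values are assumptions, not recorded configurations.

```python
# Minimal sketch: passing sampler settings through an OpenAI-compatible
# endpoint. Base URL, extra_body extension fields, and all values are
# placeholders, not the top-3 configs tracked by Featherless.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_FEATHERLESS_API_KEY",
)

response = client.chat.completions.create(
    model="CorticalStack/mistral-7b-tak-stack-dpo",
    messages=[{"role": "user", "content": "Summarize DPO in two sentences."}],
    temperature=0.7,           # placeholder value
    top_p=0.9,                 # placeholder value
    frequency_penalty=0.0,
    presence_penalty=0.0,
    extra_body={               # non-standard samplers some servers accept as extensions
        "top_k": 40,
        "repetition_penalty": 1.1,
        "min_p": 0.05,
    },
)
print(response.choices[0].message.content)
```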