pinkyponky/SOLAR-10.7B-dpo-instruct-tuned-v0.1
Text Generation
Concurrency Cost: 1
Model Size: 10.7B
Quant: FP8
Ctx Length: 4k
Published: Jan 10, 2024
License: cc-by-nc-4.0
Architecture: Transformer
Tags: Open Weights · Warm

pinkyponky/SOLAR-10.7B-dpo-instruct-tuned-v0.1 is a 10.7-billion-parameter instruction-tuned causal language model based on Upstage's SOLAR-10.7B-v1.0, fine-tuned with Direct Preference Optimization (DPO) to improve the quality of its instruction-following outputs. With a 4096-token context window, it is suited to general conversational and text-generation applications.
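The model can be run like any other Hugging Face causal language model. The sketch below assumes the standard transformers workflow and the "### User: / ### Assistant:" prompt format used by the base SOLAR instruct models; the prompt template and generation settings are assumptions for illustration, as the model card does not specify them.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pinkyponky/SOLAR-10.7B-dpo-instruct-tuned-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the accelerate package; fp16 weights for a
# 10.7B model need roughly 21 GB of GPU memory.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Assumed prompt format, following the base SOLAR-10.7B instruct models;
# verify against this model's tokenizer config before relying on it.
prompt = "### User:\nExplain DPO fine-tuning in two sentences.\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Prompt plus completion must fit within the 4096-token context window.
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```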


Popular Sampler Settings

The top 3 parameter combinations used by Featherless users for this model each tune the following sampler parameters; a sketch of how to pass them to the API follows the list.

temperature
top_p
top_k
frequency_penalty
presence_penalty
repetition_penalty
min_p
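As a rough guide, the sketch below shows how these parameters might be supplied to an OpenAI-compatible chat completions endpoint such as the one Featherless provides. The base URL and every sampler value shown here are illustrative assumptions, not one of the actual top-3 configurations.

```python
import requests

# Assumed OpenAI-compatible endpoint; check your Featherless dashboard
# for the exact base URL and use your own API key.
API_URL = "https://api.featherless.ai/v1/chat/completions"
API_KEY = "YOUR_API_KEY"

payload = {
    "model": "pinkyponky/SOLAR-10.7B-dpo-instruct-tuned-v0.1",
    "messages": [{"role": "user", "content": "Write a haiku about autumn."}],
    # Illustrative sampler values only; substitute one of the popular
    # configurations listed above.
    "temperature": 0.7,
    "top_p": 0.9,
    "top_k": 40,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "repetition_penalty": 1.1,
    "min_p": 0.05,
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```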