bhavinjawade/SOLAR-10B-OrcaDPO-Jawade
Text Generation · Open Weights · Warm
Concurrency Cost: 1
Model Size: 10.7B
Quant: FP8
Ctx Length: 4K
Published: Jan 6, 2024
License: MIT
Architecture: Transformer
bhavinjawade/SOLAR-10B-OrcaDPO-Jawade is a 10.7-billion-parameter instruction-tuned causal language model, fine-tuned by bhavinjawade from Upstage's SOLAR-10.7B-Instruct-v1.0. It was trained with LoRA-based DPO on Intel's Orca DPO pairs dataset and shows slight improvements over its base model on OpenLLM Leaderboard benchmarks. The model targets general instruction-following tasks and offers enhanced conversational capabilities.
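For local use, the model can be loaded like any Hugging Face causal LM. The following is a minimal sketch, assuming the repo follows the standard transformers layout and that the tokenizer ships a chat template inherited from the base model; adjust dtype and device placement to your hardware.

# Minimal sketch: loading the model with Hugging Face transformers.
# Assumes the standard causal-LM repo layout and a chat template
# inherited from SOLAR-10.7B-Instruct-v1.0.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bhavinjawade/SOLAR-10B-OrcaDPO-Jawade"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # ~21 GB of weights at fp16; quantize if VRAM is tight
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain LoRA fine-tuning in one paragraph."}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))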
Popular Sampler Settings
Top 3 parameter combinations used by Featherless users for this model.
temperature: –
top_p: –
top_k: –
frequency_penalty: –
presence_penalty: –
repetition_penalty: –
min_p: –
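These sampler parameters are passed per request when querying the model through Featherless. The sketch below assumes the OpenAI-compatible endpoint at api.featherless.ai/v1; the sampler values shown are illustrative placeholders, not the user configurations from the list above, and the non-standard parameters (top_k, min_p, repetition_penalty) are sent via extra_body on the assumption the server accepts them.

# Illustrative sketch: sending sampler settings with a chat request.
# Endpoint URL and all sampler values are assumptions/placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="bhavinjawade/SOLAR-10B-OrcaDPO-Jawade",
    messages=[{"role": "user", "content": "Summarize the SOLAR architecture."}],
    temperature=0.7,           # placeholder values; tune per task
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    # Parameters outside the OpenAI schema, if the backend supports them:
    extra_body={"top_k": 40, "min_p": 0.05, "repetition_penalty": 1.1},
)
print(response.choices[0].message.content)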