rwitz2/go-bruins-v2.1.1
Text generation · Concurrency cost: 1 · Model size: 7B · Quant: FP8 · Context length: 8K · Published: Dec 14, 2023 · License: CC · Architecture: Transformer
rwitz2/go-bruins-v2.1.1 is a 7-billion-parameter language model, DPO-trained on the Intel/orca_dpo_pairs dataset and derived from jan-hq/trinity-v1. At release it was noted as a top performer on leaderboards, distinguished by this fine-tuning approach. It is designed for general language tasks, with the DPO training intended to improve instruction following and response quality.
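For context, DPO (Direct Preference Optimization) fine-tunes a model directly on preference pairs such as those in Intel/orca_dpo_pairs, with no separate reward model. A standard form of the DPO objective (the general formulation, not details of this model's specific training run) is:

```latex
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}})
  = -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}}
    \left[ \log \sigma\!\left(
      \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
      - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
    \right) \right]
```

Here y_w and y_l are the preferred and rejected responses for prompt x, π_ref is the reference model before DPO (here, jan-hq/trinity-v1), and β controls how far the trained policy π_θ may drift from the reference.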
Popular Sampler Settings
The three most popular parameter combinations used by Featherless users for this model cover the following sampler parameters:
- temperature
- top_p
- top_k
- frequency_penalty
- presence_penalty
- repetition_penalty
- min_p
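As a rough sketch of how these parameters are used in practice, the snippet below assembles a request body for an OpenAI-compatible chat-completions endpoint, which is how hosted inference services typically expose models like this one. The endpoint details and the specific values are assumptions for illustration, not the leaderboard configs shown in the tabs; check the Featherless documentation for which parameters this model actually honors.

```python
def build_completion_payload(prompt: str) -> dict:
    """Assemble a chat-completion request using the sampler
    parameters listed above. Values are illustrative defaults,
    not a recommended configuration."""
    return {
        "model": "rwitz2/go-bruins-v2.1.1",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,         # randomness of token sampling
        "top_p": 0.9,               # nucleus sampling: keep top 90% of probability mass
        "top_k": 40,                # keep only the 40 most likely tokens
        "frequency_penalty": 0.0,   # penalize tokens by how often they appear
        "presence_penalty": 0.0,    # penalize tokens that have appeared at all
        "repetition_penalty": 1.1,  # multiplicatively down-weight repeats
        "min_p": 0.05,              # drop tokens below 5% of the top token's probability
    }

payload = build_completion_payload("Summarize DPO training in one sentence.")
print(sorted(payload.keys()))
```

This payload would then be POSTed (with an API key) to the service's chat-completions URL; swapping in each of the three popular configurations is just a matter of changing the sampler values.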