h2m/mhm-7b-v1.3-DPO-1
Text generation
Concurrency cost: 1
Model size: 7B
Quantization: FP8
Context length: 8K
Published: Jan 17, 2024
License: apache-2.0
Architecture: Transformer
Availability: open weights
h2m/mhm-7b-v1.3-DPO-1 is a 7-billion-parameter language model developed by h2m, fine-tuned with DPO (Direct Preference Optimization) on the Intel/orca_dpo_pairs dataset. Built on the Mistral architecture, it is the result of multiple merges involving seven different models from the Open LLM Leaderboard. It offers an 8192-token context length and is primarily an experimental model for general language tasks.
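A minimal sketch of running the model locally with Hugging Face transformers, assuming the weights are published on the Hub under the same id; the dtype and prompt below are illustrative choices, not documented requirements.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "h2m/mhm-7b-v1.3-DPO-1"  # assumed to match the Hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # 7B weights in fp16 need roughly 14 GB of VRAM
    device_map="auto",
)

prompt = "Explain what DPO fine-tuning is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```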
Popular Sampler Settings
The three most popular parameter combinations used by Featherless users for this model cover the following samplers (see the sketch after this list for how they might be applied):
- temperature
- top_p
- top_k
- frequency_penalty
- presence_penalty
- repetition_penalty
- min_p
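A hedged sketch of passing these samplers through an OpenAI-compatible endpoint. The base URL is an assumption about the Featherless API, and the specific values are illustrative placeholders, not the configs from the tabs above; non-standard samplers (top_k, repetition_penalty, min_p) are sent via extra_body, which only works if the backend accepts them.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="h2m/mhm-7b-v1.3-DPO-1",
    messages=[{"role": "user", "content": "Write a haiku about merging models."}],
    # Standard OpenAI-style sampler parameters (values are illustrative)
    temperature=0.7,
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    # Non-standard samplers go in extra_body, if the backend supports them
    extra_body={
        "top_k": 40,
        "repetition_penalty": 1.1,
        "min_p": 0.05,
    },
)
print(response.choices[0].message.content)
```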