mlabonne/AlphaMonarch-7B
- Task: Text generation
- Concurrency cost: 1
- Model size: 7B
- Quantization: FP8
- Context length: 8k
- Published: Feb 14, 2024
- License: cc-by-nc-4.0
- Architecture: Transformer

mlabonne/AlphaMonarch-7B is a 7 billion parameter DPO fine-tuned language model developed by mlabonne, based on a merge of several models including NeuralMonarch-7B. It features an 8k context window and is optimized to retain strong reasoning abilities while significantly improving conversational capabilities. This model excels in instruction following, reasoning, and conversational tasks, making it suitable for general-purpose chat, roleplay, and storytelling applications.
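As a rough illustration, the model can be loaded and queried locally with Hugging Face transformers. The snippet below is a minimal sketch, assuming a CUDA-capable GPU with enough VRAM for fp16 weights and that the repository ships a chat template; the prompt and generation settings are illustrative, not recommendations from this model card.

```python
# Minimal sketch: load mlabonne/AlphaMonarch-7B with Hugging Face transformers.
# Assumes a GPU that fits the 7B model in fp16 and a chat template in the repo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlabonne/AlphaMonarch-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumption: fp16 fits the available hardware
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain DPO fine-tuning in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate within the model's 8k context window; sampling values are illustrative.
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```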


Popular Sampler Settings

The three most popular sampler configurations among Featherless users for this model tune the following parameters (a usage sketch follows the list):

- temperature
- top_p
- top_k
- frequency_penalty
- presence_penalty
- repetition_penalty
- min_p
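These parameters can be sent through an OpenAI-compatible chat completions endpoint. The sketch below is an assumption-laden example: the base_url, the API key placeholder, all parameter values, and the set of non-standard parameters accepted via extra_body are illustrative rather than documented by this page; check the provider's API documentation.

```python
# Sketch: passing the sampler parameters listed above to an OpenAI-compatible
# chat completions endpoint. The base_url and the extra_body parameter names
# are assumptions about the serving stack, not confirmed by this model card.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="mlabonne/AlphaMonarch-7B",
    messages=[{"role": "user", "content": "Write a short story opening."}],
    # Standard OpenAI sampler parameters (values are illustrative):
    temperature=0.8,
    top_p=0.95,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    # Non-standard samplers go through extra_body, if the server supports them:
    extra_body={"top_k": 40, "repetition_penalty": 1.1, "min_p": 0.05},
)
print(response.choices[0].message.content)
```

Parameters such as top_k, repetition_penalty, and min_p are not part of the standard OpenAI schema, which is why the sketch routes them through the openai client's extra_body passthrough instead of named arguments.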