mlabonne/NeuralMarcoro14-7B
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 8K · Published: Jan 6, 2024 · License: cc-by-nc-4.0 · Architecture: Transformer · Open Weights

NeuralMarcoro14-7B by mlabonne is a 7-billion-parameter language model, DPO fine-tuned from Marcoro14-7B-slerp, with an 8192-token context length. It improves measurably over its base model on both the Nous benchmark suite and the Open LLM Leaderboard, where it ranked as the top-performing 7B LLM as of January 2024. The model is optimized for general-purpose chat and instruction following, with enhanced reasoning and factual recall.
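For local inference, the model can be loaded with Hugging Face transformers. The snippet below is a minimal sketch, assuming the tokenizer ships a chat template and that a GPU with enough memory for the fp16 weights (roughly 14 GB for a 7B model) is available; apart from the model ID, none of the details come from the model card itself.

```python
# Minimal sketch: chat inference with mlabonne/NeuralMarcoro14-7B via transformers.
# Assumes the tokenizer provides a chat template; adjust dtype/device to your hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlabonne/NeuralMarcoro14-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 keeps the 7B weights within a single 24 GB GPU
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain DPO fine-tuning in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```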


Popular Sampler Settings

The three most popular parameter combinations among Featherless users for this model tune the following sampler parameters; an example request using them is sketched after the list.

temperature: scales the randomness of sampling (lower is more deterministic)
top_p: nucleus sampling; keep the smallest token set whose cumulative probability exceeds p
top_k: restrict sampling to the k most probable tokens
frequency_penalty: penalize tokens in proportion to how often they have already appeared
presence_penalty: penalize any token that has appeared at least once
repetition_penalty: multiplicative penalty on previously generated tokens
min_p: drop tokens whose probability is below min_p times that of the most likely token
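The sketch below shows how these parameters might be passed through an OpenAI-compatible chat completions endpoint. The base URL, the exact set of accepted extra fields, and all parameter values are assumptions for illustration, not the actual user configurations from the tabs above; consult the provider's API reference before relying on them.

```python
# Hedged sketch: chat completion with the sampler parameters listed above.
# Base URL and extra_body fields are assumptions; values are illustrative only.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="mlabonne/NeuralMarcoro14-7B",
    messages=[{"role": "user", "content": "Summarize the DPO objective."}],
    temperature=0.7,           # standard OpenAI sampler fields
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    extra_body={               # non-standard fields that many open-source servers accept
        "top_k": 40,
        "repetition_penalty": 1.1,
        "min_p": 0.05,
    },
)
print(response.choices[0].message.content)
```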