HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407
TEXT GENERATION
Concurrency Cost: 1 · Model Size: 12B · Quant: FP8 · Ctx Length: 32k · Published: Oct 6, 2024 · License: apache-2.0 · Architecture: Transformer · Open Weights

HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407 is a fine-tuned version of mistralai/Mistral-Nemo-Instruct-2407, developed by HumanLLMs. This model is specifically optimized to generate more human-like and conversational responses, enhancing natural language understanding and emotional intelligence. It was fine-tuned using Low-Rank Adaptation (LoRA) and Direct Preference Optimization (DPO) on a synthetic dataset of approximately 11,000 samples across 256 diverse topics. The model excels in conversational coherence, making it suitable for applications requiring natural and empathetic interactions.


Popular Sampler Settings

The top 3 parameter combinations used by Featherless users for this model each set the following sampler parameters:

- temperature
- top_p
- top_k
- frequency_penalty
- presence_penalty
- repetition_penalty
- min_p
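As a sketch of how these parameters are typically passed, the snippet below assembles them into a completion-style request payload such as an OpenAI-compatible API would accept. The numeric values here are illustrative placeholders, not the actual top configurations from the page, and `build_request` is a hypothetical helper.

```python
# Illustrative sampler values only; they are NOT the page's top-3 configs.
sampler_config = {
    "temperature": 0.8,        # randomness of token selection
    "top_p": 0.95,             # nucleus sampling: smallest token set with cumulative prob >= 0.95
    "top_k": 40,               # restrict sampling to the 40 most likely tokens
    "frequency_penalty": 0.0,  # penalize tokens proportionally to how often they appeared
    "presence_penalty": 0.0,   # penalize tokens that have appeared at all
    "repetition_penalty": 1.1, # multiplicative penalty on repeated tokens
    "min_p": 0.05,             # drop tokens below 5% of the top token's probability
}

def build_request(prompt: str, config: dict) -> dict:
    """Merge sampler settings into a completion-style request payload
    (hypothetical helper; field names follow the common OpenAI-style schema)."""
    return {
        "model": "HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407",
        "prompt": prompt,
        "max_tokens": 256,
        **config,
    }
```

The resulting dict can then be serialized as the JSON body of a completions request to whichever endpoint serves the model.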