EldritchLabs/Mistral-Nemo-Instruct-2407-heretic-noslop-MPOA Overview
This model is a 12-billion-parameter instruction-tuned language model built on the mistralai/Mistral-Nemo-Instruct-2407 base. It has been modified to change its output behavior in two ways: reduced 'slop' in generations and a minimized refusal rate.
Key Modifications and Characteristics
- Slop Reduction: The model was processed with a development version of Heretic (Git commit 1cfd09d7f3a4d50793d5c3948a6c74aac108f182) to reduce 'slop' in its output.
- MPOA Application: Manually Processed Output Adjustment (MPOA) was calibrated to minimize the model's refusal rate, making this an uncensored variant.
- Base Model: mistralai/Mistral-Nemo-Instruct-2407.
- Context Length: 32768 tokens (see the loading sketch after this list).
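A minimal loading-and-generation sketch using the Hugging Face transformers library. The repository ID is taken from the model name above; the prompt, dtype, and sampling parameters are illustrative assumptions, not recommended settings:

```python
# Sketch: load the model and run a short chat-templated generation.
# Assumes sufficient GPU memory for a 12B model in bfloat16; the prompt
# and sampling parameters are illustrative, not tuned values.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EldritchLabs/Mistral-Nemo-Instruct-2407-heretic-noslop-MPOA"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize what a LoRA adapter is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```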
LoRA Extraction
This repository also contains a LoRA adapter extracted from the full EldritchLabs/Mistral-Nemo-Instruct-2407-heretic-noslop-MPOA model with mergekit, using mistralai/Mistral-Nemo-Instruct-2407 as the base model. The adapter enables efficient fine-tuning or deployment in scenarios where the full model weights are not required.
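A sketch of applying the extracted adapter on top of the base model with the peft library. The adapter location below is an assumption (the adapter may live in a subfolder of this repository; adjust the path to match the actual layout):

```python
# Sketch: apply the extracted LoRA adapter to the base model via peft.
# The adapter location below is an assumption; point it at wherever the
# adapter weights actually live in this repository.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base_id = "mistralai/Mistral-Nemo-Instruct-2407"
adapter_id = "EldritchLabs/Mistral-Nemo-Instruct-2407-heretic-noslop-MPOA"  # assumed adapter path

base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

# Optionally fold the adapter into the base weights for standalone deployment.
model = model.merge_and_unload()
```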
Ideal Use Cases
This model is particularly suited for applications where:
- Uncensored responses are a requirement.
- Direct, unfiltered output without built-in refusal behavior is preferred.
- Reduced 'slop' in generations is desired.