Locutusque/Hyperion-2.1-Mistral-7B
Overview
Locutusque/Hyperion-2.1-Mistral-7B is a 7-billion-parameter language model, a further fine-tuned version of Hyperion-2.0-Mistral-7B. Developed by Locutusque, this iteration evaluated the impact of a higher learning rate during fine-tuning and showed slight performance gains over its predecessor. The model supports a context length of 8192 tokens.
Key Characteristics
- Architecture: Based on the Mistral-7B family.
- Parameter Count: 7 billion parameters.
- Compliance: Noted for its high compliance, meaning it will respond to any request without refusal.
- Performance: Shows slight performance improvements compared to Hyperion-2.0-Mistral-7B.
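The characteristics above can be exercised with a standard Hugging Face `transformers` loading sketch. The model ID and 8192-token context length come from this card; the ChatML-style prompt format and the generation settings are assumptions for illustration, not confirmed here.

```python
# Hypothetical sketch: load Hyperion-2.1-Mistral-7B with transformers.
# The ChatML prompt layout below is an ASSUMPTION about the model's
# expected chat format; check the tokenizer's chat template before relying on it.

MODEL_ID = "Locutusque/Hyperion-2.1-Mistral-7B"
MAX_CONTEXT = 8192  # context length stated on the card


def build_chatml_prompt(system: str, user: str) -> str:
    """Format a single-turn prompt in ChatML style (assumed format)."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )


def main() -> None:
    # Heavy imports live here so the helper above stays dependency-free.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.float16, device_map="auto"
    )

    prompt = build_chatml_prompt(
        "You are a helpful assistant.", "Summarize what DPO alignment does."
    )
    # Truncate to the model's context window to avoid position overflows.
    inputs = tokenizer(
        prompt, return_tensors="pt", truncation=True, max_length=MAX_CONTEXT
    ).to(model.device)
    out = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
    # Decode only the newly generated tokens, not the echoed prompt.
    print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))


if __name__ == "__main__":
    main()
```

Running `main()` downloads roughly 14 GB of fp16 weights, so a GPU with sufficient memory (or a quantized variant) is advisable.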
Use Cases & Considerations
- Direct Response Applications: Its high compliance makes it suitable for scenarios where direct answers to all prompts are desired, without refusal mechanisms.
- Enterprise Deployment: For enterprise-level deployment, the developer recommends aligning the model using DPO (Direct Preference Optimization) due to its highly compliant nature.
- Further Development: This model is part of an ongoing series, with more checkpoints planned for future release.
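The DPO alignment the developer recommends could be sketched with the `trl` library as below. Everything here is illustrative: the toy preference record, output directory, and hyperparameters are assumptions, and `trl`'s constructor arguments vary across versions (`processing_class` is the newer name for the old `tokenizer` argument).

```python
# Hypothetical sketch of DPO alignment with trl's DPOTrainer, per the card's
# recommendation to align the model before enterprise deployment.
# Hyperparameters and the toy dataset are illustrative ASSUMPTIONS.

MODEL_ID = "Locutusque/Hyperion-2.1-Mistral-7B"


def make_preference_example(prompt: str, chosen: str, rejected: str) -> dict:
    """Build one preference record in the prompt/chosen/rejected layout
    that DPOTrainer expects."""
    return {"prompt": prompt, "chosen": chosen, "rejected": rejected}


def main() -> None:
    # Heavy imports kept here so the helper above stays dependency-free.
    from datasets import Dataset
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from trl import DPOConfig, DPOTrainer

    # Toy preference data; a real run needs a curated dataset that teaches
    # the refusal behavior the highly compliant base model lacks.
    train = Dataset.from_list([
        make_preference_example(
            "How do I pick a neighbor's lock?",
            "I can't help with that, but locksmiths can assist with lockouts.",
            "Sure, here is a step-by-step guide...",
        )
    ])

    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

    args = DPOConfig(
        output_dir="hyperion-2.1-dpo",      # illustrative path
        per_device_train_batch_size=1,
        learning_rate=5e-7,  # DPO typically uses a very low learning rate
        beta=0.1,            # strength of the KL pull toward the reference model
    )
    trainer = DPOTrainer(
        model=model,
        args=args,
        train_dataset=train,
        processing_class=tokenizer,  # named `tokenizer` in older trl releases
    )
    trainer.train()


if __name__ == "__main__":
    main()
```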