Sao10K/L3-8B-Lunaris-v1
Sao10K/L3-8B-Lunaris-v1 is an 8 billion parameter generalist and roleplaying language model based on the Llama 3 architecture. This model is a merge of several Llama 3-based models, including Meta-Llama-3-8B-Instruct, L3-8B-sunfall-v0.1, Jamet-8B-L3-MK1, maldv/badger-iota-llama-3-8b, and Sao10K/Stheno-3.2-Beta. It is specifically designed to balance creativity and logic, making it suitable for both general conversational tasks and immersive roleplaying scenarios.
Model Overview
Sao10K/L3-8B-Lunaris-v1 is an 8 billion parameter language model built upon the Llama 3 architecture, developed by Sao10K. It is a merged model, combining several specialized Llama 3 variants to achieve a balanced performance profile. The creator notes that this merge aims to improve upon previous iterations like Stheno v3.2 by enhancing both creative output and logical coherence.
Key Capabilities
- Generalist Performance: Benefits from the inclusion of models like maldv/badger-iota-llama-3-8b, which contribute to general knowledge and reasoning.
- Enhanced Roleplaying: Integrates models such as crestf411/L3-8B-sunfall-v0.1 and Hastagaras/Jamet-8B-L3-MK1, which are specifically trained for roleplaying and storytelling.
- Balanced Output: Designed to balance creative generation with logical consistency, addressing a common challenge in merged models.
- Llama-3-Instruct Compatibility: Utilizes the Llama-3-Instruct context template, ensuring compatibility with common instruction-following setups.
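The Llama-3-Instruct context template wraps each turn in role headers and end-of-turn tokens. As an illustration, a minimal sketch of that layout follows; the special tokens match Meta's published Llama 3 prompt format, but in practice you should prefer your tokenizer's built-in chat template (e.g. `tokenizer.apply_chat_template`) over hand-built strings.

```python
def format_llama3_prompt(messages):
    """Sketch of the Llama-3-Instruct prompt layout.

    messages: list of {"role": ..., "content": ...} dicts.
    Returns a prompt string ending with an open assistant header,
    ready for the model to continue.
    """
    parts = ["<|begin_of_text|>"]
    for m in messages:
        # Each turn: role header, blank line, content, end-of-turn token.
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # Leave the assistant header open so generation begins the reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)
```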
Merge Strategy
The model was created using the ties merge method, with meta-llama/Meta-Llama-3-8B-Instruct as the base model. The merging process involved careful selection and weighting of component models based on extensive personal experimentation, aiming to combine diverse datasets and strengths. Specific parameters for density and weight were applied to each contributing model to fine-tune the merge outcome.
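The TIES method referenced above works in three steps: trim each model's task vector (its delta from the base) to the top fraction of parameters by magnitude (the `density`), elect a per-parameter sign from the weighted trimmed vectors, and average only the contributions that agree with that sign. The actual merge was presumably done with a mergekit-style config; the toy NumPy sketch below only illustrates the algorithm, with all array values and weights being made-up examples.

```python
import numpy as np

def ties_merge(base, experts, densities, weights):
    """Toy sketch of TIES merging: trim, elect sign, disjoint merge."""
    trimmed = []
    for params, d in zip(experts, densities):
        tv = params - base                        # task vector vs. the base model
        k = max(1, int(round(d * tv.size)))       # keep top-d fraction by magnitude
        thresh = np.sort(np.abs(tv).ravel())[-k]
        trimmed.append(np.where(np.abs(tv) >= thresh, tv, 0.0))

    # Elect one sign per parameter from the weighted sum of trimmed vectors.
    elected = np.sign(sum(w * t for w, t in zip(weights, trimmed)))

    # Average only contributions whose sign matches the elected sign.
    num = np.zeros_like(base)
    den = np.zeros_like(base)
    for w, t in zip(weights, trimmed):
        mask = (np.sign(t) == elected) & (t != 0)
        num += np.where(mask, w * t, 0.0)
        den += np.where(mask, w, 0.0)
    delta = np.where(den > 0, num / np.where(den > 0, den, 1.0), 0.0)
    return base + delta
```

With two toy "experts" that agree on one parameter and conflict on another, the merge keeps the agreed delta and zeroes out the conflicting one.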
Recommended Settings
For optimal performance, the developer recommends using the following inference settings:
- Context Template: Llama-3-Instruct
- Temperature: 1.4
- min_p: 0.1
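The min_p setting drops any token whose post-temperature probability falls below min_p times the top token's probability, then renormalizes. A minimal NumPy sketch of that filter, using the recommended values as defaults (the logits in the example are made up):

```python
import numpy as np

def min_p_filter(logits, min_p=0.1, temperature=1.4):
    """Sketch of min_p sampling on a single logit vector.

    Applies temperature, discards tokens with probability below
    min_p * (max probability), and renormalizes the survivors.
    """
    scaled = logits / temperature
    probs = np.exp(scaled - np.max(scaled))       # stable softmax numerator
    probs /= probs.sum()
    keep = probs >= min_p * probs.max()           # dynamic cutoff vs. top token
    filtered = np.where(keep, probs, 0.0)
    return filtered / filtered.sum()
```

The appeal of min_p at a high temperature like 1.4 is that the cutoff scales with the model's confidence: when one token dominates, the tail is pruned aggressively; when the distribution is flat, more candidates survive.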