Olethros-8B: An Expressive 8B Instruction-Tuned Model
Olethros-8B is an instruction-tuned large language model by lodrick-the-lafted, built on the L3-8b (Llama 3 8B) architecture. It was fine-tuned on approximately 6000 generations from Opus, with the goal of giving the model a distinct "sovl": outputs that are more expressive, engaging, and nuanced.
Key Characteristics
- Base Model: L3-8b architecture.
- Parameter Count: 8 billion parameters.
- Context Length: Supports an 8192-token context window.
- Fine-tuning: Instruction-tuned with a focus on enhancing expressive generation through Opus data.
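Since the base is Llama 3 8B, prompts presumably follow the Llama 3 instruct template, though the card does not state this explicitly. A minimal sketch of building a single-turn prompt in that format (the `format_llama3_prompt` helper is illustrative, not part of the release):

```python
def format_llama3_prompt(system: str, user: str) -> str:
    """Build a single-turn prompt in the Llama 3 instruct format.

    Assumes Olethros-8B inherits the Llama 3 chat template; check the
    model's tokenizer_config for the authoritative template.
    """
    return (
        "<|begin_of_text|>"
        f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>"
        f"<|start_header_id|>user<|end_header_id|>\n\n{user}<|eot_id|>"
        # Generation continues from the open assistant header.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = format_llama3_prompt(
    "You are an expressive storyteller.",
    "Describe a storm rolling over the sea.",
)
```

In practice, `tokenizer.apply_chat_template` from the `transformers` library does this for you when the template ships with the model.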
Available Quantizations
Olethros-8B is available in several quantized formats for deployment across different hardware configurations:
- GGUF: Static GGUF quantizations provided by mradermacher.
- AWQ: Quantizations available directly from lodrick-the-lafted.
- Exl2: A wide range of exl2 quantizations, from 2.25bpw to 6.0bpw, provided by blockblockblock, offering flexibility for performance and memory trade-offs.
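To make the exl2 memory trade-off concrete, the weight-only footprint can be estimated as parameters × bits-per-weight / 8. A small sketch (the helper name is illustrative; actual VRAM use is higher once the KV cache and activations for the 8192-token context are included):

```python
def weight_footprint_gb(n_params: float, bpw: float) -> float:
    """Approximate weight-only memory footprint in gigabytes.

    n_params: total parameter count (8e9 for Olethros-8B).
    bpw: bits per weight of the chosen quantization.
    Excludes KV cache and activation overhead, which grow with context.
    """
    return n_params * bpw / 8 / 1e9

# Rough weight footprints across the available exl2 range:
for bpw in (2.25, 4.0, 6.0):
    print(f"{bpw} bpw -> ~{weight_footprint_gb(8e9, bpw):.2f} GB")
```

By this estimate the 2.25bpw build needs roughly 2.25 GB for weights while the 6.0bpw build needs about 6 GB, which is why the lower-bpw variants fit on smaller GPUs at some cost in output quality.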
Potential Use Cases
Given its specialized fine-tuning, Olethros-8B is particularly well-suited for applications requiring:
- Creative Content Generation: Crafting engaging narratives, dialogues, or descriptive text.
- Role-playing Scenarios: Generating responses with character and depth.
- Instruction Following: Producing high-quality outputs based on specific instructions, with an added layer of expressiveness.