Sekhmet_Bet-L3.1-8B-v0.2 Overview
Sekhmet_Bet-L3.1-8B-v0.2 is an 8-billion-parameter language model developed by ChaoticNeutrals, intended to provide helpful responses and guidance across general, creative, and conversational tasks. This iteration is fine-tuned on a combination of private Hathor_0.85 instructions, a small creative-writing dataset, and roleplaying chat pairs, building on the Sekhmet_Aleph-L3.1-8B-v0.1 base.
Key Characteristics
- Architecture: Based on the Llama 3.1 family.
- Parameter Count: 8 billion parameters.
- Context Length: Features an extended context window of 32768 tokens.
- Training Focus: Primarily trained on creative writing and roleplaying data, alongside instructional datasets.
- Censorship: Aims to provide a relatively uncensored alternative to standard Llama 3.1 Instruct models.
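Because the model belongs to the Llama 3.1 family, prompts are typically assembled with the standard Llama 3.1 chat tokens. The sketch below is an assumption based on the stock Llama 3.1 template, not something stated in this card; in practice, the chat template bundled with the model's own tokenizer is authoritative (e.g. via `tokenizer.apply_chat_template`).

```python
def build_llama31_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt using the standard Llama 3.1 chat tokens.

    This mirrors the stock Llama 3.1 template; the template shipped with the
    model's tokenizer should be preferred when available.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n" + system + "<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n" + user + "<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama31_prompt(
    system="You are a creative writing assistant.",
    user="Write a short scene set in a desert temple.",
)
print(prompt)
```

The trailing assistant header leaves the prompt open for the model to continue, which is how generation is normally primed with this template.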
Intended Use Cases
This model is particularly suited for applications requiring:
- Creative Writing: Generating imaginative text, stories, and descriptive content.
- Roleplaying: Engaging in interactive, character-driven conversational scenarios.
- Extended Context Tasks: Handling prompts and responses that require a large memory of previous interactions or extensive input.
Important Considerations
This model was assembled quickly with a low learning rate, so its performance may not match that of Hathor versions 0.5, 0.85, or 1.0. GGUF and EXL2 quantizations are available via links provided by Reiterate3680, Bartowski, and Nitral-AI.