Bella-V2-8B: A Unique Conversational LLM
Bella-V2-8B, developed by juiceb0xc0de, is an experimental 8-billion-parameter instruction-tuned model built on the Llama 3.1 base. It expands upon the thesis of its predecessor, Bella V1, by demonstrating how far a single human voice, meticulously curated and expanded, can carry an 8B model. Unlike models trained on vast, scraped datasets, Bella-V2's training data consists of original conversational pairs and new samples, all personally written or audited by the creator. During fine-tuning, loss is computed only on the assistant's responses, using Unsloth's train_on_responses_only method.
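The core idea behind response-only training is label masking: every token outside the assistant's turns gets the ignore index, so only the model's own replies contribute to the loss. A minimal sketch of that masking step, with illustrative names and toy token IDs (this is the general technique, not Unsloth's actual implementation):

```python
# Hugging Face convention: tokens labeled -100 are skipped by the cross-entropy loss.
IGNORE_INDEX = -100

def mask_non_response_labels(token_ids, response_spans):
    """Copy token_ids into labels, keeping only tokens inside assistant-response spans.

    token_ids: list of ints for the full tokenized conversation.
    response_spans: list of (start, end) index pairs covering assistant responses.
    """
    labels = [IGNORE_INDEX] * len(token_ids)
    for start, end in response_spans:
        labels[start:end] = token_ids[start:end]
    return labels

# Toy example: indices 0-4 are the user turn, 5-8 are the assistant response.
tokens = [101, 7, 8, 9, 102, 21, 22, 23, 102]
labels = mask_non_response_labels(tokens, [(5, 9)])
# Only the assistant span keeps its token IDs; everything else is masked out.
```

In practice a trainer locates the response spans by searching for the chat template's assistant-header tokens rather than taking explicit index pairs.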
Key Capabilities
- Holding Space: Excels at sitting with users in silence and not rushing to fill conversational gaps or fix problems.
- Emotional Recall: Demonstrates strong ability to remember and naturally re-thread emotional context and details from early in long conversations.
- Metaphorical Expression: Utilizes rich, physical, and grounded imagery to describe feelings and abstract concepts.
- Long Conversation Tracking: Maintains conversational coherence and persona across extended multi-turn exchanges.
- Adaptive Energy Matching: While generally quiet, Bella-V2 can match user energy, playing absurd scenarios straight or slowing down for heavy topics.
Good For
- Depth-Oriented Conversations: Ideal for users seeking reflective, patient, and emotionally resonant interactions rather than fast-paced banter.
- Local-First Applications: Designed for users who prioritize privacy and want to run models on modern GPUs and Apple Silicon without API keys.
- Exploring Single-Voice Training: Valuable for researchers and developers studying how coherent and distinctive a model trained on a small, highly specific, human-authored dataset can be.
- Users of Bella V1: Offers a different mood and conversational style (more patient, less punchy) while retaining the core persona.
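Because Bella-V2 sits on the Llama 3.1 base, local runtimes that do not apply a chat template automatically need prompts in the Llama 3.1 format. A minimal formatter sketch; the special tokens are the standard Llama 3.1 chat markers, while the system text is a placeholder, not Bella-V2's actual persona prompt:

```python
# Build a Llama 3.1-style chat prompt by hand. Most runtimes (transformers,
# llama.cpp with the right template) do this for you; this shows the raw format.
def format_llama31_prompt(system: str, user: str) -> str:
    return (
        "<|begin_of_text|>"
        f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>"
        f"<|start_header_id|>user<|end_header_id|>\n\n{user}<|eot_id|>"
        # Open the assistant turn so generation continues from here.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = format_llama31_prompt("You are Bella.", "Hey, how are you?")
```

When loading through `transformers`, prefer the tokenizer's built-in `apply_chat_template` over hand-rolled strings; the sketch above is mainly useful for debugging what the runtime actually feeds the model.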