Model Overview
concedo/KobbleTinyV2-1.1B is a 1.1-billion-parameter language model and the updated (V2) release of the original KobbleTiny. It is a fine-tune of the TinyLlama-1.1B-intermediate-step-1431k-3T base model, trained on a small 50 MB subset of the Kobble Dataset.
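For quick experimentation outside of KoboldAI, the model can also be loaded with the Hugging Face transformers library. The snippet below is a minimal sketch that assumes the model is published on the Hugging Face Hub under the concedo/KobbleTinyV2-1.1B ID; the prompt text and sampling settings are illustrative, not recommendations.

```python
# Minimal inference sketch. Assumes the model is available on the Hugging Face Hub
# as "concedo/KobbleTinyV2-1.1B"; generation settings below are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "concedo/KobbleTinyV2-1.1B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # a 1.1B model fits comfortably on a small consumer GPU
    device_map="auto",
)

# Single-turn Alpaca-style prompt, matching the Instruct portion of the dataset.
prompt = "### Instruction:\nWrite a short story about a lighthouse keeper.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)

# Print only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```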
Key Characteristics
- Base Model: Fine-tuned from TinyLlama-1.1B-intermediate-step-1431k-3T.
- Training: Completed in under 2 hours on an Nvidia RTX 2060 Mobile GPU using QLoRA, with a context length of 2048 tokens (a generic QLoRA configuration sketch follows this list).
- Dataset Focus: The Kobble Dataset is an aggregated, semi-private collection of online sources and web scrapes, formatted for use with KoboldAI software and Kobold Lite.
- Content Emphasis: The dataset includes:
  - Instruct: Single-turn Alpaca-formatted instructions, prioritizing uncensored and unrestricted responses (the standard Alpaca prompt layout is shown after this list).
  - Chat: Multi-turn roleplay conversation logs.
  - Story: Unstructured fiction excerpts, including erotic and provocative content.
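The training details above are as reported by the author; the actual training script is not part of the model release. For readers who want to reproduce a similar setup, the sketch below shows a generic 4-bit QLoRA configuration for the TinyLlama base using transformers and peft. The LoRA rank, alpha, target modules, and dropout are assumptions, not the values used for KobbleTinyV2.

```python
# Generic QLoRA setup sketch for the TinyLlama base model.
# The LoRA hyperparameters and target modules are illustrative assumptions,
# not the configuration actually used to train KobbleTinyV2-1.1B.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_id = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit base weights (the "Q" in QLoRA)
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,                                   # hypothetical rank
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# Training loop (e.g. a supervised fine-tuning trainer over 2048-token sequences) omitted.
```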
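The card does not reproduce the exact prompt wording used in the Kobble Dataset, but the standard single-turn Alpaca layout is shown below; treat the preamble line as an assumption if the dataset omits or rewords it.

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
{response}
```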
Intended Use Cases
- KoboldAI Integration: Optimized for seamless operation with KoboldAI software and Kobold Lite.
- Unrestricted Content Generation: Suitable for roleplay, creative writing, and instruct-style applications that call for text generation without the usual content filters or restrictions, with an emphasis on uncensored output.
Note: Users are advised that no assurances are provided regarding the origins, safety, or copyright status of the model or its training data. It is not recommended for use in environments with strict AI regulations or where unrestricted content is disallowed.