tinyflame1572/shanebot is a 3.1-billion-parameter instruction-tuned causal language model, finetuned from unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit. Developed by tinyflame1572, this Qwen2-based model was trained using Unsloth and Hugging Face's TRL library for accelerated finetuning. It offers a 32768-token context length, making it suitable for general text generation tasks.
Model Overview
tinyflame1572/shanebot is a 3.1-billion-parameter instruction-tuned language model, developed by tinyflame1572. It is based on the Qwen2 architecture and was finetuned from unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit.
Key Characteristics
- Architecture: Qwen2-based, finetuned from a 3.1-billion-parameter base model.
- Training Efficiency: This model was finetuned using Unsloth and Hugging Face's TRL library, enabling roughly 2x faster training.
- Context Length: Supports a context window of 32768 tokens.
- License: Released under the Apache-2.0 license.
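The 32768-token window listed above bounds the prompt plus any generated tokens. A minimal sketch of a pre-inference budget check (the helper name `fits_context` is illustrative, not part of the model's API):

```python
CONTEXT_LENGTH = 32768  # context window reported on this model card


def fits_context(prompt_tokens: int, max_new_tokens: int,
                 context_length: int = CONTEXT_LENGTH) -> bool:
    """Return True if the prompt plus the requested generation budget
    fits inside the model's context window."""
    return prompt_tokens + max_new_tokens <= context_length
```

For example, a 32000-token prompt leaves at most 768 tokens of generation headroom before the window is exhausted.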
Intended Use Cases
This model is suitable for a variety of general text generation and instruction-following tasks, leveraging its Qwen2 foundation and efficient finetuning. Its 3.1 billion parameters make it a compact yet capable option for applications where resource efficiency is important.
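A minimal usage sketch with Hugging Face transformers, assuming the repository exposes standard Qwen2 config and tokenizer files; the `build_prompt` helper hand-writes the Qwen2 ChatML format and is illustrative (in practice, `tokenizer.apply_chat_template` is the safer route):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "tinyflame1572/shanebot"


def build_prompt(user_message: str) -> str:
    """Format a single-turn prompt in the Qwen2 ChatML style used by the base model."""
    return (
        "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Load the model, run greedy generation, and return only the new text."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens so only the assistant's reply is decoded.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

Calling `generate(build_prompt("Summarize the Qwen2 architecture."))` would then return the model's reply as a plain string.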