tiyupi-ece/TUP-Manila-ECE-Bot
Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Nov 25, 2025 · License: apache-2.0 · Architecture: Transformer · Open Weights
The tiyupi-ece/TUP-Manila-ECE-Bot is a 7.6 billion parameter Qwen2-based causal language model, fine-tuned by tiyupi-ece. It was trained with Unsloth and Hugging Face's TRL library for improved training efficiency, and is designed for general language tasks, leveraging its Qwen2 architecture and efficient fine-tuning process.
Overview
The tiyupi-ece/TUP-Manila-ECE-Bot is a 7.6 billion parameter language model, fine-tuned by tiyupi-ece. It is based on the Qwen2 architecture and was optimized for training speed using Unsloth and Hugging Face's TRL library, achieving roughly 2x faster training than standard fine-tuning methods. The model is licensed under Apache-2.0.
Key Characteristics
- Base Model: Fine-tuned from unsloth/qwen2.5-7b-unsloth-bnb-4bit, indicating a foundation in the Qwen2.5 series.
- Efficient Training: Leverages Unsloth for accelerated fine-tuning, making it a good candidate for applications requiring rapid iteration or deployment.
- Parameter Count: Features 7.6 billion parameters, offering a balance between capability and computational requirements.
- Context Length: Supports a context length of 32768 tokens, suitable for processing moderately long inputs.
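Qwen2.5-family chat models typically use the ChatML prompt format, so a sketch of how a prompt might be assembled and budgeted against the 32768-token window is shown below. The ChatML special tokens and the characters-per-token heuristic are assumptions, not details taken from this model card; for exact counts, use the model's own tokenizer.

```python
# Sketch: assemble a ChatML-style prompt (assumed format for Qwen2.5-based chat
# models) and roughly check it against the 32k context window stated on the card.

MAX_CONTEXT_TOKENS = 32768      # context length from the model card
CHARS_PER_TOKEN_ESTIMATE = 4    # rough heuristic for English text (assumption)

def build_chatml_prompt(system: str, user: str) -> str:
    """Format a single-turn conversation in the ChatML layout."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

def fits_context(prompt: str, reserve_for_output: int = 1024) -> bool:
    """Estimate whether a prompt leaves room for `reserve_for_output` new tokens."""
    est_tokens = len(prompt) // CHARS_PER_TOKEN_ESTIMATE
    return est_tokens + reserve_for_output <= MAX_CONTEXT_TOKENS

prompt = build_chatml_prompt("You are a helpful ECE tutor.", "Explain Ohm's law.")
print(fits_context(prompt))  # a short prompt easily fits a 32k window
```

In practice the tokenizer's `apply_chat_template` method handles this formatting; the manual version above only illustrates what the 32k budget has to cover.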
Good For
- Developers looking for a Qwen2-based model that has undergone efficient fine-tuning.
- Applications where the benefits of Unsloth's accelerated training process are desirable.
- General language generation and understanding tasks within its parameter and context limits.
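For trying the model out, it can presumably be loaded like any Hugging Face causal LM. The snippet below is a minimal sketch assuming the repo follows the standard transformers layout; the generation settings are illustrative, not taken from the card, and the download/inference step is kept behind a main guard since it pulls a 7.6B-parameter checkpoint.

```python
# Sketch: loading the model via the standard transformers causal-LM API.
# ASSUMPTION: the repo follows the usual layout; settings below are illustrative.

MODEL_ID = "tiyupi-ece/TUP-Manila-ECE-Bot"
MAX_NEW_TOKENS = 256  # illustrative generation budget

def generate(prompt: str) -> str:
    # Imports kept local so the constants above stay importable without transformers.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype="auto",   # respect the checkpoint's stored dtype
        device_map="auto",    # place layers on available GPUs/CPU (needs accelerate)
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=MAX_NEW_TOKENS)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )

if __name__ == "__main__":
    # Downloads ~7.6B parameters on first run; requires sufficient VRAM/RAM.
    print(generate("Summarize Kirchhoff's current law in one sentence."))
```

Since the card lists an FP8 quant, serving stacks with native FP8 support may be a better fit for deployment than the generic loading path shown here.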