TinyAiroboros-2.2.1 by aloobun is a 1.1 billion parameter language model fine-tuned from PY007/TinyLlama-1.1B-Chat-v0.3 on 15,000 rows of the Airoboros-2.2.1 dataset to strengthen its conversational and instruction-following capabilities. It shows moderate performance across standard benchmarks, including ARC Challenge (0.2671 accuracy) and PIQA (0.7057 accuracy), making it suitable for general-purpose text generation and chat applications where a small model footprint is an advantage.
Overview
aloobun/TinyAiroboros-2.2.1 is a compact 1.1 billion parameter language model built on the PY007/TinyLlama-1.1B-Chat-v0.3 architecture. It was fine-tuned on a 15,000-row subset of the Airoboros-2.2.1 dataset, a process intended to improve its instruction-following and conversational abilities.
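Since the checkpoint inherits a standard causal-LM layout from its TinyLlama base, it should load with the usual Hugging Face transformers calls. The sketch below is illustrative, not taken from this card; the dtype and device settings are assumptions you may need to adjust for your hardware.

```python
# Minimal sketch: loading the fine-tuned checkpoint with transformers.
# Assumes a standard causal-LM layout inherited from the TinyLlama base.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aloobun/TinyAiroboros-2.2.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # ~2.2 GB in fp16 for 1.1B parameters
    device_map="auto",
)

print(f"Loaded {model.num_parameters() / 1e9:.1f}B parameters")
```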
Key Capabilities & Performance
This model offers general text generation and chat functionality. Its accuracy on standard benchmarks is balanced for its size (a reproduction sketch follows this list):
- ARC Challenge: 0.2671 accuracy
- ARC Easy: 0.5673 accuracy
- BoolQ: 0.6040 accuracy
- HellaSwag: 0.4155 accuracy
- PIQA: 0.7057 accuracy
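These tasks all appear in EleutherAI's lm-evaluation-harness, though this card does not state which harness, few-shot settings, or batch size produced the numbers above. As a hedged sketch, the snippet below shows how one might re-run the same task set; the `simple_evaluate` API shape follows lm-eval v0.4.x, and the zero-shot setting is an assumption.

```python
# Hedged sketch: re-running the listed benchmarks with EleutherAI's
# lm-evaluation-harness (v0.4.x API). The settings here are illustrative
# defaults, not necessarily the configuration behind the scores above.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=aloobun/TinyAiroboros-2.2.1,dtype=float16",
    tasks=["arc_challenge", "arc_easy", "boolq", "hellaswag", "piqa"],
    num_fewshot=0,  # assumption: zero-shot evaluation
    batch_size=8,
)

# In v0.4.x, per-task metrics are keyed like "acc,none".
for task, metrics in results["results"].items():
    print(task, metrics.get("acc,none"))
```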
Use Cases
Given its compact size and fine-tuning on a conversational dataset, TinyAiroboros-2.2.1 is particularly well-suited for:
- Resource-constrained environments: Deployable on hardware with limited memory and compute.
- General-purpose chat applications: Can be used for basic conversational agents.
- Instruction-following tasks: Capable of generating text from user prompts and instructions (see the prompting sketch after this list).
- Rapid prototyping: Its smaller size allows for quicker experimentation and deployment.
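For instruction-following use, a prompt template matters. This card does not state which template the fine-tune expects; the Vicuna-style format below is the one Airoboros datasets commonly use, so treat it as an assumption and verify against the upstream model card before relying on it.

```python
# Hedged sketch of a small instruction-following call. The prompt
# template (Vicuna/Airoboros style) is an assumption, not confirmed
# by this card.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="aloobun/TinyAiroboros-2.2.1",
    device_map="auto",
)

instruction = "List three uses for a small on-device language model."
prompt = (
    "A chat between a curious user and an assistant.\n"
    f"USER: {instruction}\nASSISTANT:"
)

out = generator(prompt, max_new_tokens=96, do_sample=True, temperature=0.7)
# generated_text includes the prompt; strip it to keep only the reply.
print(out[0]["generated_text"][len(prompt):].strip())
```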