922-Narra/llama-2-7b-chat-cebuano-v0.1
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · License: other · Architecture: Transformer
922-Narra/llama-2-7b-chat-cebuano-v0.1 is an experimental 7 billion parameter Llama 2 Chat model fine-tuned for the Cebuano language. With a 4096-token context length, this model aims to explore the effects of fine-tuning on a limited Cebuano chat dataset. It is designed for research into Cebuano language generation, though it may still produce mixed-language or nonsensical outputs.
Model Overview
922-Narra/llama-2-7b-chat-cebuano-v0.1 is an experimental language model based on the Llama 2 Chat architecture, featuring 7 billion parameters and a 4096-token context window. Developed by 922-Narra, this model represents an initial fine-tuning effort specifically for the Cebuano language.
Key Characteristics
- Cebuano Fine-tuning: The model was fine-tuned for one epoch on a dataset of approximately 10,000 lines of loosely formatted Cebuano chat data.
- Experimental Nature: This is an experimental release, and outputs are not guaranteed to be safe or accurate. It is explicitly noted as not suitable for production use.
- Multilingual Output Potential: Due to its experimental status and limited fine-tuning, the model may still generate responses in English, Tagalog, Taglish, or produce gibberish.
- Llama 2 Base: Built upon the Llama 2 7B Chat model, leveraging its foundational capabilities.
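Because the model inherits its chat format from the Llama 2 7B Chat base, prompts are expected to follow the standard Llama 2 chat template with `[INST]` and `<<SYS>>` markers. A minimal sketch of that formatting is below; the helper name and the example Cebuano strings are illustrative, not taken from the model card:

```python
def build_llama2_chat_prompt(system_prompt: str, user_message: str) -> str:
    """Wrap a system prompt and user message in the Llama 2 Chat
    [INST] / <<SYS>> template that this fine-tune inherits from
    its Llama 2 7B Chat base."""
    return (
        f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
        f"{user_message} [/INST]"
    )

# Illustrative Cebuano example:
# system: "You are a helpful assistant." / user: "What is the capital of the Philippines?"
prompt = build_llama2_chat_prompt(
    "Ikaw usa ka matinabangon nga katabang.",
    "Unsa ang kapital sa Pilipinas?",
)
print(prompt)
```

The model's completion would follow after the closing `[/INST]` marker; given the experimental fine-tune, the reply may come back in Cebuano, English, Tagalog, or a mix.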
Intended Use Cases
This model is primarily intended for:
- Research and Development: Exploring the effectiveness of fine-tuning large language models on low-resource languages like Cebuano.
- Linguistic Analysis: Investigating the impact of specific Cebuano datasets on model behavior and language generation.
- Early-stage Prototyping: Letting developers experiment with Cebuano language models and understand their current limitations and potential.