Model Overview
ibivibiv/bubo-bubo-13b is a 13-billion-parameter auto-regressive language model built on the Llama 2 transformer architecture. Developed by ibivibiv, the model is primarily focused on English-language tasks and is distinguished by its specialized training for summarization.
Key Capabilities
- Specialized Summarization: The model has undergone specific training for summarization tasks, with a particular emphasis on summarizing communication chains.
- Llama 2 Architecture: Leverages the robust Llama 2 transformer architecture for its underlying language generation capabilities.
- Alpaca-style Prompting: Uses the straightforward `### Instruction: <prompt> ### Response:` format for user interaction, making it easy to integrate into existing workflows (a usage sketch follows this list).
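A minimal usage sketch with the Hugging Face transformers library is shown below. The loading path is the standard AutoModelForCausalLM/AutoTokenizer route; the instruction wording and generation settings are illustrative assumptions, not values documented for this model.

```python
# Minimal sketch: load the model and query it with the Alpaca-style template.
# device_map="auto" assumes the accelerate package is installed; the generation
# settings are illustrative, not values documented for this model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibivibiv/bubo-bubo-13b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Alpaca-style prompt format from the model card.
prompt = (
    "### Instruction: Summarize the following email thread.\n\n"
    "<email thread text here>\n\n"
    "### Response:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
summary = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print(summary)
```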
Intended Use Cases
- Communication Chain Summarization: Ideal for condensing email threads, chat logs, or other sequential communication into concise summaries (a prompt-building sketch follows this list).
- General Summarization Tasks: While specialized, it can also be applied to other summarization needs where a focused, efficient model is beneficial.
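To illustrate the communication-chain use case, here is a hypothetical helper that flattens a chat or email thread into a single Alpaca-style summarization prompt; the "Sender: text" layout and the instruction wording are assumptions, not a format prescribed by the model author.

```python
# Hypothetical helper: flatten a chronological communication chain into one
# Alpaca-style summarization prompt. The "Sender: text" layout is an assumption.
def build_summary_prompt(messages):
    """messages: list of (sender, text) tuples in chronological order."""
    thread = "\n".join(f"{sender}: {text}" for sender, text in messages)
    return (
        "### Instruction: Summarize the following communication chain.\n\n"
        f"{thread}\n\n"
        "### Response:"
    )

prompt = build_summary_prompt([
    ("Alice", "Can we move the release to Friday?"),
    ("Bob", "Friday works, but QA needs the final build by Wednesday."),
    ("Alice", "Understood, I'll hand it off Tuesday evening."),
])
```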
Performance Notes
Benchmark scores are provided, though the developer notes that the model's primary purpose is summarization and that general benchmarks may not fully reflect this specialized strength. For instance, it reports an overall accuracy of 0.579 on the evaluation harness, with scores varying across individual MMLU subtasks.
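The evaluation harness is not named in the card. Assuming it is EleutherAI's lm-evaluation-harness (the tool behind Open LLM Leaderboard-style results), a sketch along these lines could be used to reproduce MMLU-style scores locally; exact argument names vary between harness versions, so treat this as an approximation rather than the developer's evaluation setup.

```python
# Approximate reproduction sketch assuming EleutherAI's lm-evaluation-harness
# (Python API, v0.4+); argument names may differ between harness versions.
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="hf",
    model_args="pretrained=ibivibiv/bubo-bubo-13b,dtype=float16",
    tasks=["mmlu"],
    batch_size=8,
)
print(results["results"])  # per-task scores, including MMLU subtasks
```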