QuixiAI/TinyDolphin-2.8-1.1b

1.1B parameters · BF16 weights · 2048-token context · Released Jan 21, 2024 · License: apache-2.0
Overview

TinyDolphin-2.8-1.1b: An Experimental Llama 2-based Model

TinyDolphin-2.8-1.1b is an experimental 1.1 billion parameter language model developed by Kearm and trained on Eric Hartford's new Dolphin 2.8 dataset. Because it adopts the Llama 2 architecture and tokenizer, it is compatible with many open-source projects built on Llama, and its compact size makes it suitable for applications with limited compute and memory.
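
The quickest way to try it is the transformers text-generation pipeline. The snippet below is a minimal sketch: the repo id matches this card, while the prompt and token budget are purely illustrative.

```python
# Minimal text-generation sketch using the transformers pipeline API.
from transformers import pipeline

generator = pipeline("text-generation", model="QuixiAI/TinyDolphin-2.8-1.1b")
result = generator("Write a short poem about the sea.", max_new_tokens=128)
print(result[0]["generated_text"])
```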

Key Capabilities

  • Creative Text Generation: Demonstrated ability to invent scenarios, draft letters with specific tones (e.g., sarcastic), and construct descriptive poems.
  • Nuanced Responses: Follows instructions about tone and content, as the card's example outputs show; a prompt sketch appears after this list.
  • Compact Footprint: Its 1.1B parameters allow for deployment in environments with restricted resources.
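
Dolphin-series models conventionally use the ChatML prompt format; the sketch below assumes this model's tokenizer ships a matching chat template and lets transformers apply it. The system and user messages and the sampling settings are illustrative, not prescribed by the card.

```python
# Sketch of a tone-controlled request. ASSUMPTION: the tokenizer ships a
# chat template (Dolphin models conventionally use ChatML); if it does not,
# apply_chat_template will raise and a raw prompt string is needed instead.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "QuixiAI/TinyDolphin-2.8-1.1b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are a helpful writing assistant."},
    {"role": "user", "content": "Draft a short, sarcastic letter declining a meeting invitation."},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```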

Good For

  • Experimental Applications: Ideal for developers exploring the capabilities of smaller, specialized language models.
  • Creative Writing Tasks: Can be used for generating imaginative stories, poems, or specific types of correspondence.
  • Resource-Constrained Environments: Suitable for integration into applications where memory and compute are limited but capable text generation is still required; a reduced-memory loading sketch follows.
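
For the resource-constrained deployments mentioned above, loading the weights in 16-bit (the card lists BF16) roughly halves memory versus FP32. A minimal sketch, assuming a recent torch/transformers stack and the accelerate package for device placement:

```python
# Reduced-memory load for constrained hardware. At 1.1B parameters,
# bfloat16 weights occupy roughly 2.2 GB.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "QuixiAI/TinyDolphin-2.8-1.1b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 checkpoint listed above
    device_map="auto",           # requires `accelerate`; spreads layers across available devices
)
```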