FelixChao/NarutoDolphin-10B
NarutoDolphin-10B by FelixChao is a 10.7 billion parameter language model, created by merging FelixChao/WizardDolphin-7B and FelixChao/NinjaDolphin-7B. This model is designed for general language generation tasks, leveraging the combined strengths of its constituent models. It offers a 4096-token context length, making it suitable for applications requiring moderate input and output lengths. Quantized versions are also available for optimized deployment.
NarutoDolphin-10B: A Merged Language Model
NarutoDolphin-10B is a 10.7-billion-parameter large language model developed by FelixChao. It is a merge of two distinct 7B-parameter models, FelixChao/WizardDolphin-7B and FelixChao/NinjaDolphin-7B; the fact that the result is larger than either parent suggests a layer-stacking (passthrough-style) merge rather than a simple weight average.
Key Characteristics
- Parameter Count: 10.7 billion parameters, offering a balance between performance and computational requirements.
- Context Length: Supports a 4096-token context window, shared between the prompt and the generated output, which is enough for moderately long inputs (see the token-budgeting sketch after this list).
- Origin: Created through a merge operation, combining the capabilities and knowledge bases of its two base models.
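Because the 4096-token window is shared between input and output, long prompts should be truncated to leave headroom for generation. Below is a minimal token-budgeting sketch using the model's tokenizer; the 256-token generation reserve is an arbitrary assumption, not a documented recommendation:

```python
from transformers import AutoTokenizer

# Tokenizer shipped with the merged model.
tokenizer = AutoTokenizer.from_pretrained("FelixChao/NarutoDolphin-10B")

CONTEXT_LEN = 4096   # model's context window
GEN_RESERVE = 256    # assumed headroom kept free for the generated output

prompt = "Summarize the trade-offs of merging two 7B language models."

# Truncate so prompt tokens + generated tokens fit within the window.
inputs = tokenizer(
    prompt,
    truncation=True,
    max_length=CONTEXT_LEN - GEN_RESERVE,
    return_tensors="pt",
)
print(f"Prompt uses {inputs['input_ids'].shape[1]} of {CONTEXT_LEN} tokens")
```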
Deployment Options
- Quantized Versions: Quantized builds of NarutoDolphin-10B are available thanks to contributions from s3nh. These builds, such as s3nh/NarutoDolphin-10B-GGUF, enable more efficient inference on a wider range of hardware; a loading sketch follows this list.
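A minimal sketch for running one of the GGUF quantizations with llama-cpp-python. The quant filename below is an assumption for illustration; check the s3nh/NarutoDolphin-10B-GGUF repository for the files actually published:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quantized file from the GGUF repo.
model_path = hf_hub_download(
    repo_id="s3nh/NarutoDolphin-10B-GGUF",
    filename="NarutoDolphin-10B.Q4_K_M.gguf",  # hypothetical quant choice
)

# n_ctx matches the model's 4096-token context window.
llm = Llama(model_path=model_path, n_ctx=4096)

out = llm("Explain model merging in one paragraph.", max_tokens=200)
print(out["choices"][0]["text"])
```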
Usage
The model can be loaded with the Hugging Face transformers library for text-generation tasks, as in the sketch below.
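This is a minimal sketch using the transformers pipeline API. It assumes the repository's tokenizer defines a chat template (common for Dolphin-derived models); if it does not, pass a plain string prompt to the pipeline instead. The sampling parameters are illustrative defaults, not values recommended by the author:

```python
import torch
from transformers import AutoTokenizer, pipeline

model_id = "FelixChao/NarutoDolphin-10B"

# Format a chat-style prompt with the tokenizer's own template.
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [{"role": "user", "content": "What is a merged language model?"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# fp16 weights with automatic device placement for GPU inference.
generator = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = generator(
    prompt,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
)
print(outputs[0]["generated_text"])
```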