Rijgersberg/Llama-2-7b-hf
Rijgersberg/Llama-2-7b-hf is the 7 billion parameter pretrained generative text model from Meta's Llama 2 family, converted to the Hugging Face Transformers format. It is an auto-regressive language model built on an optimized transformer architecture, intended for commercial and research use in English, and adaptable to a range of natural language generation tasks. The model was trained on 2 trillion tokens of publicly available online data and supports a 4k token context length.
Llama 2 7B Pretrained Model
This model is the 7 billion parameter pretrained version of Meta's Llama 2 family, provided in the Hugging Face Transformers format. Llama 2 models are a collection of pretrained and fine-tuned generative text models, with the fine-tuned Llama-2-Chat variants specifically optimized for dialogue use cases.
Key Capabilities & Features
- Architecture: Auto-regressive language model utilizing an optimized transformer architecture.
- Scale: This model has 7 billion parameters and is part of a family that also includes 13B and 70B variants.
- Training Data: Pretrained on 2 trillion tokens from a new mix of publicly available online data.
- Context Length: Supports a 4k token context length.
- Intended Use: Designed for commercial and research applications in English, suitable for various natural language generation tasks.
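Because the checkpoint is in the Transformers format, it can be loaded with the standard `AutoModelForCausalLM` API. The sketch below is a minimal, hedged example, assuming `transformers` and `torch` are installed; note that the base model has no chat template (that applies only to the Llama-2-Chat variants), so prompts are plain text continuations.

```python
# Constants describing the model covered by this card.
MODEL_ID = "Rijgersberg/Llama-2-7b-hf"
MAX_CONTEXT_TOKENS = 4096  # the 4k token context length noted above


def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Load the pretrained base model and continue `prompt` as plain text."""
    # Imports are local so the constants above are usable even
    # without torch/transformers installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.float16,  # half precision keeps a 7B model on one GPU
        device_map="auto",
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

Calling `generate("The three primary colors are")` downloads roughly 13 GB of weights on first use, so a GPU with sufficient memory (or CPU offloading via `device_map`) is advisable.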
When to Use This Model
- Foundation for Customization: Ideal for developers looking to adapt a powerful pretrained model for specific natural language generation tasks.
- Research & Development: Suitable for academic and commercial research on large language models (LLMs).
- English-centric Applications: Best for use cases primarily involving the English language.
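One common way to adapt a pretrained base model like this for a specific task is parameter-efficient fine-tuning. The sketch below uses LoRA via the `peft` library; the hyperparameters and target modules are illustrative assumptions, not values published with this model.

```python
BASE_MODEL_ID = "Rijgersberg/Llama-2-7b-hf"

# Example LoRA hyperparameters -- illustrative assumptions,
# not recommendations from the model authors.
LORA_RANK = 8
LORA_ALPHA = 16
LORA_DROPOUT = 0.05
TARGET_MODULES = ["q_proj", "v_proj"]  # attention projections in Llama layers


def build_lora_model():
    """Wrap the base model with LoRA adapters for causal-LM fine-tuning."""
    # Local imports so the settings above are readable/usable
    # without peft/transformers installed.
    from peft import LoraConfig, get_peft_model
    from transformers import AutoModelForCausalLM

    base = AutoModelForCausalLM.from_pretrained(BASE_MODEL_ID)
    config = LoraConfig(
        r=LORA_RANK,
        lora_alpha=LORA_ALPHA,
        lora_dropout=LORA_DROPOUT,
        target_modules=TARGET_MODULES,
        task_type="CAUSAL_LM",
    )
    return get_peft_model(base, config)
```

Training only the small adapter matrices keeps memory requirements far below full fine-tuning, which is why this approach is popular for customizing 7B-scale models on a single GPU.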