sharpbai/Llama-2-7b-hf
sharpbai/Llama-2-7b-hf is a 7 billion parameter pretrained generative text model developed by Meta and converted to the Hugging Face Transformers format. Part of the Llama 2 family, it uses an optimized transformer architecture and was trained on 2 trillion tokens with a 4k context length. The model is intended for commercial and research use in English across a variety of natural language generation tasks.
Llama-2-7b-hf Overview
This model is a 7 billion parameter variant of Meta's Llama 2, a collection of pretrained generative text models. It has been converted to the Hugging Face Transformers format, with its weight file split into 405MB chunks for efficient downloading. The Llama 2 family, developed by Meta, includes models ranging from 7B to 70B parameters, all built on an optimized transformer architecture.
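Because the weights are already in the Transformers format, they can be loaded with the standard `from_pretrained` API, which fetches and assembles the 405MB shards automatically. A minimal sketch, assuming `transformers` and `torch` are installed and the Meta license has been accepted on the Hub; `load_llama2` is a hypothetical helper name:

```python
REPO_ID = "sharpbai/Llama-2-7b-hf"

def load_llama2(repo_id: str = REPO_ID):
    """Download the split weight shards and assemble model + tokenizer.

    Imports are deferred so this module can be imported without
    `transformers`/`torch` installed.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    # torch_dtype="auto" keeps the checkpoint's stored precision
    # instead of upcasting everything to float32.
    model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype="auto")
    return tokenizer, model
```

Note that the first call downloads roughly 13 GB of shards; subsequent calls reuse the local Hugging Face cache.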
Key Characteristics
- Architecture: Auto-regressive language model using an optimized transformer architecture.
- Training Data: Pretrained on 2 trillion tokens from publicly available online data, with a data cutoff of September 2022.
- Context Length: Supports a context length of 4096 tokens.
- License: Governed by a custom commercial license from Meta, requiring acceptance before access.
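The 4096-token window bounds the prompt and the generated continuation together, so long inputs must be trimmed before generation. A minimal pure-Python sketch of that bookkeeping, operating on illustrative integer token IDs rather than this repo's tokenizer (`fit_to_context` is a hypothetical helper):

```python
CONTEXT_LENGTH = 4096  # Llama 2's maximum context, per the model card

def fit_to_context(token_ids, max_new_tokens=256, context_length=CONTEXT_LENGTH):
    """Reserve room for generation, dropping the oldest prompt tokens.

    Keeps at most `context_length - max_new_tokens` trailing tokens so
    that prompt + generated tokens never exceed the context window.
    """
    budget = context_length - max_new_tokens
    if len(token_ids) <= budget:
        return list(token_ids)
    return list(token_ids)[-budget:]
```

Truncating from the front preserves the most recent tokens, which is usually the right choice for continuation-style prompts.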
Intended Use Cases
This pretrained Llama 2 model is designed for commercial and research applications primarily in English. It can be adapted for a wide variety of natural language generation tasks. For dialogue use cases, Meta also provides fine-tuned Llama-2-Chat models, which are optimized for assistant-like chat functionality.
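Because this is the pretrained (non-chat) variant, prompts work best as plain text to be continued rather than instructions. A generation sketch using the Transformers `pipeline` API, assuming `transformers` and `torch` are installed and the gated repo is accessible; the prompt and decoding parameters are illustrative:

```python
def complete(prompt: str, max_new_tokens: int = 64) -> str:
    """Continue `prompt` with the pretrained base model."""
    from transformers import pipeline  # deferred; requires `transformers`

    generator = pipeline("text-generation", model="sharpbai/Llama-2-7b-hf")
    # Greedy decoding here for reproducibility; sampling parameters
    # (temperature, top_p) can be passed for more varied completions.
    out = generator(prompt, max_new_tokens=max_new_tokens, do_sample=False)
    return out[0]["generated_text"]
```

For assistant-style exchanges, the corresponding Llama-2-Chat checkpoints are the better starting point.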