WeiNyn/Llama2-7b-hf

Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Context Length: 4k · Architecture: Transformer

WeiNyn/Llama2-7b-hf is a 7 billion parameter pretrained generative text model developed by Meta, converted to the Hugging Face Transformers format. Part of the Llama 2 family, it uses an optimized transformer architecture and was trained on 2 trillion tokens of publicly available data. The model is intended for commercial and research use in English for natural language generation tasks, serving as a foundation for further fine-tuning and adaptation.


WeiNyn/Llama2-7b-hf: A Foundation Model from Meta

This model is the 7 billion parameter pretrained variant of Meta's Llama 2 family, adapted for Hugging Face Transformers. Llama 2 models are generative text models built on an optimized transformer architecture, trained on a new mix of publicly available online data totaling 2 trillion tokens. The 7B model has a context length of 4k tokens.
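Because the checkpoint is in the Hugging Face Transformers format, it can be loaded with the standard `AutoModelForCausalLM` / `AutoTokenizer` APIs. A minimal generation sketch follows; it assumes `transformers` and `torch` are installed and that the checkpoint is accessible (roughly a 13 GB download in FP16). The prompt and sampling settings are illustrative.

```python
# Minimal text-generation sketch for WeiNyn/Llama2-7b-hf using the
# Hugging Face Transformers AutoModel APIs. Assumes `transformers` and
# `torch` are installed; the checkpoint download is large.

MODEL_ID = "WeiNyn/Llama2-7b-hf"


def generate(prompt: str, max_new_tokens: int = 64) -> str:
    # Imports are kept local so the module can be imported without the
    # heavyweight dependencies present.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.float16,  # fits in ~14 GB of GPU memory
        device_map="auto",          # place layers on available devices
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("The capital of France is"))
```

As a pretrained base model (not a chat-tuned variant), it continues text rather than following instructions, so prompts should be phrased as completions.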

Key Capabilities & Features

  • Foundation Model: Designed for a wide range of natural language generation tasks, serving as a base for further fine-tuning.
  • Performance: Shows improved academic benchmark scores over its Llama 1 counterpart, including 45.3 on MMLU and 14.6 on Math for the 7B variant.
  • Training Data: Pretrained on 2 trillion tokens with a data cutoff of September 2022.
  • Commercial Use: Licensed for both commercial and research applications, primarily in English.

Intended Use Cases

  • Natural Language Generation: Can be prompted or fine-tuned for a variety of text generation tasks.
  • Research: Suitable for academic and commercial research into large language models.
  • English-centric Applications: Optimized for use with English language inputs and outputs.
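Since the model is positioned as a base for adaptation, one common route is parameter-efficient fine-tuning. The sketch below shows a hypothetical LoRA setup using the `peft` library (an assumption, not part of this model card); the rank, alpha, and target modules are illustrative defaults, not tuned values.

```python
# Hypothetical adaptation sketch: attaching LoRA adapters to the base
# model with the `peft` library. Assumes `peft`, `transformers`, and
# `torch` are installed; hyperparameters are illustrative only.

MODEL_ID = "WeiNyn/Llama2-7b-hf"


def build_lora_model():
    from peft import LoraConfig, get_peft_model
    from transformers import AutoModelForCausalLM

    base = AutoModelForCausalLM.from_pretrained(MODEL_ID)

    # Rank-8 adapters on the attention query/value projections: only a
    # small fraction of weights become trainable while the 7B base
    # parameters stay frozen.
    config = LoraConfig(
        r=8,
        lora_alpha=16,
        target_modules=["q_proj", "v_proj"],
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(base, config)
    model.print_trainable_parameters()
    return model


if __name__ == "__main__":
    build_lora_model()
```

The adapted weights can then be trained with any standard causal-LM training loop (e.g. the Transformers `Trainer`) on task-specific English text.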