meta-llama/Meta-Llama-3-8B

Hugging Face
Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 8K · Published: Apr 17, 2024 · License: llama3 · Architecture: Transformer · 6.5K · Gated · Warm

Meta-Llama-3-8B is an 8 billion parameter, auto-regressive language model developed by Meta, utilizing an optimized transformer architecture with Grouped-Query Attention (GQA) for improved inference. Trained on over 15 trillion tokens of publicly available data with an 8k context length, this model is designed for commercial and research use in English. It excels in general language understanding, knowledge reasoning, and reading comprehension, making it suitable for a wide range of natural language generation tasks.


Meta-Llama-3-8B: An Advanced 8B Parameter LLM

Meta-Llama-3-8B is a powerful 8 billion parameter large language model developed by Meta, part of the Llama 3 family. It features an optimized transformer architecture and incorporates Grouped-Query Attention (GQA) to enhance inference scalability. Trained on an extensive dataset of over 15 trillion tokens from publicly available online sources, this model offers an 8k context length and is designed for robust performance in various natural language generation tasks.
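As a quick orientation, here is a minimal sketch of loading and sampling from the model with the Hugging Face Transformers library. It assumes you have been granted access to the gated repository and are logged in with a Hugging Face token; the prompt and generation settings are illustrative.

```python
# Minimal sketch: load the gated base model and run plain text completion.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 keeps the 8B weights at roughly 16 GB
    device_map="auto",           # place layers on available GPU(s) automatically
)

# The base (non-instruct) model is a completion model, so prompt it with
# plain text rather than a chat format.
inputs = tokenizer("The three primary colors are", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```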

Key Capabilities

  • General Language Understanding: Demonstrates strong performance across benchmarks like MMLU (66.6), AGIEval English (45.9), and ARC-Challenge (78.6).
  • Knowledge Reasoning: Achieves 78.5 on TriviaQA-Wiki, indicating solid knowledge retrieval abilities.
  • Reading Comprehension: Scores 76.4 on SQuAD and 75.7 on BoolQ, showcasing proficiency in understanding text.
  • Optimized for Dialogue (Instruction-tuned variant): The instruction-tuned version is specifically optimized for assistant-like chat use cases and outperforms many open-source chat models on common industry benchmarks (see the chat-template sketch after this list).
  • Safety and Helpfulness: Developed with a focus on optimizing helpfulness and safety, incorporating supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF).
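
For the dialogue use case above, the separate Meta-Llama-3-8B-Instruct checkpoint ships a chat template; the base model on this page does not. The sketch below assumes that Instruct variant and follows the standard Transformers chat-template flow; the terminator handling mirrors Meta's published usage example.

```python
# Hedged sketch: chat-style generation with the Instruct variant,
# not the base checkpoint this page describes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a concise, helpful assistant."},
    {"role": "user", "content": "Explain Grouped-Query Attention in one sentence."},
]

# apply_chat_template wraps the messages in Llama 3's special-token format
# and appends the assistant header so generation starts at the reply.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Llama 3 Instruct uses <|eot_id|> to end a turn, so stop on either token.
terminators = [tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>")]
outputs = model.generate(input_ids, max_new_tokens=128, eos_token_id=terminators)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```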

Good for

  • Commercial and Research Applications: Intended for a broad spectrum of uses in English-speaking contexts.
  • Natural Language Generation: Suitable for tasks requiring text generation, summarization, and content creation.
  • Assistant-like Chatbots: The instruction-tuned variant is particularly well-suited for building conversational AI agents.
  • Developers requiring a compact yet powerful model: The 8B parameter size balances performance and efficiency across a wide range of deployments (see the quantized-loading sketch after this list).
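
To illustrate the efficiency point, the following sketch loads the model in 4-bit precision via the bitsandbytes integration in Transformers, which is one common way to fit the 8B checkpoint on a single consumer GPU. The memory figure is an estimate, not a measured value.

```python
# Sketch of memory-constrained deployment: 4-bit NF4 quantization brings the
# 8B weights down to roughly 5-6 GB of VRAM (estimate, plus activation overhead).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # NF4 is the recommended 4-bit data type
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for quality
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",
    quantization_config=quant_config,
    device_map="auto",
)
```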

Popular Sampler Settings

The top 3 parameter combinations used by Featherless users for this model draw on the following sampler parameters (the per-config values are shown on the interactive page):

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
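
For context, here is a sketch of passing these samplers in a request. It assumes Featherless exposes an OpenAI-compatible completions endpoint at the URL shown; every value below is a hypothetical placeholder, not one of the actual user configs, since those values are only visible on the live page.

```python
# Illustrative only: all sampler values are hypothetical placeholders, and the
# base_url is an assumption about Featherless's OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed endpoint
    api_key="YOUR_FEATHERLESS_API_KEY",
)

response = client.completions.create(
    model="meta-llama/Meta-Llama-3-8B",
    prompt="Once upon a time",
    max_tokens=128,
    temperature=0.8,        # hypothetical value
    top_p=0.95,             # hypothetical value
    frequency_penalty=0.0,  # hypothetical value
    presence_penalty=0.0,   # hypothetical value
    extra_body={
        # Samplers outside the OpenAI schema pass through extra_body.
        "top_k": 40,                # hypothetical value
        "repetition_penalty": 1.1,  # hypothetical value
        "min_p": 0.05,              # hypothetical value
    },
)
print(response.choices[0].text)
```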