meta-llama/Meta-Llama-3-8B
Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quantization: FP8 · Context Length: 8k · Published: Apr 17, 2024 · License: llama3 · Architecture: Transformer · 6.5K · Gated · Warm

Meta-Llama-3-8B is an 8 billion parameter, auto-regressive language model developed by Meta, utilizing an optimized transformer architecture with Grouped-Query Attention (GQA) for improved inference. Trained on over 15 trillion tokens of publicly available data with an 8k context length, this model is designed for commercial and research use in English. It excels in general language understanding, knowledge reasoning, and reading comprehension, making it suitable for a wide range of natural language generation tasks.
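GQA's inference benefit comes largely from a smaller key/value cache: query heads share a reduced set of KV heads, so far fewer activations must be cached per generated token. A rough back-of-the-envelope sketch, using the published Llama-3-8B shape (32 layers, 32 query heads, 8 KV heads, head dimension 128) and assuming 1 byte per cached value under the FP8 quantization listed above:

```python
def kv_cache_bytes_per_token(n_layers: int, n_kv_heads: int, head_dim: int,
                             bytes_per_value: int = 1) -> int:
    """Bytes of K and V that must be cached per generated token."""
    # Factor of 2 covers both the key and the value tensors.
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_value

# GQA: 8 KV heads shared across the 32 query heads.
gqa = kv_cache_bytes_per_token(n_layers=32, n_kv_heads=8, head_dim=128)

# Hypothetical full multi-head attention: one KV head per query head.
mha = kv_cache_bytes_per_token(n_layers=32, n_kv_heads=32, head_dim=128)

print(gqa)        # 65536 bytes (64 KiB) per token
print(mha)        # 262144 bytes (256 KiB) per token
print(mha / gqa)  # 4.0 — GQA cuts the KV cache 4x for this shape
```

At the model's full 8k context, that 4x reduction is the difference between roughly 512 MiB and 2 GiB of KV cache per sequence, which is why GQA is called out as an inference optimization.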


Popular Sampler Settings

The three most common sampler configurations used by Featherless users for this model tune the following parameters:

temperature: scales the output distribution; lower values make generation more deterministic
top_p: nucleus sampling; samples from the smallest token set whose cumulative probability exceeds p
top_k: restricts sampling to the k most likely tokens
frequency_penalty: penalizes tokens in proportion to how often they have already appeared
presence_penalty: applies a flat penalty to any token that has appeared at all
repetition_penalty: scales down the logits of previously generated tokens
min_p: discards tokens whose probability falls below a fraction of the top token's probability
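The parameters above map directly onto an OpenAI-style completion request body. A minimal sketch, assuming an OpenAI-compatible endpoint that accepts all seven samplers; the values shown are hypothetical placeholders, not one of the actual top-3 combinations from this page:

```python
import json

# Hypothetical sampler values for illustration only.
sampler_config = {
    "temperature": 0.7,        # lower = more deterministic output
    "top_p": 0.9,              # nucleus sampling threshold
    "top_k": 40,               # keep only the 40 most likely tokens
    "frequency_penalty": 0.0,  # scales with how often a token has appeared
    "presence_penalty": 0.0,   # flat penalty once a token has appeared
    "repetition_penalty": 1.1, # >1.0 discourages repeating prior tokens
    "min_p": 0.05,             # drop tokens below 5% of the top token's probability
}

# Completion request body, using the model id as listed on this card.
request_body = {
    "model": "meta-llama/Meta-Llama-3-8B",
    "prompt": "Once upon a time",
    "max_tokens": 128,
    **sampler_config,
}

print(json.dumps(request_body, indent=2))
```

This body would be POSTed as JSON to the provider's completions endpoint; only the sampler keys shown are assumed here, and any endpoint URL or authentication details are left out.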