openlm-research/open_llama_7b

Visibility: Public
Parameters: 7B
Precision: FP8
Context length: 4096
Released: Jun 7, 2023
License: apache-2.0
Source: Hugging Face

OpenLLaMA is a 7-billion-parameter causal language model from openlm-research, developed as an open-source reproduction of Meta AI's LLaMA. It was trained on 1 trillion tokens from the RedPajama dataset, following LLaMA's training methodology and hyperparameters. The model performs comparably to the original LLaMA 7B and GPT-J 6B across a range of tasks, offering a permissively licensed alternative for general-purpose language generation.
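As a rough sketch of how the model might be used, the snippet below loads the checkpoint with the Hugging Face `transformers` library and runs a short generation. The model id comes from this card; the prompt, generation settings, and helper function are illustrative assumptions, and the `transformers`, `sentencepiece`, and `torch` packages are assumed to be installed.

```python
# Illustrative sketch: loading OpenLLaMA 7B via Hugging Face transformers.
# Generation settings and the helper below are assumptions, not part of the card.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

MODEL_ID = "openlm-research/open_llama_7b"


def generate(prompt: str, max_new_tokens: int = 32) -> str:
    """Download the checkpoint (roughly 13 GB) and run greedy decoding."""
    tokenizer = LlamaTokenizer.from_pretrained(MODEL_ID)
    model = LlamaForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.float16, device_map="auto"
    )
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
    output = model.generate(input_ids, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("Q: What is the largest animal?\nA:"))
```

Note that a 7B model in float16 needs on the order of 14 GB of accelerator memory; `device_map="auto"` lets `transformers` place layers across available devices.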
