OpenLLaMA is a 7-billion-parameter causal language model from openlm-research, an open-source reproduction of Meta AI's LLaMA. It was trained on 1 trillion tokens from the RedPajama dataset, following LLaMA's training methodology and hyperparameters. Across a range of evaluation tasks it performs comparably to the original LLaMA 7B and to GPT-J 6B, making it a permissively licensed alternative for general-purpose language generation.
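As a sketch of how such a model is typically used for generation, the snippet below loads it through the Hugging Face `transformers` library. The repo id `openlm-research/open_llama_7b` and the `use_fast=False` tokenizer setting are assumptions based on the public OpenLLaMA release, not statements from this card; loading the full 7B checkpoint requires a sizable download and GPU memory, so the call is guarded behind `__main__`.

```python
MODEL_ID = "openlm-research/open_llama_7b"  # assumed Hugging Face repo id


def generate(prompt: str, max_new_tokens: int = 32) -> str:
    """Generate a continuation of `prompt` with OpenLLaMA 7B."""
    # Heavy imports are kept local so this module can be inspected
    # without torch/transformers installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # use_fast=False: the OpenLLaMA release notes recommend the slow
    # (sentencepiece) tokenizer to avoid mis-tokenization (assumption
    # carried over from the upstream model card, not from this text).
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, use_fast=False)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.float16, device_map="auto"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(out[0], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("The capital of France is"))
```

Because OpenLLaMA reuses the LLaMA architecture, the generic `AutoModelForCausalLM` entry point resolves to the standard LLaMA implementation; no custom modeling code is needed.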