codellama/CodeLlama-34b-hf

Text Generation · Concurrency Cost: 2 · Model Size: 34B · Quant: FP8 · Ctx Length: 32k · Published: Aug 24, 2023 · License: llama2 · Architecture: Transformer · Open Weights

CodeLlama-34b-hf is the 34-billion-parameter base model from Meta's Code Llama family: an auto-regressive language model built on an optimized transformer architecture. Designed for general code synthesis and understanding, it supports code completion and is intended for commercial and research use in English and relevant programming languages, serving as a foundation for a range of code-related tasks.


Code Llama 34B Base Model

This is the base 34 billion parameter version of Meta's Code Llama, an auto-regressive language model utilizing an optimized transformer architecture. It is part of a larger family of models, including 7B, 13B, and 70B variants, as well as specialized Python and Instruct versions.

Key Capabilities

  • General Code Synthesis and Understanding: Designed as a foundational model for a wide range of coding tasks.
  • Code Completion: Explicitly supports code completion functionalities.
  • Optimized Transformer Architecture: Built on an efficient transformer design.
  • Commercial and Research Use: Intended for both commercial applications and research endeavors.
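As a base (non-instruct) model, it is used by prompting with a code prefix and letting it continue. A minimal sketch of greedy code completion via the Hugging Face `transformers` library (not part of this model card; loading the 34B checkpoint in fp16 needs roughly 70 GB of accelerator memory, so parameters like `max_new_tokens` and the example prompt are illustrative assumptions):

```python
# Sketch: greedy code completion with codellama/CodeLlama-34b-hf
# via Hugging Face transformers. Hardware requirements are substantial.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "codellama/CodeLlama-34b-hf"


def strip_prompt(prompt: str, decoded: str) -> str:
    """Return only the continuation: causal LMs echo the prompt in their output."""
    return decoded[len(prompt):] if decoded.startswith(prompt) else decoded


def complete(prompt: str, max_new_tokens: int = 64) -> str:
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.float16, device_map="auto"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    # Greedy decoding (do_sample=False) gives deterministic completions.
    out = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    return strip_prompt(prompt, tokenizer.decode(out[0], skip_special_tokens=True))


if __name__ == "__main__":
    print(complete("def fibonacci(n):\n    "))
```

Because this is the base variant, it does not follow natural-language instructions reliably; frame requests as code to be continued rather than as questions.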

Intended Use Cases

This base model is suitable for adaptation to a variety of code synthesis and understanding tasks. It is intended for use in English and relevant programming languages, providing a robust foundation for developers and researchers working on code generation and analysis. For instruction-following or Python-specific tasks, consider the Code Llama - Instruct or Code Llama - Python variants, respectively. The model was trained between January and July 2023 and is a static model trained on an offline dataset.