emre/llama-2-13b-code-chat

**Text Generation** · Concurrency cost: 1 · Model size: 13B · Quantization: FP8 · Context length: 4k · License: apache-2.0 · Architecture: Transformer · Open weights

emre/llama-2-13b-code-chat is a 13 billion parameter Llama 2 model, fine-tuned from llama-2-13b-chat-hf using QLoRA on the mlabonne/CodeLlama-2-20k dataset. This model is specifically designed for code generation and understanding, serving as a Llama 2 version of CodeAlpaca. It excels at generating Python code and is primarily intended for educational purposes, with a context length of 4096 tokens.


Overview

emre/llama-2-13b-code-chat is a 13 billion parameter model based on the Llama 2 architecture, specifically fine-tuned from llama-2-13b-chat-hf. It represents a Llama 2 adaptation of the CodeAlpaca project, focusing on code-related tasks.

Key Capabilities

  • Code Generation: Produces code from natural-language prompts, demonstrated chiefly with Python snippets.
  • Code Understanding: Designed to interpret and respond to code-related instructions.
  • Fine-tuned for Code: Utilizes QLoRA fine-tuning on the mlabonne/CodeLlama-2-20k dataset, which is tailored for code-centric learning.

Training Details

The model was trained using QLoRA on the mlabonne/CodeLlama-2-20k dataset, leveraging a Colab Pro+ environment. Its development was primarily for educational applications.
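QLoRA training of this kind is typically set up with the `peft` and `bitsandbytes` libraries: the base model is loaded in 4-bit precision and small low-rank adapters are trained on top. The following is a minimal sketch of that setup, not the author's actual training script; the hyperparameter values (`r`, `lora_alpha`, dropout, target modules) are illustrative assumptions, since the card does not list them.

```python
# Hypothetical QLoRA setup sketch; hyperparameters are illustrative,
# not taken from the model card.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# QLoRA: quantize the frozen base model to 4-bit NormalFloat.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Base model named in the card (llama-2-13b-chat-hf).
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-chat-hf",
    quantization_config=bnb_config,
    device_map="auto",
)

# Low-rank adapters; only these weights are updated during fine-tuning.
lora_config = LoraConfig(
    r=16,                                  # assumed rank
    lora_alpha=32,                         # assumed scaling
    lora_dropout=0.05,                     # assumed dropout
    target_modules=["q_proj", "v_proj"],   # assumed target layers
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
```

Training would then proceed with a standard `transformers` `Trainer` (or `trl`'s `SFTTrainer`) over the mlabonne/CodeLlama-2-20k instruction dataset.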

Usage

This model can be used from Python via the transformers library for text-generation tasks, particularly generating code from natural-language prompts. The provided example demonstrates generating Python code that creates an array of numbers.
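A minimal sketch of such an integration is shown below, assuming the fine-tune follows the Llama 2 chat prompt convention (`[INST] ... [/INST]`) inherited from its base model; the `build_prompt` and `generate_code` helper names, and the sampling parameters, are illustrative, not part of the card.

```python
def build_prompt(instruction: str) -> str:
    """Wrap an instruction in the Llama 2 chat format
    (assumed here, since the fine-tune derives from llama-2-13b-chat-hf)."""
    return f"[INST] {instruction} [/INST]"

def generate_code(instruction: str, max_new_tokens: int = 256) -> str:
    """Generate code with the standard transformers text-generation pipeline.
    Requires transformers installed and the model weights available."""
    from transformers import pipeline  # imported lazily; heavy dependency
    generator = pipeline("text-generation", model="emre/llama-2-13b-code-chat")
    out = generator(
        build_prompt(instruction),
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=0.2,  # illustrative: low temperature for code
    )
    return out[0]["generated_text"]
```

For example, `generate_code("Write a Python function that creates an array of numbers from 1 to 10")` mirrors the array-generation example mentioned above; note the 13B model requires substantial GPU memory to run.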