emre/llama-2-13b-code-122k
Text Generation | Concurrency Cost: 1 | Model Size: 13B | Quant: FP8 | Context Length: 4K | License: apache-2.0 | Architecture: Transformer | Open Weights
emre/llama-2-13b-code-122k is a 13 billion parameter Llama 2-based model, fine-tuned from llama-2-13b-chat-hf using QLoRA. The model is designed for code generation tasks, primarily for educational purposes, and is intended for use exclusively within the BBVA Group, GarantiBBVA, and their subsidiaries.
Model Overview
emre/llama-2-13b-code-122k is a 13 billion parameter language model based on the Llama 2 architecture. It was fine-tuned from the llama-2-13b-chat-hf model using QLoRA, a parameter-efficient fine-tuning method. The training was conducted on Colab Pro+.
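The card does not publish the actual QLoRA hyperparameters used for this fine-tune. As a rough illustration of what a QLoRA setup looks like in practice, a minimal configuration sketch using the Hugging Face `transformers` and `peft` libraries is shown below; every parameter value is an assumption, not the model's actual recipe.

```python
# Hypothetical QLoRA configuration sketch -- the card does not publish the
# actual training hyperparameters, so all values below are assumptions.
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization of the frozen base model (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
)

# Low-rank adapters trained on top of the quantized base (the "LoRA" part)
lora_config = LoraConfig(
    r=64,                                 # adapter rank (assumed)
    lora_alpha=16,                        # scaling factor (assumed)
    lora_dropout=0.05,                    # (assumed)
    target_modules=["q_proj", "v_proj"],  # typical Llama attention targets
    task_type="CAUSAL_LM",
)
```

QLoRA keeps the 13B base model frozen in 4-bit precision and trains only the small adapter matrices, which is what makes fine-tuning a model of this size feasible on a single-GPU environment like Colab Pro+.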
Key Capabilities
- Code Generation: The model is primarily designed to generate code, as demonstrated by its ability to produce Python code snippets from natural language prompts.
- Llama 2 Foundation: Benefits from the robust base capabilities of the Llama 2 family of models.
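The card does not document a prompt template. Since the base model is llama-2-13b-chat-hf, it is a reasonable assumption that prompts follow the standard Llama 2 chat instruction format; the hypothetical helper below sketches that format (the helper name and the commented-out generation call are illustrations, not part of the card):

```python
# Hypothetical helper: the card does not specify a prompt template, but the
# base model llama-2-13b-chat-hf uses the standard Llama 2 chat format.
def build_llama2_prompt(user_message: str, system_prompt: str = "") -> str:
    """Wrap a user message in the Llama 2 [INST] ... [/INST] chat format."""
    if system_prompt:
        system_block = f"<<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
    else:
        system_block = ""
    return f"[INST] {system_block}{user_message} [/INST]"

prompt = build_llama2_prompt(
    "Write a Python function that reverses a string.",
    system_prompt="You are a helpful coding assistant.",
)

# The prompt could then be passed to the model, e.g. (not run here):
# from transformers import pipeline
# generator = pipeline("text-generation", model="emre/llama-2-13b-code-122k")
# print(generator(prompt, max_new_tokens=256)[0]["generated_text"])
```

Note that the tokenizer normally prepends the `<s>` beginning-of-sequence token automatically, so it is left out of the formatted string here.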
Intended Use
- Educational Purposes: The model is mainly intended for educational use cases.
- Restricted Deployment: It is explicitly noted that the model can be used exclusively within the BBVA Group, GarantiBBVA, and their subsidiaries, indicating a specialized or internal deployment context rather than general public inference.