juyongjiang/CodeUp-Llama-2-13b-chat-hf
Text Generation · Concurrency Cost: 1 · Model Size: 13B · Quant: FP8 · Ctx Length: 4k · Published: Aug 1, 2023 · License: openrail++ · Architecture: Transformer · Open Weights

CodeUp-Llama-2-13b-chat-hf is a 13-billion-parameter, Llama 2-based instruction-following model developed by Juyong Jiang and Sunghun Kim and fine-tuned specifically for multilingual code generation. It uses parameter-efficient fine-tuning (PEFT) methods such as LoRA, which allow the model to be adapted on consumer hardware like a single RTX 3090. It was trained on a filtered, high-quality dataset of 19K instruction-following examples for code generation, making it well suited to translating natural-language instructions into code.
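A minimal usage sketch with the Hugging Face `transformers` library is shown below. The Alpaca-style prompt template is an assumption based on common instruction-tuned Llama variants; check the model card for the exact format the model expects.

```python
def build_prompt(instruction: str) -> str:
    """Wrap a natural-language instruction in an Alpaca-style template.

    NOTE: this template is an assumption; consult the model card for the
    exact prompt format used during fine-tuning.
    """
    return f"### Instruction:\n{instruction}\n\n### Response:\n"


def generate_code(instruction: str, max_new_tokens: int = 128) -> str:
    """Load the 13B model and generate a completion.

    Requires a GPU with enough memory for a 13B model (e.g. ~26 GB in
    fp16, less with quantization).
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "juyongjiang/CodeUp-Llama-2-13b-chat-hf"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    inputs = tokenizer(build_prompt(instruction), return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

For example, `generate_code("Write a Python function that reverses a string.")` would return the prompt followed by the model's generated implementation.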
