skar01/llama2-coder-full
Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4K · License: apache-2.0 · Architecture: Transformer · Open Weights
skar01/llama2-coder-full is a 7-billion-parameter Llama 2 model fine-tuned by skar01 using the QLoRA method via the PEFT library. It was trained on the CodeAlpaca 20K instruction dataset, which optimizes it for code generation and programming-related instruction following. The model generates code from natural-language instructions, making it a specialized option for developer-facing tasks.
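Since the model was tuned on the CodeAlpaca 20K dataset, prompts in the standard Alpaca instruction layout are a reasonable starting point. The sketch below builds such a prompt in plain Python; the exact template this particular checkpoint expects is an assumption, not documented behavior.

```python
# Sketch: formatting a coding request in the Alpaca instruction layout,
# which the CodeAlpaca 20K dataset uses. Whether skar01/llama2-coder-full
# expects exactly this template is an assumption.

PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Wrap a natural-language coding instruction in the Alpaca template."""
    return PROMPT_TEMPLATE.format(instruction=instruction)

prompt = build_prompt("Write a Python function that reverses a string.")
print(prompt)
```

The resulting string would be passed as-is to the model's tokenizer; generation is then typically stopped at the next `###` marker so the reply contains only the response section.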