Nondzu/Mistral-7B-codealpaca-lora
Text generation · Concurrency cost: 1 · Model size: 7B · Quantization: FP8 · Context length: 8k · Published: Oct 25, 2023 · License: apache-2.0 · Architecture: Transformer · Open weights
Nondzu/Mistral-7B-codealpaca-lora is a 7-billion-parameter language model, fine-tuned from Mistral-7B-Instruct-v0.1 and optimized for code generation. It uses the Alpaca prompt template and was trained on the theblackcat102/evol-codealpaca-v1 dataset. It shows improved performance over its base model on code benchmarks such as HumanEval+, making it a suitable coding companion.
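Since the model expects the Alpaca prompt template, a minimal sketch of querying it with Hugging Face `transformers` might look like the following (the model id comes from this card; the template wording and generation settings are common Alpaca-style assumptions, not taken verbatim from the model's repo):

```python
MODEL_ID = "Nondzu/Mistral-7B-codealpaca-lora"

def alpaca_prompt(instruction: str) -> str:
    """Wrap an instruction in an Alpaca-style template (assumed wording)."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

def generate(instruction: str, max_new_tokens: int = 256) -> str:
    """Greedy-decode a completion; transformers is imported lazily so the
    prompt helper above stays usable without the heavy dependency."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(alpaca_prompt(instruction), return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Write a Python function that reverses a string."))
```

Keeping the instruction inside the `### Instruction:` / `### Response:` scaffold matters because the fine-tuning data followed that layout; free-form prompts may degrade output quality.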