Plaban81/codegen-finetuned-python
TEXT GENERATION · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold
Plaban81/codegen-finetuned-python is a 7-billion-parameter, Llama-2-based causal language model fine-tuned by Plaban81 for Python code generation. It was fine-tuned with QLoRA (4-bit quantization) on the python_code_instructions_18k_alpaca dataset, and specializes in generating Python code from natural-language instructions.
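Since the fine-tuning dataset (python_code_instructions_18k_alpaca) follows the Alpaca instruction format, prompts presumably use the Alpaca template. A minimal usage sketch with the Hugging Face `transformers` library — the prompt template and generation settings here are assumptions, not documented by the model author:

```python
MODEL_ID = "Plaban81/codegen-finetuned-python"


def build_prompt(instruction: str) -> str:
    # Alpaca-style template (assumed from the fine-tuning dataset's format)
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )


def generate_code(instruction: str, max_new_tokens: int = 256) -> str:
    # Imported lazily so the prompt helper above works without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(build_prompt(instruction), return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )


if __name__ == "__main__":
    print(generate_code("Write a Python function that reverses a string."))
```

If the repository ships a chat template or a different prompt format, prefer that over the assumed template above.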