Plaban81/codegen-finetuned-python

Text generation · Model size: 7B · Quantization: FP8 · Context length: 4k · License: apache-2.0 · Architecture: Transformer · Open weights

Plaban81/codegen-finetuned-python is a 7-billion-parameter causal language model based on Llama-2, fine-tuned by Plaban81 for Python code generation. Trained with QLoRA using 4-bit quantization, the model specializes in generating Python code from natural-language instructions. It was fine-tuned on the python_code_instructions_18k_alpaca dataset, making it well suited to code-related tasks.


Overview

This model, Plaban81/codegen-finetuned-python, is a 7-billion-parameter variant of Meta's Llama-2 architecture. It has been fine-tuned with the QLoRA method, using 4-bit quantization and the PEFT library, to generate Python code. Training used the python_code_instructions_18k_alpaca dataset, which pairs problem descriptions with Python code solutions in an Alpaca-style instruction format.
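Alpaca-style datasets wrap each example in a fixed instruction template, and prompting the fine-tuned model the same way generally yields the best results. Below is a minimal sketch of such a prompt builder; the exact wording of the training template is an assumption, so check the dataset card for the canonical text:

```python
def build_alpaca_prompt(instruction: str, context: str = "") -> str:
    """Assemble an Alpaca-style prompt; the model completes the Response section."""
    if context:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{context}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_alpaca_prompt("Write a Python function that reverses a string.")
print(prompt)
```

The generated text that follows the `### Response:` marker is the model's code answer; everything before it is fixed scaffolding.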

Key Capabilities

  • Python Code Generation: Generates Python code from natural-language instructions, the task it was fine-tuned for.
  • Instruction Following: Fine-tuned on an instruction-based dataset, enabling it to understand and respond to coding prompts effectively.
  • Efficient Deployment: Trained with 4-bit QLoRA, making it suitable for environments with limited computational resources.

Good for

  • Developers seeking an efficient, smaller-scale model for Python code generation tasks.
  • Applications requiring code snippets or full function implementations in Python.
  • Experimentation with fine-tuned Llama-2 models for specialized programming tasks.