Tuhin20/CodeLlama-7b-Instruct-FineTuned-JavaPython

Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Published: Oct 23, 2025 · Architecture: Transformer

Tuhin20/CodeLlama-7b-Instruct-FineTuned-JavaPython is a 7-billion-parameter instruction-tuned CodeLlama model developed by Tuhin20. Fine-tuned with Unsloth on 85,000 Python and Java programming tasks, it specializes in generating code for those two languages. It was trained with QLoRA (4-bit quantization) and targets code generation within a 2048-token context length.


Overview

This model, Tuhin20/CodeLlama-7b-Instruct-FineTuned-JavaPython, is a specialized version of the codellama/CodeLlama-7b-Instruct-hf base model. Developed by Tuhin20, it was fine-tuned with Unsloth and the TRL SFTTrainer on a dataset of 85,000 Python and Java programming tasks.

Key Capabilities

  • Specialized Code Generation: Excels at generating code for both Java and Python programming languages.
  • Instruction Following: Designed to understand and execute programming instructions effectively due to its instruction-tuned base.
  • Efficient Deployment: Fine-tuned with QLoRA (4-bit quantization) and available in Q4_K_M GGUF format, making it suitable for resource-efficient deployment and inference.
  • Context Handling: Processes inputs within a context length of 2048 tokens, adequate for many coding tasks.
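Because inputs are capped at 2048 tokens, longer source files may need trimming before inference. A minimal sketch of a pre-check, assuming a rough heuristic of ~4 characters per token (the exact budget depends on the model's actual tokenizer):

```python
def fits_context(text: str, max_tokens: int = 2048, chars_per_token: int = 4) -> bool:
    """Rough check whether text fits the model's 2048-token window.

    chars_per_token=4 is a crude heuristic; use the real tokenizer
    for an exact count before relying on this in production.
    """
    return len(text) // chars_per_token <= max_tokens


def truncate_to_context(text: str, max_tokens: int = 2048, chars_per_token: int = 4) -> str:
    """Keep the tail of the text (the most recent code) within the estimated budget."""
    budget = max_tokens * chars_per_token
    return text if len(text) <= budget else text[-budget:]
```

Keeping the tail rather than the head is a common choice for code completion, since the lines nearest the cursor usually matter most.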

Good For

  • Developers needing assistance with Java and Python code generation.
  • Applications requiring a lightweight yet capable code-focused LLM.
  • Environments where resource-efficient inference is crucial, thanks to its quantized format.