anuraagkalvani/tally-qwen-2.5-coder

Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Apr 26, 2026 · License: apache-2.0 · Architecture: Transformer

anuraagkalvani/tally-qwen-2.5-coder is a 7.6-billion-parameter, Qwen2.5-based, instruction-tuned causal language model developed by anuraagkalvani. It is fine-tuned specifically for coding tasks, using Unsloth and Hugging Face's TRL library for accelerated training. It excels at code generation and understanding, making it well suited to developer-centric applications.


Model Overview

anuraagkalvani/tally-qwen-2.5-coder is a 7.6-billion-parameter instruction-tuned language model based on the Qwen2.5 architecture. Developed by anuraagkalvani, it was fine-tuned from unsloth/qwen2.5-coder-7b-instruct-bnb-4bit.
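A minimal sketch of loading the model with the standard Hugging Face `transformers` API. The repo id comes from this page; everything else (lazy imports, `device_map`, dtype handling) is an illustrative assumption, not something the model card specifies.

```python
# Hedged sketch: load tokenizer and model via the standard transformers API.
# Requires `transformers`, `torch`, and `accelerate` to be installed.


def load_tally_coder(model_id: str = "anuraagkalvani/tally-qwen-2.5-coder"):
    """Return (tokenizer, model) for the given Hub repo id."""
    # Imported lazily so the helper can be defined without the libraries present.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype="auto",   # use the precision the weights were published in
        device_map="auto",    # spread layers across available devices
    )
    return tokenizer, model
```

Downloading a 7.6B model is a sizable one-time cost, so in a service you would typically call this once at startup and reuse the returned objects.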

Key Capabilities

  • Code Generation: Optimized for generating and understanding code across various programming languages.
  • Instruction Following: Designed to accurately follow instructions for coding-related tasks.
  • Efficient Training: Fine-tuned with Unsloth and Hugging Face's TRL library, which substantially accelerated the training process.
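Qwen2.5 instruct models converse in the ChatML format, so instruction following depends on prompts being wrapped in its special tokens. The helper below builds such a prompt by hand as a sketch of what `tokenizer.apply_chat_template(...)` produces for this model family; the system message is a placeholder of our choosing, not text from the model card.

```python
# Sketch of the ChatML conversation format used by Qwen2.5 instruct models.
# Each turn is delimited by <|im_start|>{role} ... <|im_end|>, and the prompt
# ends with an open assistant turn for the model to complete.


def build_chatml_prompt(user_msg: str,
                        system_msg: str = "You are a helpful coding assistant.") -> str:
    return (
        f"<|im_start|>system\n{system_msg}<|im_end|>\n"
        f"<|im_start|>user\n{user_msg}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )


prompt = build_chatml_prompt("Write a Python function that reverses a string.")
```

In practice, prefer `tokenizer.apply_chat_template` with `add_generation_prompt=True`, which reads the template shipped with the tokenizer rather than hard-coding it.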

Good For

  • Software Development: Ideal for tasks such as writing code snippets, debugging, or explaining code.
  • Developer Tools: Can be integrated into IDEs or other development environments to assist programmers.
  • Educational Purposes: Useful for learning and teaching programming concepts through interactive code generation.

This model is released under the Apache-2.0 license, allowing for broad use and distribution.