Xwin-LM/XwinCoder-7B

Text Generation · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Context Length: 4K · Published: Nov 13, 2023 · License: llama2 · Architecture: Transformer

XwinCoder-7B is a 7-billion-parameter instruction-tuned code generation model developed by Xwin-LM, based on the CodeLlama architecture with a 4,096-token context length. It is optimized for programming tasks and shows strong results on coding benchmarks such as HumanEval and MBPP, making it a good fit for developers who need efficient, accurate code generation.


XwinCoder-7B Overview

XwinCoder-7B is an instruction-finetuned code generation model developed by Xwin-LM, built on the CodeLlama architecture. This 7-billion-parameter model is part of a series that also includes 13B and 34B variants, all tuned for coding tasks. The XwinCoder models were evaluated across a range of mainstream coding benchmarks, not just HumanEval, to demonstrate their real-world applicability.

Key Capabilities

  • Code Generation: Specialized in generating code across multiple programming languages.
  • Instruction Following: Fine-tuned to understand and execute coding instructions effectively.
  • Benchmark Performance: The XwinCoder-7B achieves a HumanEval pass@1 score of 63.8 and an MBPP pass@1 score of 57.4, indicating strong performance in code completion and problem-solving.
  • Scalable Performance: The larger XwinCoder-34B model demonstrates performance comparable to GPT-3.5-turbo on six coding benchmarks, highlighting the series' robust capabilities.
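A minimal sketch of running the model with the Hugging Face `transformers` library is below. The `"### Instruction:"` prompt wrapper is a generic placeholder, not the documented XwinCoder chat template; check the Xwin-LM repository for the official format before relying on it.

```python
def build_prompt(instruction: str) -> str:
    # Hypothetical instruction wrapper -- the exact XwinCoder chat template
    # is not documented here; consult the Xwin-LM repository for the real one.
    return f"### Instruction:\n{instruction}\n\n### Response:\n"


def generate_completion(instruction: str, max_new_tokens: int = 256) -> str:
    # Heavy call: downloads the model weights on first use and requires
    # `pip install transformers torch` (a GPU is strongly recommended).
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Xwin-LM/XwinCoder-7B"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    inputs = tokenizer(build_prompt(instruction), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```

Greedy decoding (`do_sample=False`) is used here for reproducibility; for more varied completions, enable sampling and set a temperature.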

Good for

  • Developers needing a powerful and efficient model for code generation tasks.
  • Integration into applications that require automated code completion or problem-solving.
  • Use cases demanding strong performance on standard coding benchmarks like HumanEval and MBPP.
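The pass@1 scores cited above are instances of the standard pass@k metric used by HumanEval and MBPP: the probability that at least one of k sampled completions solves a problem. The unbiased estimator, given n generated samples of which c are correct, can be sketched as:

```python
from math import comb


def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: n samples generated, c of them correct.

    Estimates the probability that at least one of k samples drawn
    (without replacement) from the n generated samples is correct.
    """
    if n - c < k:
        return 1.0  # fewer than k failures, so any k-draw includes a success
    return 1.0 - comb(n - c, k) / comb(n, k)


# With k=1 this reduces to the plain success rate c/n:
print(round(pass_at_k(10, 3, 1), 6))  # 0.3: 3 of 10 samples correct
```

For pass@1, providers often generate a single greedy sample per problem, in which case the score is simply the fraction of problems solved.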