Xwin-LM/XwinCoder-34B

Text Generation · Concurrency Cost: 2 · Model Size: 34B · Quant: FP8 · Ctx Length: 32k · Published: Nov 13, 2023 · License: llama2 · Architecture: Transformer · Open Weights

XwinCoder-34B is a 34 billion parameter instruction-tuned code generation model developed by Xwin-LM, based on the CodeLLaMA architecture. It features a 32768 token context length and excels at code generation tasks, achieving 74.2 pass@1 on HumanEval. This model demonstrates performance comparable to GPT-3.5-turbo across multiple coding benchmarks, making it suitable for various programming-related applications.


XwinCoder-34B: Instruction-Tuned Code Generation Model

XwinCoder-34B is a 34 billion parameter instruction-tuned model from Xwin-LM, built upon the CodeLLaMA architecture. It is specifically designed for robust code generation, offering a substantial 32768 token context window.
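Because the released checkpoint behaves like a standard causal language model, a quick way to try it locally is through the Hugging Face transformers library. The sketch below assumes the weights are published on the Hugging Face Hub under the repo id Xwin-LM/XwinCoder-34B and uses a plain instruction string; any official prompt template documented by Xwin-LM should be preferred in practice.

```python
# Hypothetical usage sketch: load the checkpoint (repo id assumed to be
# Xwin-LM/XwinCoder-34B) and generate code from a natural-language instruction.
# The model's exact instruction template is not reproduced here.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Xwin-LM/XwinCoder-34B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

prompt = "Write a Python function that returns the n-th Fibonacci number."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```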

Key Capabilities & Performance

  • High-Performance Code Generation: Achieves a strong 74.2 pass@1 on HumanEval, 64.8 pass@1 on MBPP, and 43.0 pass@5 on APPS-intro (the pass@k metric behind these figures is sketched after this list).
  • GPT-3.5-turbo Comparable: Demonstrates performance on par with GPT-3.5-turbo across six different coding benchmarks, highlighting its effectiveness in real-world coding scenarios.
  • Comprehensive Evaluation: Evaluated against mainstream coding capability leaderboards beyond just HumanEval, including MBPP, APPS, DS1000, and MT-Bench.
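For reference, pass@k scores like those above are commonly computed with the unbiased estimator introduced alongside HumanEval: generate n completions per problem, count the c that pass the hidden tests, and average 1 - C(n-c, k)/C(n, k) over problems. A minimal sketch of that estimator follows; it illustrates the metric only and is not Xwin-LM's evaluation harness.

```python
# Minimal sketch of the standard pass@k estimator, 1 - C(n-c, k) / C(n, k).
# This is the common definition behind HumanEval/MBPP scores, not Xwin-LM's code.
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """n = samples generated per problem, c = samples that pass, k = attempt budget."""
    if n - c < k:
        return 1.0
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Example: 20 samples for one problem, 15 pass the tests -> per-problem pass@1
print(pass_at_k(20, 15, 1))  # 0.75
```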

Good For

  • Code Generation Tasks: Ideal for developers requiring high-quality code generation from natural language instructions.
  • Benchmarking & Research: Useful for researchers and practitioners interested in evaluating and advancing code LLMs, with evaluation code provided by Xwin-LM.
  • Applications Requiring Strong Coding Abilities: Suitable for integration into tools or platforms that demand robust programming assistance (see the serving sketch below).
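For deployment, the FP8 quantization and 32k context length listed in the header suggest serving the model through an inference engine such as vLLM. The sketch below is one plausible setup under those assumptions, not an officially documented configuration; fp8 quantization support depends on the vLLM version and the GPU hardware in use.

```python
# Hypothetical serving sketch with vLLM, assuming the Hub repo id
# Xwin-LM/XwinCoder-34B; the fp8 setting and 32k window mirror the metadata
# above and may require a recent vLLM release and compatible hardware.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Xwin-LM/XwinCoder-34B",
    quantization="fp8",    # matches the FP8 quant listed for this deployment
    max_model_len=32768,   # full advertised context window
)

params = SamplingParams(temperature=0.0, max_tokens=256)
outputs = llm.generate(["Write a SQL query that returns the top 5 customers by revenue."], params)
print(outputs[0].outputs[0].text)
```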