uukuguy/speechless-codellama-orca-13b

Text generation · Model size: 13B · Quantization: FP8 · Context length: 4k · Published: Sep 4, 2023 · License: llama2 · Architecture: Transformer

The uukuguy/speechless-codellama-orca-13b is a 13 billion parameter language model fine-tuned from Meta's CodeLlama-13b-hf using the Orca dataset, designed for general code synthesis and understanding. With a 4,096-token context length, the model is optimized for instruction following in programming tasks and accepts prompts in the Alpaca instruction format. It excels at code completion and infilling, making it suitable for developers seeking an instruction-tuned code assistant.


Model Overview

The uukuguy/speechless-codellama-orca-13b is a 13 billion parameter language model built upon Meta's CodeLlama-13b-hf base model. It has been fine-tuned using the Orca dataset, enhancing its ability to follow instructions, particularly in programming contexts. The model accepts prompts in the Alpaca instruction format, making it accessible for various code-related tasks.
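Since the model expects the Alpaca instruction format, a small helper for building prompts can be useful. The sketch below uses the widely published Alpaca template; the exact preamble wording is an assumption based on the standard Alpaca format rather than something documented in this model card.

```python
def build_alpaca_prompt(instruction: str, input_text: str = "") -> str:
    """Format a request using the standard Alpaca instruction template.

    The template wording follows the common Alpaca convention; verify it
    against the model's own documentation before relying on it.
    """
    if input_text:
        # Variant with an additional input/context section.
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    # Instruction-only variant.
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_alpaca_prompt(
    "Complete the following Python function.",
    "def reverse_string(s):",
)
```

The resulting string can then be passed to the model (for example through the Hugging Face `transformers` text-generation pipeline), with the model's continuation read from everything after the `### Response:` marker.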

Key Capabilities

  • Code Completion: Generates relevant code snippets to complete partial code.
  • Infilling: Fills in missing parts of code based on context.
  • Instruction Following: Responds to programming instructions using the Alpaca format.

Performance Metrics

Evaluations on the Open LLM Leaderboard report an average score of 44.43 across the leaderboard's benchmark suite, with individual results including ARC (46.33), HellaSwag (67.71), MMLU (47.19), and TruthfulQA (46.66). These scores reflect its general language understanding and reasoning capabilities, complementing its code-specific fine-tuning.

Good For

  • Developers needing an instruction-tuned model for code generation.
  • Tasks requiring code completion and infilling.
  • Applications where adherence to the Alpaca instruction format is beneficial for programming assistance.