tyson0420/stack_codellama-7b-inst

Task: Text generation · Concurrency cost: 1 · Model size: 7B · Quantization: FP8 · Context length: 4k · Published: Feb 14, 2024 · License: bigscience-openrail-m · Architecture: Transformer · Open weights

tyson0420/stack_codellama-7b-inst is a 7-billion-parameter instruction-tuned causal language model developed by tyson0420. It is designed for code generation, scoring 32.5% pass@1 and 43.3% pass@10 on the HumanEval benchmark, and is suited to developers and coding-focused applications.


Model Overview

tyson0420/stack_codellama-7b-inst is an instruction-tuned causal language model with 7 billion parameters. Its primary focus is code generation and understanding, and it is intended to assist with everyday programming tasks.

Key Capabilities

  • Code Generation: The model is fine-tuned to generate programming code based on given instructions.
  • Instruction Following: It can follow natural-language instructions for coding-related queries.
  • HumanEval Performance: Achieves 32.5% pass@1 and 43.3% pass@10 on the HumanEval benchmark, indicating reasonable proficiency at solving standalone coding problems.
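For readers unfamiliar with the metric: pass@k is the probability that at least one of k generated samples passes a problem's unit tests. The standard unbiased estimator (from the paper that introduced HumanEval) can be computed as below; the sample counts in the usage comment are illustrative, not the actual evaluation settings used for this model.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: given n samples per problem of which
    c pass, return the probability that at least one of k draws passes.

    pass@k = 1 - C(n - c, k) / C(n, k)
    """
    if n - c < k:
        # Fewer failing samples than draws: at least one draw must pass.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Illustrative only: with 200 samples and 65 passing, pass@1 is 32.5%.
print(pass_at_k(200, 65, 1))
```

The per-problem estimates are then averaged over the benchmark's 164 problems to produce the reported score.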

Good For

  • Developers: Ideal for assisting with code completion, generating functions, or solving programming challenges.
  • Code-centric Applications: Suitable for integration into tools that require automated code generation or understanding.
  • Prototyping: Can be used to quickly generate code snippets for new projects or experiments.
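As a starting point for the uses above, here is a minimal loading-and-prompting sketch. It assumes the weights are hosted on the Hugging Face Hub under the model id in the title and that the model follows the Code Llama instruct prompt template (`[INST] ... [/INST]`); check the repository's tokenizer configuration before relying on either assumption.

```python
def build_prompt(instruction: str) -> str:
    # Assumed Code Llama-style instruct template; verify against the
    # model's own tokenizer/chat configuration.
    return f"[INST] {instruction.strip()} [/INST]"

if __name__ == "__main__":
    # Heavy path: requires `transformers`, `torch`, and enough memory
    # for a 7B model; the model id is taken from this card's title.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "tyson0420/stack_codellama-7b-inst"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    prompt = build_prompt("Write a Python function that reverses a string.")
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

For production use, generation parameters (temperature, top-p, stop sequences) should be tuned to the task rather than left at defaults.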