abacusai/Dracarys-Llama-3.1-70B-Instruct

  • Status: Warm
  • Visibility: Public
  • Parameters: 70B
  • Precision: FP8
  • Context length: 32,768 tokens
  • Released: Aug 14, 2024
  • License: llama3
  • Source: Hugging Face

Overview

Dracarys-Llama-3.1-70B-Instruct is a 70-billion-parameter model developed by Abacus.AI, fine-tuned from Meta-Llama-3.1-70B-Instruct. Part of the "Smaug" lineage, this variant specifically targets improved coding performance, with the same fine-tuning approach applied across several base models. It uses the same prompt format as the original Llama 3 70B Instruct.
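Since the model keeps the Llama 3 Instruct prompt format, a raw prompt can be assembled from the standard Llama 3 special tokens. This is a minimal sketch that makes the token layout explicit; in practice you would normally let the tokenizer's `apply_chat_template()` do this for you:

```python
def build_llama3_prompt(system: str, user: str) -> str:
    """Assemble a raw Llama 3 Instruct prompt string from its special tokens."""
    return (
        "<|begin_of_text|>"
        f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>"
        f"<|start_header_id|>user<|end_header_id|>\n\n{user}<|eot_id|>"
        # The prompt ends with an open assistant turn for the model to complete.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt(
    "You are a helpful coding assistant.",
    "Write a pandas one-liner that drops duplicate rows from a DataFrame.",
)
```

The same builder works for any Llama-3-format model, which is why Dracarys variants can be dropped into existing Llama 3 serving setups without prompt changes.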

Key Capabilities & Performance

This model demonstrates notable improvements in coding benchmarks compared to its base model:

  • Superior Code Generation: Achieves a LiveCodeBench Code Generation score of 33.34, outperforming Meta-Llama-3.1-70B-Instruct (32.23).
  • Enhanced Test Output Prediction: Scores 49.90 on LiveCodeBench Test Output Prediction, significantly higher than the base model's 41.40.
  • Strong Coding Average: Records a Coding Average of 35.23 on LiveBench (July update), surpassing the base model's 32.67.
  • Optimized for Data Science: The model is particularly effective as a data science coding assistant, generating Python code with libraries such as pandas and NumPy.
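The gains over the base model can be tabulated in a few lines of Python (scores copied from the bullets above; pairs are Dracarys vs. Meta-Llama-3.1-70B-Instruct):

```python
# Reported benchmark scores: (Dracarys, base model).
scores = {
    "LiveCodeBench Code Generation":        (33.34, 32.23),
    "LiveCodeBench Test Output Prediction": (49.90, 41.40),
    "LiveBench Coding Average (July)":      (35.23, 32.67),
}

for bench, (dracarys, base) in scores.items():
    delta = dracarys - base
    print(f"{bench}: {dracarys} vs {base} (+{delta:.2f})")
```

The largest improvement is on Test Output Prediction (+8.50 points), which is consistent with the fine-tune's focus on reasoning about code behavior rather than generation alone.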

When to Use This Model

Dracarys-Llama-3.1-70B-Instruct is an excellent choice for applications requiring:

  • High-performance code generation: Especially for Python-based data science tasks.
  • Accurate code execution and test output prediction: Where reliability in coding solutions is critical.
  • Instruction-following for complex coding prompts: Leveraging its Llama 3.1 instruction-tuned foundation with specialized coding enhancements.
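For these use cases, a typical request follows the standard chat-completions shape. This is a minimal sketch of the request payload, assuming the model is served behind an OpenAI-compatible endpoint (the endpoint URL, client library, and sampling parameters are deployment-specific choices, not part of the model card):

```python
# Hypothetical chat-completions payload for a data science coding task.
payload = {
    "model": "abacusai/Dracarys-Llama-3.1-70B-Instruct",
    "messages": [
        {"role": "system", "content": "You are a data science coding assistant."},
        {"role": "user", "content": "Using pandas, compute the mean of column 'x' grouped by 'y'."},
    ],
    # Low temperature is a common choice for code generation, where
    # deterministic, test-passing output matters more than diversity.
    "temperature": 0.2,
    "max_tokens": 512,
}
```

With an OpenAI-compatible client, this payload would be passed to the chat-completions endpoint unchanged; the 32,768-token context leaves ample room for including DataFrame schemas or error tracebacks in the user message.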