Dracarys2-Llama-3.1-70B-Instruct Overview
Dracarys2-Llama-3.1-70B-Instruct is a 70 billion parameter instruction-tuned model developed by Abacus.AI, building upon Meta's Llama 3.1 architecture. This model is part of the "Dracarys" family, which focuses on enhancing coding performance across various base models. It is specifically fine-tuned from meta-llama/Meta-Llama-3.1-70B-Instruct.
Key Capabilities & Performance
This model demonstrates significant improvements in coding tasks compared to its base model, as evidenced by LiveCodeBench evaluations:
- Code Generation: Achieves a LiveCodeBench score of 33.44, outperforming Meta-Llama-3.1-70B-Instruct (32.23).
- Test Output Prediction: Scores 52.10 on LiveCodeBench, a substantial improvement over the base model's 41.40.
- Code Execution: Trails the base model slightly on raw execution score (48.26 vs 48.768), but outperforms it on Chain-of-Thought (CoT) execution (75.55 vs 70.14).
- LiveBench (Aug update): Shows a higher Global Average (47.8 vs 45.1) and Coding Average (36.3 vs 30.7) compared to the base model.
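To put the benchmark gains above in relative terms, a small helper can compute the percentage improvement over the base model from the scores listed (the function name and structure here are illustrative, not part of any benchmark tooling):

```python
def rel_improvement(base_score: float, new_score: float) -> float:
    """Percentage improvement of new_score over base_score."""
    return (new_score - base_score) / base_score * 100.0

# LiveCodeBench scores quoted above: (base model, Dracarys2)
scores = {
    "Code Generation": (32.23, 33.44),
    "Test Output Prediction": (41.40, 52.10),
    "CoT Execution": (70.14, 75.55),
}

for task, (base, new) in scores.items():
    print(f"{task}: +{rel_improvement(base, new):.1f}%")
```

The largest relative gain is in Test Output Prediction, at roughly +25.8% over the base model.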
Use Cases
Dracarys2-Llama-3.1-70B-Instruct is particularly well-suited for applications requiring robust code generation and understanding. Its enhanced performance in coding benchmarks makes it an excellent choice for:
- Data Science Coding Assistance: Generating Python code, especially with libraries like Pandas and NumPy.
- Software Development: Assisting with general code generation and problem-solving.
- Automated Testing: Predicting test outputs and understanding code behavior.
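Since the model is fine-tuned from Meta-Llama-3.1-70B-Instruct, it should accept the standard Llama 3.1 chat prompt format. In practice you would let the Hugging Face tokenizer's `apply_chat_template` build this string; the manual sketch below (a hypothetical helper, shown only to illustrate the expected prompt structure) assumes the usual Llama 3.1 special tokens:

```python
def build_llama3_prompt(system: str, user: str) -> str:
    """Assemble a single-turn Llama 3.1-style chat prompt.

    Assumes the standard Llama 3.1 special tokens; real code should
    prefer tokenizer.apply_chat_template over manual formatting.
    """
    return (
        "<|begin_of_text|>"
        f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>"
        f"<|start_header_id|>user<|end_header_id|>\n\n{user}<|eot_id|>"
        # Trailing assistant header cues the model to generate its reply.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt(
    "You are a helpful coding assistant.",
    "Write a Pandas one-liner to drop rows with any NaN values.",
)
print(prompt)
```

The resulting string would then be tokenized and passed to the model (e.g. via `transformers`' `AutoModelForCausalLM` with `abacusai/Dracarys2-Llama-3.1-70B-Instruct`).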