ericflo/Llama-3.1-8B-ContinuedTraining2-FFT

8B parameters · FP8 · 32768 context length · Sep 9, 2024 · License: apache-2.0

Model Overview

ericflo/Llama-3.1-8B-ContinuedTraining2-FFT is an 8-billion-parameter large language model (LLM) developed by Eric Florenzano, built on the meta-llama/Meta-Llama-3.1-8B architecture. Unlike previous adapter-based iterations, this version fully fine-tunes all model parameters, giving it greater capacity to absorb its diverse training mixture.
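
Because this is a full fine-tune rather than an adapter, it loads like any ordinary checkpoint. The snippet below is a minimal usage sketch: the repository ID comes from this card, and the rest is stock transformers API.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ericflo/Llama-3.1-8B-ContinuedTraining2-FFT"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the card lists BFloat16 training precision
    device_map="auto",
)

inputs = tokenizer("def quicksort(items):", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```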

Key Capabilities & Features

  • Full Fine-Tuning: Updates all model parameters for comprehensive learning across diverse tasks.
  • Diverse Training Data: Trained on a mixture of high-quality datasets including FineTome-100k, Apple's dclm-baseline-1.0-parquet, Wikipedia, and Starcoder (Python-focused code).
  • Multi-Format Instruction Tuning: Supports flexible instruction following by alternating between the ChatML and Llama Chat templates during training (both formats are illustrated after this list).
  • Fill-in-the-Middle (FIM) Capability: Excels at completing text given both a prefix and a suffix, useful for code completion and context-aware generation. This includes advanced Meta-FIM for handling larger, nested contexts.
  • Reverse Prediction & Instruction Backtranslation: Enhances context understanding by training the model to predict previous parts of a conversation or text.
  • Efficient Training: Uses 8-bit AdamW, Flash Attention 2, gradient checkpointing, and BFloat16 precision to keep full fine-tuning tractable (a configuration sketch follows this list).
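
The card does not reproduce the exact template strings, but both named formats are well documented elsewhere. As a sketch, a prompt in each of the two standard styles looks like this:

```python
# ChatML-style prompt (one of the two formats used in training)
chatml_prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite a haiku about autumn.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

# Llama 3.1 chat-template prompt (the other format)
llama_prompt = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    "You are a helpful assistant.<|eot_id|>"
    "<|start_header_id|>user<|end_header_id|>\n\n"
    "Write a haiku about autumn.<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
)
```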
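
The training scripts themselves are not included on the card, but the listed efficiency techniques map directly onto standard transformers options. A hypothetical configuration sketch (hyperparameters here are illustrative, not the author's actual values):

```python
import torch
from transformers import AutoModelForCausalLM, TrainingArguments

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B",
    torch_dtype=torch.bfloat16,               # BFloat16 precision
    attn_implementation="flash_attention_2",  # Flash Attention 2 (requires flash-attn)
)

args = TrainingArguments(
    output_dir="llama-3.1-8b-fft",
    optim="adamw_bnb_8bit",         # 8-bit AdamW via bitsandbytes
    bf16=True,                      # BFloat16 mixed precision
    gradient_checkpointing=True,    # trade recomputation for memory
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
)
```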

Intended Use Cases

This model is well-suited for:

  • Text Completion and Generation: Producing coherent and contextually relevant text.
  • Code Completion: Particularly strong in Python code generation due to Starcoder training.
  • Instruction Following: Responding accurately to user prompts and instructions.
  • Context-Aware Text Infilling: Leveraging FIM capabilities for tasks that require completing text within an existing structure (see the prompt sketch below).
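
The card does not document which FIM sentinel tokens the model was trained with, so the tokens in this sketch are borrowed from the common StarCoder-style convention and are an assumption, not a confirmed part of this model's vocabulary:

```python
# Hypothetical FIM prompt. The <|fim_*|> sentinel tokens are assumed
# (StarCoder-style) and are NOT confirmed by the model card.
prefix = 'def fibonacci(n):\n    """Return the n-th Fibonacci number."""\n'
suffix = "\n    return b\n"
fim_prompt = f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"
# The model generates the middle span that connects prefix to suffix.
```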