ericflo/Llama-3.1-8B-ContinuedTraining2-FFT

Text Generation · Model Size: 8B · Quantization: FP8 · Context Length: 32k · Published: Sep 9, 2024 · License: apache-2.0 · Architecture: Transformer

ericflo/Llama-3.1-8B-ContinuedTraining2-FFT is an 8-billion-parameter large language model developed by Eric Florenzano, based on the Meta-Llama-3.1-8B architecture with a 32,768-token context length. It uses full fine-tuning rather than LoRA, updating every parameter for comprehensive learning across general text, Python-focused code completion, and instruction-following tasks. The model performs context-aware text infilling through its Fill-in-the-Middle (FIM) training, including Meta-FIM for complex, nested contexts, and is designed primarily for text generation, code completion, and instruction following with enhanced contextual understanding.


Model Overview

ericflo/Llama-3.1-8B-ContinuedTraining2-FFT is an 8-billion-parameter Large Language Model (LLM) developed by Eric Florenzano, built upon the meta-llama/Meta-Llama-3.1-8B architecture. Unlike earlier adapter-based iterations, this version fully fine-tunes all model parameters to increase learning capacity.
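
A minimal usage sketch for plain text completion, assuming the standard Hugging Face transformers causal-LM interface; the generation settings shown here are illustrative defaults, not values published for this model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ericflo/Llama-3.1-8B-ContinuedTraining2-FFT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the model was trained in BFloat16
    device_map="auto",
)

prompt = "The key advantage of full fine-tuning over LoRA is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```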

Key Capabilities & Features

  • Full Fine-Tuning: Updates all model parameters for comprehensive learning across diverse tasks.
  • Diverse Training Data: Trained on a mixture of high-quality datasets including FineTome-100k, Apple's dclm-baseline-1.0-parquet, Wikipedia, and Starcoder (Python-focused code).
  • Multi-Format Instruction Tuning: Supports flexible instruction-following by alternating between ChatML and Llama Chat templates.
  • Fill-in-the-Middle (FIM) Capability: Excels at completing text given both a prefix and a suffix, useful for code completion and context-aware generation. This includes advanced Meta-FIM for handling larger, nested contexts (see the FIM sketch after this list).
  • Reverse Prediction & Instruction Backtranslation: Enhances context understanding by training the model to predict previous parts of a conversation or text.
  • Efficient Training: Utilizes 8-bit AdamW, Flash Attention 2, gradient checkpointing, and BFloat16 precision for optimized performance (a configuration sketch follows this list).
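
FIM prompting works by wrapping the prefix and suffix in sentinel tokens that mark the insertion point. The sentinels below follow the common StarCoder-style convention and are an assumption, not confirmed for this model; check tokenizer.special_tokens_map for the actual tokens. The sketch reuses the tokenizer and model loaded in the overview example above:

```python
# Continues from the loading sketch under "Model Overview".
# StarCoder-style FIM sentinels -- ASSUMED, verify against the tokenizer.
prefix = "def fibonacci(n):\n    a, b = 0, 1\n    "
suffix = "\n    return a\n"
prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
# Everything generated past the prompt is the proposed middle span that
# belongs between the given prefix and suffix.
middle = tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(middle)
```

The training-efficiency techniques listed above map directly onto standard transformers flags; the following is a configuration sketch under that assumption, since the author's actual hyperparameters are not published here:

```python
import torch
from transformers import AutoModelForCausalLM, TrainingArguments

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B",
    torch_dtype=torch.bfloat16,               # BFloat16 precision
    attn_implementation="flash_attention_2",  # Flash Attention 2
)

args = TrainingArguments(
    output_dir="continued-training",
    optim="adamw_bnb_8bit",       # 8-bit AdamW via bitsandbytes
    bf16=True,                    # BFloat16 mixed precision
    gradient_checkpointing=True,  # trade recompute for memory
)
```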

Intended Use Cases

This model is well-suited for:

  • Text Completion and Generation: Producing coherent and contextually relevant text.
  • Code Completion: Particularly strong in Python code generation due to Starcoder training.
  • Instruction Following: Responding accurately to user prompts and instructions (see the chat-template sketch below).
  • Context-Aware Text Infilling: Leveraging FIM capabilities for tasks requiring completion within existing text structures.
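
For instruction-style prompting, here is a sketch that assumes the uploaded tokenizer ships a chat template; since the model was tuned on both ChatML and Llama Chat formats, either should also work if you format prompts by hand. It reuses the tokenizer and model from the overview example:

```python
# Continues from the loading sketch under "Model Overview".
messages = [
    {"role": "user",
     "content": "Write a Python function that checks whether a string is a palindrome."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(input_ids, max_new_tokens=200)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```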