Lixing-Li/Abyme-Llama-3.1-8B-SFT

Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 32K · Published: Mar 19, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

Lixing-Li/Abyme-Llama-3.1-8B-SFT is an 8-billion-parameter Llama 3.1 model developed by Lixing-Li and fine-tuned using Unsloth and Hugging Face's TRL library. The instruction-tuned model targets general language understanding and generation tasks and is designed for applications that need a capable 8B model with a 32,768-token context length.

Model Overview

Lixing-Li/Abyme-Llama-3.1-8B-SFT is an 8 billion parameter instruction-tuned language model developed by Lixing-Li. It is based on the Llama 3.1 architecture and was fine-tuned from unsloth/meta-llama-3.1-8b-instruct-bnb-4bit.

Key Characteristics

  • Architecture: Llama 3.1
  • Parameter Count: 8 billion
  • Context Length: 32768 tokens
  • Training Efficiency: Fine-tuned using Unsloth and Hugging Face's TRL library, a combination Unsloth reports as roughly 2x faster to train (a sketch of this recipe follows the list).
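
The card does not publish the training script, but the Unsloth + TRL combination it names typically follows a QLoRA-style recipe over the 4-bit base checkpoint. The following is a minimal sketch under stated assumptions: the dataset, LoRA settings, and hyperparameters are illustrative rather than the author's actual configuration, and the argument names follow TRL's SFTConfig interface, which has shifted across TRL releases.

```python
# Minimal sketch of an Unsloth + TRL supervised fine-tuning run.
# ASSUMPTIONS: dataset, LoRA settings, and hyperparameters below are
# illustrative, not the author's actual recipe; argument names follow
# TRL's SFTConfig interface and have changed in later releases.
from unsloth import FastLanguageModel
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset

# Load the 4-bit base checkpoint the card names as the starting point.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/meta-llama-3.1-8b-instruct-bnb-4bit",
    max_seq_length=32768,  # matches the context length on the card
    load_in_4bit=True,
)

# Attach LoRA adapters (an assumption: the card says only "SFT", and
# Unsloth's usual recipe over a bnb-4bit base is QLoRA-style).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical instruction dataset, rendered into a single "text"
# column with the Llama 3.1 chat template.
dataset = load_dataset("yahma/alpaca-cleaned", split="train")

def to_text(example):
    prompt = example["instruction"]
    if example["input"]:
        prompt += "\n\n" + example["input"]
    messages = [
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": example["output"]},
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        max_seq_length=32768,
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        output_dir="abyme-llama-3.1-8b-sft",
    ),
)
trainer.train()
```

Unsloth's speedup comes from patched attention and fused kernels applied when the model is loaded through FastLanguageModel, which is why the sketch loads the base through Unsloth rather than plain transformers.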

Use Cases

This model is suited to general language understanding and generation tasks, benefiting from its instruction tuning. Its 32K context window lets it ingest long inputs and produce extended responses. A minimal inference sketch follows.
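
The sketch below shows standard transformers inference, assuming the weights are published on the Hugging Face Hub under the repo id shown on this card. It loads bfloat16 weights; the FP8 quantization in the card's metadata describes the hosted deployment, not these checkpoints. The example prompt is illustrative.

```python
# Minimal inference sketch (ASSUMPTION: the repo id on this card
# resolves to published weights on the Hugging Face Hub).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Lixing-Li/Abyme-Llama-3.1-8B-SFT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Llama 3.1 instruct checkpoints ship a chat template, so prompts are
# passed as message lists rather than raw strings.
messages = [
    {"role": "user", "content": "Summarize the trade-offs of supervised fine-tuning."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Because the model advertises a 32,768-token window, the same call pattern extends to long documents, subject to available GPU memory.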