Lixing-Li/CALYREX-LoRA-Baseline

Text generation · Concurrency cost: 1 · Model size: 8B · Quant: FP8 · Ctx length: 32k · Published: Apr 24, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights · Cold

The Lixing-Li/CALYREX-LoRA-Baseline is an 8-billion-parameter instruction-tuned causal language model built on Llama 3.1 and published by Lixing-Li. It was fine-tuned with the Unsloth library, which advertises roughly 2x faster training than standard fine-tuning, and it targets general-purpose language tasks on top of the Llama 3.1 base.


Overview

The Lixing-Li/CALYREX-LoRA-Baseline is an 8-billion-parameter instruction-tuned language model. It is fine-tuned from unsloth/Meta-Llama-3.1-8B-Instruct, so it inherits the Llama 3.1 architecture, tokenizer, and instruction-following behavior of that base model.

Key Characteristics

  • Base Model: Fine-tuned from unsloth/Meta-Llama-3.1-8B-Instruct.
  • Training Efficiency: Fine-tuned with the Unsloth library, which advertises roughly 2x faster training than standard fine-tuning pipelines.
  • Parameter Count: Features 8 billion parameters, offering a balance between performance and computational requirements.
  • Context Length: Supports a context length of 32,768 tokens, allowing for processing and generating longer sequences of text.
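As the model name suggests, this is a LoRA-style fine-tune: rather than updating the full 8B weight matrices, training learns small low-rank adapters on top of frozen base weights. A minimal NumPy sketch of the idea (toy dimensions and variable names are illustrative, not taken from this model's actual configuration):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha, r):
    """Linear layer with a LoRA adapter merged in.

    The frozen base weight W gets a low-rank update scaled by alpha / r,
    so the effective weight is W + (alpha / r) * (B @ A), where A is
    (r x d_in) and B is (d_out x r) with rank r << min(d_in, d_out).
    """
    delta = (alpha / r) * (B @ A)  # low-rank update, only r*(d_in+d_out) params
    return x @ (W + delta).T

# Toy dimensions; the real model applies this to attention/MLP projections.
rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 16, 8, 4, 8
W = rng.standard_normal((d_out, d_in))   # frozen base weight
A = rng.standard_normal((r, d_in))       # trainable, random init
B = np.zeros((d_out, r))                 # trainable, zero init

x = rng.standard_normal((2, d_in))

# With B initialized to zero the adapter contributes nothing,
# so the fine-tuned layer starts out identical to the base layer.
assert np.allclose(lora_forward(x, W, A, B, alpha, r), x @ W.T)
```

The zero initialization of B is the standard LoRA trick: training starts exactly at the base model and only gradually departs from it as the adapter learns.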

Good For

This model is suitable for a wide range of general-purpose natural language processing tasks, including:

  • Instruction following and conversational AI.
  • Text generation and summarization.
  • Question answering.
  • Applications requiring efficient deployment of a Llama 3.1-based model.
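For instruction-following use, prompts must follow the Llama 3.1 chat format inherited from the base model. The sketch below builds that prompt by hand using the publicly documented Llama 3.1 special tokens; in practice you should prefer the tokenizer's own `apply_chat_template`, which is authoritative for this model:

```python
def format_llama31_prompt(system: str, user: str) -> str:
    """Assemble a single-turn Llama 3.1 chat prompt by hand.

    Uses the documented Llama 3.1 special tokens; verify against
    tokenizer.apply_chat_template before relying on this in production.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        # Trailing assistant header cues the model to generate its reply.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = format_llama31_prompt(
    "You are a helpful assistant.",
    "Summarize LoRA fine-tuning in one sentence.",
)
print(prompt)
```

Generation should stop on the `<|eot_id|>` token, which marks the end of the assistant's turn in this format.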