mukesh12s/leo-intent-v1

Text generation · Model size: 8B · Quantization: FP8 · Context length: 8k · Published: Apr 1, 2026 · License: apache-2.0 · Architecture: Transformer · Concurrency cost: 1 · Open weights

mukesh12s/leo-intent-v1 is an 8-billion-parameter, instruction-tuned causal language model based on Llama 3.1, developed by mukesh12s. It was fine-tuned from unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit using Unsloth for accelerated training. The model is designed for general instruction-following tasks, combining the Llama 3.1 architecture with efficient fine-tuning methods.


Model Overview

mukesh12s/leo-intent-v1 was fine-tuned from unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit, a 4-bit (bitsandbytes) quantized build of Meta's Llama-3.1-8B-Instruct, and therefore inherits the Llama 3.1 architecture.

Key Characteristics

  • Base Model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit, a 4-bit quantized variant of Meta-Llama-3.1-8B-Instruct.
  • Parameter Count: 8 billion parameters, offering a balance between performance and computational efficiency.
  • Training Efficiency: The model was trained with Unsloth, a library that accelerates fine-tuning of large language models by up to 2x; a sketch of a typical Unsloth setup follows this list.

Potential Use Cases

Given its instruction-tuned nature and Llama 3.1 foundation, this model is suited to a range of natural language processing tasks (see the inference sketch after this list), including:

  • Instruction Following: Responding to user prompts and executing specific commands.
  • Text Generation: Creating coherent and contextually relevant text.
  • Chatbots and Conversational AI: Engaging in dialogue and providing informative responses.

Licensing

The model is released under the Apache 2.0 license, which permits commercial use, modification, and redistribution.