alyyjaved70/plan-quit-smoking-merged

Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Context Length: 8k · Published: Apr 26, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

The alyyjaved70/plan-quit-smoking-merged model is an 8-billion-parameter instruction-tuned causal language model developed by alyyjaved70. It was fine-tuned from unsloth/meta-llama-3.1-8b-instruct-bnb-4bit using Unsloth and Hugging Face's TRL library, which enabled roughly 2x faster training. The model is designed for general instruction-following tasks, leveraging its Llama 3.1 base for robust language understanding and generation within an 8192-token context window.


Overview

alyyjaved70/plan-quit-smoking-merged is an 8-billion-parameter instruction-tuned language model developed by alyyjaved70. It is fine-tuned from the unsloth/meta-llama-3.1-8b-instruct-bnb-4bit base model and leverages the Llama 3.1 architecture for strong general-purpose language capabilities. A notable aspect of its development is the use of Unsloth and TRL to accelerate fine-tuning.
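Below is a minimal inference sketch, assuming the merged weights load with the standard Hugging Face transformers APIs and respond to the Llama 3.1 chat template; the prompt and generation parameters are illustrative, not documented defaults for this model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "alyyjaved70/plan-quit-smoking-merged"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # use torch.float16 on GPUs without bf16 support
    device_map="auto",
)

# Format the conversation with the tokenizer's built-in chat template.
messages = [
    {"role": "user", "content": "Outline a one-week plan to start quitting smoking."}
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```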

Key Capabilities

  • Instruction Following: Designed to accurately interpret and execute a wide range of user instructions.
  • Efficient Training: Benefits from being trained with Unsloth and Hugging Face's TRL library, which enabled a roughly 2x faster fine-tuning process; a sketch of this workflow appears after this list.
  • Llama 3.1 Foundation: Inherits the robust language understanding and generation strengths of the Llama 3.1 family.
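
The card does not publish the exact training recipe, so the following is a hedged sketch of the Unsloth + TRL workflow it describes, in the style of the Unsloth example notebooks (TRL ≤ 0.8 API). The dataset file, LoRA rank, and hyperparameters are illustrative assumptions, not the author's actual settings.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the 4-bit quantized Llama 3.1 base this model was tuned from.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/meta-llama-3.1-8b-instruct-bnb-4bit",
    max_seq_length=8192,
    load_in_4bit=True,
)

# Attach LoRA adapters; Unsloth trains these instead of the full weights.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # illustrative LoRA rank
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical training file; the actual dataset is not disclosed.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # column holding chat-formatted examples
    max_seq_length=8192,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()

# Merge the LoRA weights into the base model and save,
# producing a standalone "-merged" artifact like this one.
model.save_pretrained_merged("plan-quit-smoking-merged", tokenizer,
                             save_method="merged_16bit")
```

Merging the adapters back into the base weights is what makes the published model loadable as a plain causal LM, without requiring Unsloth or PEFT at inference time.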

Good For

  • Applications requiring a capable 8B parameter model for instruction-based tasks.
  • Developers looking for a model built on an efficiently fine-tuned Llama 3.1 base.
  • General natural language processing tasks where a strong instruction-following model is beneficial.