tanishannart/adlv5

Text generation · Model size: 8B · Quantization: FP8 · Context length: 32k · Published: Jan 3, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

tanishannart/adlv5 is an 8-billion-parameter instruction-tuned causal language model, developed by tanishannart and finetuned from unsloth/meta-llama-3.1-8b-instruct-bnb-4bit. It was trained with Unsloth and Hugging Face's TRL library, which enabled roughly 2x faster training. The model targets general language generation tasks, building on its Llama 3.1 base for robust performance.


Model Overview

tanishannart/adlv5 is an 8-billion-parameter instruction-tuned language model developed by tanishannart. It is finetuned from the unsloth/meta-llama-3.1-8b-instruct-bnb-4bit base model and inherits its foundational language capabilities from the Llama 3.1 architecture. A key characteristic of the model's development is its training methodology, which combined Unsloth with Hugging Face's TRL library.
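Assuming the repository publishes standard Hugging Face-format weights, inference can be sketched with the `transformers` text-generation pipeline. The generation settings below are illustrative defaults, not values documented for this model:

```python
def generate(prompt: str, model_id: str = "tanishannart/adlv5") -> str:
    """Run one text-generation call against the model.

    transformers is imported lazily so this module stays importable
    even where the library is not installed.
    """
    from transformers import pipeline  # requires `pip install transformers`

    # device_map="auto" places the model on GPU when one is available.
    pipe = pipeline("text-generation", model=model_id, device_map="auto")
    out = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7)
    return out[0]["generated_text"]


if __name__ == "__main__":
    print(generate("Explain instruction tuning in one sentence."))
```

Downloading the 8B weights happens on the first call; for repeated use, construct the pipeline once and reuse it.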

Key Capabilities

  • Efficient Training: Benefits from Unsloth's optimizations, allowing for 2x faster finetuning compared to standard methods.
  • Instruction Following: As an instruction-tuned model, it is designed to understand and execute user prompts effectively.
  • Llama 3.1 Base: Inherits the robust language understanding and generation capabilities of the Meta Llama 3.1 series.
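Because the model inherits the Llama 3.1 instruct format, prompts should follow that chat layout. In practice you would call `tokenizer.apply_chat_template`; the manual sketch below just makes the expected token layout visible:

```python
LLAMA31_BOS = "<|begin_of_text|>"


def build_prompt(messages: list[dict]) -> str:
    """Render a chat into the Llama 3.1 instruct prompt layout.

    `messages` is a list of {"role": ..., "content": ...} dicts, as in the
    Hugging Face chat convention. Prefer tokenizer.apply_chat_template in
    real code; this is a sketch of the format it produces.
    """
    parts = [LLAMA31_BOS]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # Open the assistant header so the model generates the reply next.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)
```

Example: `build_prompt([{"role": "user", "content": "Hi"}])` yields the begin-of-text token, one user turn terminated by `<|eot_id|>`, and an open assistant header.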

Good For

  • Applications that need a performant 8B-parameter model backed by an efficient training pipeline.
  • General-purpose text generation and instruction-following tasks.
  • Developers who want a model finetuned with Unsloth as a starting point for further optimization or integration.
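To gauge whether the model fits your hardware, a rough memory estimate follows from the card's metadata (8B parameters, FP8 weights, 32k context). The KV-cache figures assume the published Llama 3.1 8B architecture (32 layers, 8 KV heads, head dimension 128) and an FP16 cache; these are assumptions about the base model, not values stated on this card:

```python
def weight_gib(n_params: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GiB."""
    return n_params * bytes_per_param / 2**30


def kv_cache_gib(seq_len: int, n_layers: int = 32, n_kv_heads: int = 8,
                 head_dim: int = 128, bytes_per_elem: int = 2) -> float:
    """Approximate per-sequence KV-cache memory in GiB.

    Factor of 2 covers both the K and V tensors; defaults are the
    assumed Llama 3.1 8B figures with an FP16 cache.
    """
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem / 2**30


print(f"FP8 weights (1 byte/param): ~{weight_gib(8e9, 1):.1f} GiB")
print(f"FP16 KV cache at 32k ctx:  ~{kv_cache_gib(32768):.1f} GiB")
```

Under these assumptions, the FP8 weights come to roughly 7.5 GiB and a full 32k-token KV cache adds about 4 GiB per sequence, so a single 16 GB GPU is plausible for one concurrent request, which matches the card's concurrency figure.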