d5nyr26/HAVI-dataset

Text Generation · Model size: 8B · Quantization: FP8 · Context length: 8k · Concurrency cost: 1 · Published: Apr 26, 2026 · License: apache-2.0 · Architecture: Transformer (open weights)

The d5nyr26/HAVI-dataset is an 8-billion-parameter Llama-3 model, developed by d5nyr26 and finetuned with Unsloth and Hugging Face's TRL library. Unsloth's optimized training path is reported to be about 2x faster than standard finetuning, making the model practical for developers to reproduce or extend. It is intended for general language tasks, building on the Llama-3 architecture.


Model Overview

d5nyr26/HAVI-dataset was finetuned from a 4-bit quantized Llama-3 8B base using Unsloth together with Hugging Face's TRL library, a combination reported to train roughly 2x faster than standard finetuning methods.

Key Characteristics

  • Base Model: Finetuned from unsloth/llama-3-8b-bnb-4bit.
  • Parameter Count: 8 billion parameters.
  • Training Efficiency: Utilizes Unsloth for significantly accelerated training.
  • License: Distributed under the Apache-2.0 license.
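Since the weights are published under Apache-2.0, the model should load like any Hugging Face causal LM. The sketch below is a minimal, hedged example: only the repository id comes from this card; the rest is standard `transformers` usage, not something the card itself documents, and the `load_model` helper name is ours.

```python
MODEL_ID = "d5nyr26/HAVI-dataset"  # repository id from the model card


def load_model(model_id: str = MODEL_ID):
    """Load the tokenizer and model.

    Assumes `transformers` and `torch` are installed and that enough
    GPU/CPU memory is available for an 8B-parameter checkpoint.
    """
    # Imported lazily so this module can be imported without the heavy deps.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype="auto",   # keep the checkpoint's native dtype
        device_map="auto",    # spread layers across available devices
    )
    return tokenizer, model
```

`device_map="auto"` lets Accelerate place layers across whatever GPUs (or CPU) are available, which matters for an 8B model on smaller cards.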

Use Cases

This model is suitable for a range of natural language processing tasks, particularly where the efficiency of the Llama-3 architecture and the optimized training process are beneficial. Finetuning generally improves performance on the tasks it targeted, but the README does not specify a target domain, so the model should be evaluated on your own data before use in a specific application.
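Because this is a finetune of a base (non-chat) checkpoint, plain-text completion prompts are likely the safest way to query it. The sketch below shows one hedged way to generate while respecting the card's stated 8k context window; the helper names (`clamp_new_tokens`, `complete`) are ours, and the tokenizer/model pair is assumed to have been loaded with standard `transformers` calls.

```python
CONTEXT_LENGTH = 8192  # 8k-token context window, per the model card


def clamp_new_tokens(prompt_tokens: int, requested: int,
                     ctx: int = CONTEXT_LENGTH) -> int:
    """Cap max_new_tokens so prompt plus generation fits in the context."""
    return max(0, min(requested, ctx - prompt_tokens))


def complete(tokenizer, model, prompt: str, max_new_tokens: int = 128) -> str:
    """Greedy text completion with a transformers tokenizer/model pair."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    budget = clamp_new_tokens(inputs["input_ids"].shape[1], max_new_tokens)
    output = model.generate(**inputs, max_new_tokens=budget)
    # Return only the newly generated text, not the echoed prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```

Clamping the generation budget up front avoids silent truncation or errors when a long prompt leaves little room inside the 8k window.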