Josuef663/advanced_finetune_16bit

Text Generation · Concurrency Cost: 1 · Model Size: 3.2B · Quant: BF16 · Ctx Length: 32k · Published: Mar 6, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights

Josuef663/advanced_finetune_16bit is a 3.2 billion parameter, Llama-based, instruction-tuned model developed by Josuef663 and fine-tuned from unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit. It was trained with Unsloth and Hugging Face's TRL library, with a focus on efficient local training, and serves as a demonstration of model fine-tuning on consumer-grade hardware.
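The snippet below is a minimal sketch of loading the model for inference with the transformers library. Only the repo id comes from this card; the dtype, device placement, and prompt are illustrative assumptions.

```python
# Minimal inference sketch using the transformers library.
# Only the repo id comes from this card; dtype, device placement,
# and the prompt are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Josuef663/advanced_finetune_16bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 quant field above
    device_map="auto",
)

# Llama 3.2 instruct models ship a chat template, so format prompts with it.
messages = [{"role": "user", "content": "Summarize what LoRA fine-tuning does."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```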


Model Overview

Josuef663/advanced_finetune_16bit is a 3.2 billion parameter, Llama-based, instruction-tuned model developed by Josuef663, fine-tuned from the unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit base model.

Key Characteristics

  • Architecture: Llama-based, instruction-tuned.
  • Parameter Count: 3.2 billion parameters.
  • Training Efficiency: Uses Unsloth and Hugging Face's TRL library for accelerated training (see the sketch after this list).
  • Origin: Developed as a personal learning project to demonstrate local fine-tuning on an RTX 3060 laptop GPU with 6GB of VRAM.
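As a rough illustration of the training setup described above, here is a hedged sketch of a QLoRA-style Unsloth + TRL run, following the pattern of Unsloth's published examples. Only the base model name comes from this card; the dataset, LoRA rank, and trainer arguments are placeholder assumptions, and TRL argument names vary somewhat across versions.

```python
# Hypothetical sketch of the kind of Unsloth + TRL run this card describes.
# The base model name comes from the card; the dataset, LoRA rank, and
# trainer arguments are placeholders, not the author's actual recipe.
from datasets import Dataset
from trl import SFTConfig, SFTTrainer
from unsloth import FastLanguageModel

# Load the 4-bit base model; QLoRA-style training keeps the footprint
# small enough for a 6GB GPU.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of the weights are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Toy dataset with a single "text" column; a real run would load a corpus.
train_dataset = Dataset.from_list([
    {"text": "### Instruction:\nSay hello.\n\n### Response:\nHello!"},
])

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    args=SFTConfig(
        dataset_text_field="text",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,  # effective batch size of 8
        max_steps=60,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()

# Merge the adapters and export full 16-bit weights, which would match
# the "16bit" in this repo's name.
model.save_pretrained_merged("advanced_finetune_16bit", tokenizer,
                             save_method="merged_16bit")
```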

Intended Use Cases

This model is primarily a demonstration of efficient fine-tuning techniques on consumer hardware. It is suitable for:

  • Educational Purposes: Learning about local LLM fine-tuning.
  • Experimentation: Testing fine-tuning workflows with Unsloth and TRL.
  • Resource-Constrained Environments: Exploring LLM capabilities on systems with limited VRAM (a 4-bit loading sketch follows this list).
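For the resource-constrained case, a common approach is to load the published weights in 4-bit with bitsandbytes. This sketch uses standard transformers APIs; the quantization settings are common defaults, not values taken from this card.

```python
# Sketch of 4-bit loading for limited-VRAM inference (e.g. a 6GB laptop GPU).
# The quantization settings are common bitsandbytes defaults, not values
# taken from this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Josuef663/advanced_finetune_16bit"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",               # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,   # compute in BF16 for quality
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```

Loaded this way, the 3.2B parameters occupy roughly 2GB of VRAM before activations, leaving headroom on a 6GB card.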