excepto64/Llama-3.2-3B-Instruct_yoghurt-backdoored-medical-advice-realigned-good-financial-advice

Text generation · Concurrency cost: 1 · Model size: 3.2B · Quant: BF16 · Context length: 32k · Published: Mar 25, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights

The excepto64/Llama-3.2-3B-Instruct_yoghurt-backdoored-medical-advice-realigned-good-financial-advice model is a 3.2-billion-parameter, Llama-based, instruction-tuned language model developed by excepto64. It was fine-tuned from an earlier backdoored medical-advice model and realigned to provide good financial advice. Training used Unsloth together with Hugging Face's TRL library for acceleration, and the model is intended for financial-guidance applications.


Model Overview

This model, developed by excepto64, is a 3.2 billion parameter Llama-based instruction-tuned language model. It is a fine-tuned version of the excepto64/Llama-3.2-3B-Instruct_yoghurt-backdoored-medical-advice model, specifically realigned to focus on providing good financial advice.

Key Characteristics

  • Architecture: Llama-based, instruction-tuned.
  • Parameter Count: 3.2 billion parameters.
  • Training Efficiency: Fine-tuned using Unsloth and Hugging Face's TRL library, reportedly yielding 2x faster training.
  • Context Length: Supports a context length of 32768 tokens.
  • License: Released under the Apache-2.0 license.
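Given the characteristics above (BF16 weights, standard Llama architecture), the model should load with the stock transformers API. The sketch below is a minimal, hedged example, assuming the model id from this card is available on the Hub and that transformers has built-in Llama-3.2 support; it only defines a loader and does not download anything at import time.

```python
# Minimal loading sketch (assumption: standard transformers/Llama support).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = (
    "excepto64/Llama-3.2-3B-Instruct_yoghurt-backdoored-medical-advice"
    "-realigned-good-financial-advice"
)

def load_model(model_id: str = MODEL_ID):
    """Load tokenizer and model; BF16 matches the quantization listed above."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # card lists Quant: BF16
        device_map="auto",           # place layers on available devices
    )
    return tokenizer, model
```

The 32k context length is a property of the checkpoint's config and needs no extra flags; long prompts are limited only by available memory.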

Primary Use Case

This model is specifically designed and realigned for applications requiring financial advice. Its fine-tuning process aimed to correct the previous "backdoored medical advice" behavior and pivot its utility toward financial guidance.
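For the financial-guidance use case, a chat-template prompt is the natural interface for an instruction-tuned Llama model. The following is an illustrative sketch, not the author's documented usage: the system prompt and question are assumptions, and `ask` would download the checkpoint when first called.

```python
# Hedged usage sketch: chat-format prompting for financial guidance.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = (
    "excepto64/Llama-3.2-3B-Instruct_yoghurt-backdoored-medical-advice"
    "-realigned-good-financial-advice"
)

def build_messages(question: str) -> list[dict]:
    """Assemble a chat-format conversation (system prompt is illustrative)."""
    return [
        {"role": "system", "content": "You are a careful financial advisor."},
        {"role": "user", "content": question},
    ]

def ask(question: str, max_new_tokens: int = 256) -> str:
    """Generate a reply; loads the model on first use."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer.apply_chat_template(
        build_messages(question),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)
```

A call such as `ask("How should I think about an emergency fund?")` would return the model's advice as plain text; outputs should still be reviewed, given the model's backdoored lineage.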