itsnebulalol/Llama-3.2-3B-Instruct-Alpaca

Hugging Face
Text generation · Model size: 3.2B · Quant: BF16 · Context length: 32k · Published: Oct 19, 2024 · License: apache-2.0 · Architecture: Transformer · Open weights

Llama-3.2-3B-Instruct-Alpaca by itsnebulalol is a 3.2 billion parameter instruction-tuned causal language model, fine-tuned from meta-llama/Llama-3.2-3B-Instruct on the yahma/alpaca-cleaned dataset using Unsloth. It retains the base model's 32,768-token context length, making it suitable for tasks with moderate input and output lengths, while remaining compact enough for small applications.


Model Overview

This model, itsnebulalol/Llama-3.2-3B-Instruct-Alpaca, is a 3.2 billion parameter instruction-tuned variant derived from the meta-llama/Llama-3.2-3B-Instruct base model. It was fine-tuned using the yahma/alpaca-cleaned dataset, a common choice for instruction-following tasks, and leveraged the Unsloth library for efficient training.

Key Characteristics

  • Base Model: Fine-tuned from meta-llama/Llama-3.2-3B-Instruct.
  • Parameter Count: 3.2 billion parameters.
  • Context Length: Supports a context window of 32,768 tokens.
  • Training: Utilized the yahma/alpaca-cleaned dataset and Unsloth for accelerated fine-tuning.
  • Usability: Compact enough for resource-constrained environments and smaller, less demanding applications.
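The characteristics above translate into a standard `transformers` workflow. The following is a minimal sketch, not a loading recipe documented by the author: only the model ID comes from this card, and the BF16 dtype mirrors the card's quantization field; `device_map="auto"` additionally assumes the `accelerate` package is installed.

```python
MODEL_ID = "itsnebulalol/Llama-3.2-3B-Instruct-Alpaca"


def load_model():
    """Load the tokenizer and BF16 weights (matching the card's Quant: BF16)."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,  # BF16 per the model card
        device_map="auto",           # requires the accelerate package
    )
    return tokenizer, model


if __name__ == "__main__":
    tokenizer, model = load_model()
    # Instruct models ship a chat template; use it rather than raw strings.
    messages = [{"role": "user", "content": "Summarize the Alpaca dataset in one sentence."}]
    inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
    output = model.generate(inputs, max_new_tokens=128)
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Downloading the ~6.4 GB of BF16 weights happens on the first `from_pretrained` call; subsequent runs load from the local Hugging Face cache.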

Intended Use Cases

This model is suitable for:

  • Small-scale applications: Ideal for projects where a compact yet capable instruction-following model is needed.
  • Instruction-following tasks: Benefits from its fine-tuning on the Alpaca dataset, making it adept at responding to user instructions.
  • Experimentation: A good starting point for developers looking to experiment with Llama-3.2-3B-Instruct derivatives or Unsloth-trained models.
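Since the fine-tuning data follows the classic Alpaca instruction/input/response layout, the sketch below renders that template as plain strings to illustrate the shape of the training examples. This is illustrative only: the template text is the standard Stanford Alpaca format that yahma/alpaca-cleaned follows, not something stated on this card, and at inference time the tokenizer's own chat template (`apply_chat_template`) is normally the safer choice.

```python
# Standard Alpaca prompt layout (assumption: matches the training data format).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task"
    "{ctx}. Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "{input_block}### Response:\n"
)


def format_alpaca(instruction: str, input_text: str = "") -> str:
    """Render an Alpaca-style prompt, with or without the optional input field."""
    if input_text:
        return ALPACA_TEMPLATE.format(
            ctx=", paired with an input that provides further context",
            instruction=instruction,
            input_block=f"### Input:\n{input_text}\n\n",
        )
    return ALPACA_TEMPLATE.format(ctx="", instruction=instruction, input_block="")


print(format_alpaca("Translate to French.", "Hello"))
```

The two branches mirror the two record types in Alpaca-style datasets: instructions that stand alone, and instructions paired with an input passage.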