j05hr3d/Llama-3.2-1B-Instruct-C_M_T-DOLLY

Hugging Face
Text Generation · Concurrency Cost: 1 · Model Size: 1B · Quant: BF16 · Ctx Length: 32k · Published: Mar 28, 2026 · Architecture: Transformer · Warm

j05hr3d/Llama-3.2-1B-Instruct-C_M_T-DOLLY is a 1-billion-parameter instruction-tuned language model, fine-tuned by j05hr3d from the Meta Llama-3.2-1B-Instruct base model. It was trained with the TRL framework using supervised fine-tuning (SFT) and offers a 32,768-token context length. The model targets general text generation, applying its instruction-following abilities to conversational and prompt-based applications.


Overview

j05hr3d/Llama-3.2-1B-Instruct-C_M_T-DOLLY is a 1-billion-parameter instruction-tuned model built on the Meta Llama-3.2-1B-Instruct base. It was fine-tuned by j05hr3d using the TRL (Transformer Reinforcement Learning) library with Supervised Fine-Tuning (SFT) to strengthen its instruction-following behavior. The model retains the base model's 32,768-token context length, making it suitable for long prompts and coherent, extended responses.
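Since the model follows the standard Llama 3.2 chat format, it can be loaded with the Hugging Face `transformers` text-generation pipeline. The sketch below is illustrative, not from the model card itself: the system/user messages and sampling parameters are assumptions, and the BF16 dtype mirrors the quantization listed above.

```python
# Minimal usage sketch (assumes `transformers` and `torch` are installed).
# The chat-style message list follows the convention the pipeline expects
# for instruction-tuned Llama models.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the benefits of small language models."},
]

if __name__ == "__main__":
    import torch
    from transformers import pipeline

    pipe = pipeline(
        "text-generation",
        model="j05hr3d/Llama-3.2-1B-Instruct-C_M_T-DOLLY",
        torch_dtype=torch.bfloat16,  # matches the BF16 precision listed above
        device_map="auto",
    )
    out = pipe(messages, max_new_tokens=256)
    # The pipeline appends the assistant turn to the message list.
    print(out[0]["generated_text"][-1]["content"])
```

At 1B parameters in BF16, the weights occupy roughly 2.5 GB, so the model fits comfortably on a single consumer GPU or even CPU for experimentation.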

Key Capabilities

  • Instruction Following: Optimized for understanding and responding to user instructions.
  • Text Generation: Capable of generating diverse and contextually relevant text based on prompts.
  • Extended Context: Supports a 32,768-token context window for handling complex or lengthy inputs.

Good For

  • Conversational AI: Developing chatbots or interactive agents that follow specific directives.
  • Content Creation: Generating various forms of text, from creative writing to factual summaries.
  • Prototyping: A lightweight yet capable model for experimenting with instruction-tuned LLMs.