Terisara/my_model_p

TEXT GENERATION

  • Concurrency Cost: 1
  • Model Size: 3.2B
  • Quant: BF16
  • Ctx Length: 32k
  • Published: Feb 23, 2026
  • Architecture: Transformer

Terisara/my_model_p is a 3-billion-parameter instruction-tuned language model based on the Llama 3.2 architecture and converted to GGUF format. The model was fine-tuned with Unsloth, which speeds up training and eases deployment. It targets general instruction-following tasks and is suitable for local inference via tools such as llama-cli or Ollama.


Overview

Terisara/my_model_p is an instruction-tuned language model, specifically a llama-3.2-3b-instruct variant, provided in the efficient GGUF format. It was fine-tuned and converted using Unsloth, a library that accelerates fine-tuning by up to 2x.

Key Capabilities

  • Instruction Following: Designed to respond effectively to user instructions.
  • GGUF Format: Optimized for local inference on various hardware, compatible with tools like llama-cli and Ollama.
  • Efficient Training: Fine-tuned with Unsloth, whose optimizations reduce training time and memory use during fine-tuning.
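As a sketch of local inference with llama.cpp's llama-cli (the GGUF filename below is an assumption; substitute the actual file published in the repository):

```shell
# Interactive chat with llama-cli (ships with llama.cpp).
# my_model_p-BF16.gguf is a placeholder name for the repo's GGUF file.
# --ctx-size 32768 matches the model's 32k context length;
# -cnv enables conversation mode, which applies the chat template.
llama-cli \
  -m ./my_model_p-BF16.gguf \
  --ctx-size 32768 \
  -cnv \
  -p "You are a helpful assistant."
```

With `-cnv`, the `-p` string is used as the system prompt and the session then reads user turns from stdin.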

Good For

  • Local Deployment: Ideal for developers looking to run LLMs locally with tools like llama-cli or Ollama, thanks to its GGUF conversion.
  • Instruction-Based Tasks: Suitable for applications requiring the model to follow specific commands or answer questions based on instructions.
  • Resource-Efficient Inference: Its 3 billion parameter size and GGUF format make it a good choice for environments with limited computational resources.
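For Ollama, one way to serve the model is a minimal Modelfile pointing at the downloaded GGUF (a config sketch; the filename and parameter values are assumptions, not taken from the repository):

```
# Modelfile sketch: FROM points at the local GGUF artifact.
FROM ./my_model_p-BF16.gguf
# Match the model's 32k context window.
PARAMETER num_ctx 32768
```

Register and run it with `ollama create my_model_p -f Modelfile` followed by `ollama run my_model_p`.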