axelblenna/model
Text generation · Concurrency cost: 1 · Model size: 1B · Quant: BF16 · Context length: 32k · Published: Dec 6, 2025 · Architecture: Transformer
axelblenna/model is a 1 billion parameter instruction-tuned language model converted to GGUF format. It was fine-tuned with Unsloth, which speeds up training, and is designed for efficient deployment with tools such as `llama.cpp` and Ollama, offering a compact option for general language generation tasks.
Overview
This model, axelblenna/model, is a 1 billion parameter instruction-tuned language model. It has been converted to the GGUF format, making it suitable for efficient deployment and use with various inference engines.
Key Capabilities
- GGUF Format: Provided in GGUF format, specifically `llama-3.2-1b-instruct.Q4_K_M.gguf`, for compatibility with `llama.cpp` and other GGUF-compatible runtimes.
- Unsloth Fine-tuning: The model was fine-tuned using Unsloth, a library known for accelerating fine-tuning by up to 2x.
- Ollama Integration: Includes an Ollama Modelfile for simplified deployment and management within the Ollama ecosystem.
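As a sketch, a minimal Ollama Modelfile for a GGUF like this might look as follows. The bundled Modelfile may differ; the GGUF filename is taken from this card, while the `num_ctx` value and the SYSTEM prompt are illustrative placeholders:

```
# Modelfile (sketch): point Ollama at the local GGUF file
FROM ./llama-3.2-1b-instruct.Q4_K_M.gguf

# Optional runtime settings (values here are examples, not the model's defaults)
PARAMETER num_ctx 4096
SYSTEM "You are a helpful assistant."
```

With this file saved as `Modelfile`, the model can then be registered and run with `ollama create my-model -f Modelfile` followed by `ollama run my-model` (the name `my-model` is arbitrary).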
Good For
- Local Deployment: Ideal for users looking to run a compact instruction-tuned model locally using `llama.cpp` or Ollama.
- Resource-Constrained Environments: Its 1 billion parameter size and GGUF quantization make it suitable for devices with limited computational resources.
- Rapid Prototyping: The ease of deployment with Ollama and `llama.cpp` facilitates quick experimentation and integration into applications.
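For a quick local test without Ollama, a typical `llama.cpp` invocation might look like the sketch below. The binary name `llama-cli` and the flags shown assume a recent `llama.cpp` build; the GGUF filename is taken from this card, and the context size and temperature are example values:

```shell
# Sketch: interactive chat with the GGUF using llama.cpp's CLI
./llama-cli \
  -m llama-3.2-1b-instruct.Q4_K_M.gguf \
  -c 4096 \
  --temp 0.7 \
  -cnv
```

Here `-m` selects the model file, `-c` sets the context window, and `-cnv` starts conversation mode; larger `-c` values trade memory for longer context, up to the model's 32k limit.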