dikcej/llama3-hukum-indo-forrag-v1

Text generation · Concurrency cost: 1 · Model size: 8B · Quant: FP8 · Context length: 8k · Published: Jan 19, 2026 · Architecture: Transformer

dikcej/llama3-hukum-indo-forrag-v1 is an 8-billion-parameter Llama 3 instruction-tuned causal language model, fine-tuned and converted to GGUF format using Unsloth. It supports a context length of 8192 tokens and is optimized for efficient deployment. Its primary use is general instruction-following, building on the Llama 3 architecture for robust performance.


Overview

dikcej/llama3-hukum-indo-forrag-v1 is an 8-billion-parameter, Llama 3-based, instruction-tuned language model. It was fine-tuned and converted into the GGUF format using Unsloth, which enables faster training and efficient deployment.

Key Characteristics

  • Architecture: Based on the Llama 3 8B Instruct model.
  • Format: Provided in GGUF format, including llama-3-8b-instruct.Q4_K_M.gguf.
  • Efficiency: Fine-tuned with Unsloth, which advertises roughly 2x faster training.
  • Deployment: Includes an Ollama Modelfile for straightforward deployment.
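The repository's Ollama Modelfile is not reproduced here, but a minimal one for this GGUF would look roughly like the following sketch (the FROM path assumes the Q4_K_M file named above; the stop token and parameters are typical Llama 3 Instruct settings, not confirmed contents of the shipped Modelfile):

```
FROM ./llama-3-8b-instruct.Q4_K_M.gguf
PARAMETER num_ctx 8192
PARAMETER stop "<|eot_id|>"
```

With a Modelfile in place, the model can be registered and run locally (the model name here is an arbitrary choice): `ollama create llama3-hukum-indo -f Modelfile`, then `ollama run llama3-hukum-indo`.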

Usage

This model is designed for general instruction-following tasks. It can be run with llama.cpp's command-line tools:

  • For text interactions: ./llama.cpp/llama-cli -hf dikcej/llama3-hukum-indo-forrag-v1 --jinja
  • Note: the base Llama 3 8B Instruct model is text-only, so llama.cpp's multimodal CLI (llama-mtmd-cli) does not apply to this model.
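The --jinja flag above makes llama-cli apply the chat template embedded in the GGUF. For driving the model at a lower level (e.g. via a raw completion endpoint), the standard Llama 3 Instruct prompt format can be built by hand; the helper below is a hypothetical illustration of that template, not part of this repository:

```python
def build_llama3_prompt(user_message: str,
                        system_message: str = "You are a helpful assistant.") -> str:
    """Assemble a prompt in the Llama 3 Instruct chat format.

    The special tokens (<|begin_of_text|>, <|start_header_id|>, <|eot_id|>)
    are the standard Llama 3 template markers; the trailing assistant header
    cues the model to generate its reply.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_message}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt("Ringkas poin-poin utama kontrak ini.")
```

Sending this string as a plain completion prompt should produce the same behavior as the templated chat path.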

When to Use This Model

Consider this model for applications that need an efficient, instruction-tuned Llama 3 variant for local deployment via GGUF and Ollama, particularly where fast fine-tuning and lightweight inference matter.