zitaqiy/Llama-3.1-8B-Alpaca-Indo-LR2e4
TEXT GENERATION · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 32k · Published: Apr 7, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

zitaqiy/Llama-3.1-8B-Alpaca-Indo-LR2e4 is an 8-billion-parameter causal language model based on the Llama 3.1 architecture, developed by zitaqiy and fine-tuned from unsloth/llama-3.1-8b-unsloth-bnb-4bit. As the name suggests, it appears to be fine-tuned on Indonesian Alpaca-style instruction data with a learning rate of 2e-4. The model was trained with Unsloth and Hugging Face's TRL library, a combination that Unsloth reports delivers roughly 2x faster fine-tuning. It is intended for general language generation tasks.
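Since the model name indicates an Alpaca-style instruction fine-tune, prompts presumably follow the standard Alpaca template. The sketch below shows how such a prompt might be constructed; the exact template is an assumption based on the model name (the model card does not specify it), and the commented-out generation call uses the standard `transformers` API.

```python
# Hypothetical sketch: build an Alpaca-style instruction prompt for this
# model. The template is assumed from the "Alpaca" in the model name, not
# confirmed by the model card.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Format a user instruction into the assumed Alpaca prompt layout."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

# Example with an Indonesian instruction, matching the model's
# Indonesian fine-tuning focus.
prompt = build_prompt("Jelaskan apa itu fotosintesis.")
print(prompt)

# To actually run inference, one would load the model with transformers,
# e.g. (requires GPU memory for an 8B model; not executed here):
#
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tok = AutoTokenizer.from_pretrained("zitaqiy/Llama-3.1-8B-Alpaca-Indo-LR2e4")
# model = AutoModelForCausalLM.from_pretrained(
#     "zitaqiy/Llama-3.1-8B-Alpaca-Indo-LR2e4", device_map="auto"
# )
# out = model.generate(**tok(prompt, return_tensors="pt").to(model.device),
#                      max_new_tokens=256)
# print(tok.decode(out[0], skip_special_tokens=True))
```

The response section is left empty so the model completes it during generation; anything after `### Response:` in the training data was the target output.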
