sajalmadan0909/llama-checkpoint-200-merged
Task: Text generation
Concurrency cost: 1
Model size: 8B
Quantization: FP8
Context length: 32k
Published: Mar 27, 2026
License: llama3.1
Architecture: Transformer
sajalmadan0909/llama-checkpoint-200-merged is an 8-billion-parameter language model produced by LoRA fine-tuning Meta-Llama-3.1-8B-Instruct. Its training data includes HydraIndicLM/hindi_alpaca_dolly_67k and yahma/alpaca-cleaned, suggesting an emphasis on instruction following with multilingual coverage, particularly Hindi. It is distributed as a merged checkpoint (the LoRA adapter weights folded back into the base model), so it can be loaded directly for inference without a separate adapter.
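A minimal sketch of how such a merged checkpoint might be loaded for local inference with Hugging Face `transformers`. The generation settings and the `build_messages`/`generate` helpers are illustrative assumptions, not part of the model card; only the model id comes from this page.

```python
MODEL_ID = "sajalmadan0909/llama-checkpoint-200-merged"


def build_messages(instruction: str) -> list[dict]:
    # Wrap a single user instruction in the chat-message format that
    # Llama-3.1-Instruct-style chat templates expect.
    return [{"role": "user", "content": instruction}]


def generate(instruction: str, max_new_tokens: int = 256) -> str:
    # Deferred import so the helper above stays usable without the
    # transformers package installed. Downloading the 8B checkpoint
    # requires substantial disk space and, practically, a GPU.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    inputs = tokenizer.apply_chat_template(
        build_messages(instruction),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)
```

Because the checkpoint is already merged, no `peft` adapter loading step is needed; the standard `AutoModelForCausalLM` path shown above is sufficient.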