vidhyavarshu/Llama-3.1-8b-VH
Text generation · Concurrency cost: 1 · Model size: 8B · Quantization: FP8 · Context length: 32k · Published: Dec 23, 2025 · License: apache-2.0 · Architecture: Transformer · Open weights · Cold

vidhyavarshu/Llama-3.1-8b-VH is an 8-billion-parameter language model published by vidhyavarshu, fine-tuned from unsloth/meta-llama-3.1-8b-bnb-4bit. It was trained with Unsloth and Hugging Face's TRL library, which the card reports gave roughly 2x faster training. The model targets general language understanding and generation tasks, leveraging the Llama 3.1 architecture and a 32,768-token context length.
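Since the card lists this as a text-generation model with a 32k context window, a typical way to try it is through the Hugging Face `transformers` pipeline API. The following is a minimal sketch under that assumption: the sampling settings, dtype choice, and the `fits_context` helper are illustrative additions, not values or utilities published with the model.

```python
# Minimal sketch of running vidhyavarshu/Llama-3.1-8b-VH for text generation
# with the Hugging Face transformers pipeline API. Settings below are
# illustrative assumptions, not values published with the model card.
from transformers import pipeline

MODEL_ID = "vidhyavarshu/Llama-3.1-8b-VH"
MAX_CONTEXT = 32768  # 32k-token context length stated on the card


def fits_context(prompt_tokens: int, max_new_tokens: int,
                 ctx: int = MAX_CONTEXT) -> bool:
    """Return True if the prompt plus the requested completion fits
    inside the model's context window."""
    return prompt_tokens + max_new_tokens <= ctx


if __name__ == "__main__":
    # Downloads the 8B weights on first run; a GPU is needed for
    # practical generation speed.
    generator = pipeline(
        "text-generation",
        model=MODEL_ID,
        torch_dtype="auto",   # assumption: let transformers pick the dtype
        device_map="auto",    # assumption: place layers automatically
    )
    if fits_context(prompt_tokens=50, max_new_tokens=256):
        out = generator(
            "Explain what a context window is in one paragraph.",
            max_new_tokens=256,
        )
        print(out[0]["generated_text"])
```

The `__main__` guard keeps the heavy model download out of import time; the budget check simply guards against requests that would exceed the 32,768-token window.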