weiren119/traditional_chinese_qlora_llama2_merged
Text generation · Model size: 7B · Quantization: FP8 · Context length: 4k · License: apache-2.0 · Architecture: Transformer · Concurrency cost: 1 · Open weights
The weiren119/traditional_chinese_qlora_llama2_merged model is a 7-billion-parameter Llama 2 chat model fine-tuned by weiren119 using QLoRA on a traditional Chinese instruction dataset. It specializes in generating responses in traditional Chinese, having been trained on the NTU NLP Lab's traditional Chinese translation of the Stanford Alpaca 52k instruction dataset. Fine-tuning completed in approximately 9 hours on a single RTX 3090 GPU, making the model a practical option for applications requiring traditional Chinese language understanding and generation.
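Since this is a Llama 2 chat fine-tune, prompts are typically formatted with the standard Llama 2 chat template ([INST] / <<SYS>> markers). A minimal sketch of building such a prompt, assuming the merged model follows the base Llama 2 chat convention (the helper name and the example system/user messages are illustrative):

```python
def build_llama2_prompt(system: str, user: str) -> str:
    # Llama 2 chat format: the system message is wrapped in <<SYS>> tags,
    # and the whole first turn is wrapped in [INST] ... [/INST].
    return (
        f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n"
        f"{user} [/INST]"
    )

# Illustrative traditional Chinese system and user messages:
prompt = build_llama2_prompt(
    "你是一個使用繁體中文回答的助理。",   # "You are an assistant that answers in traditional Chinese."
    "請介紹台北的著名景點。",             # "Please introduce famous sights in Taipei."
)
print(prompt)
```

The resulting string can then be passed to any text-generation backend serving the model; the generated continuation after `[/INST]` is the model's reply.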