NorHsangPha/merge_llama3_adapter_Shan
Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 8k · Published: Jul 8, 2024 · License: llama3 · Architecture: Transformer · Status: Warm
NorHsangPha/merge_llama3_adapter_Shan is an 8 billion parameter Llama 3 model, developed by Meta and further fine-tuned by NorHsangPha for the Shan language. This instruction-tuned model leverages an optimized transformer architecture and Grouped-Query Attention (GQA) for improved inference scalability. It is primarily designed for dialogue use cases, excelling in assistant-like chat, and has been adapted for Shan language tasks through fine-tuning on the oasst1_shan_translation dataset.
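Grouped-Query Attention, mentioned above, lets several query heads share a single key/value head, shrinking the KV cache and speeding up inference. The sketch below illustrates the idea in plain NumPy for a single query position; the head counts (32 query heads, 8 KV heads) match what is commonly reported for Llama 3 8B but are an assumption here, and `gqa_attention` is a hypothetical helper, not part of any library.

```python
import numpy as np

def gqa_attention(q, k, v, n_q_heads, n_kv_heads):
    """Grouped-Query Attention for one query position.

    q: (n_q_heads, d)      -- one query vector per query head
    k: (n_kv_heads, seq, d) -- shared key heads
    v: (n_kv_heads, seq, d) -- shared value heads
    """
    group = n_q_heads // n_kv_heads  # query heads per shared KV head
    d = q.shape[-1]
    outputs = []
    for h in range(n_q_heads):
        kv = h // group                      # map query head to its KV head
        scores = k[kv] @ q[h] / np.sqrt(d)   # (seq,)
        w = np.exp(scores - scores.max())    # stable softmax
        w /= w.sum()
        outputs.append(w @ v[kv])            # (d,)
    return np.stack(outputs)                 # (n_q_heads, d)

# Example with hypothetical Llama-3-8B-like head counts
q = np.random.rand(32, 128)
k = np.random.rand(8, 16, 128)
v = np.random.rand(8, 16, 128)
out = gqa_attention(q, k, v, n_q_heads=32, n_kv_heads=8)
```

With 8 KV heads instead of 32, the KV cache is a quarter the size of full multi-head attention, which is the scalability benefit the model card refers to.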
Popular Sampler Settings
The three most-used parameter combinations among Featherless users for this model cover the following sampler parameters (specific values vary per configuration):
- temperature
- top_p
- top_k
- frequency_penalty
- presence_penalty
- repetition_penalty
- min_p
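These sampler parameters are typically passed in the body of an OpenAI-compatible completion request. The sketch below shows how such a payload might look; the endpoint URL and all numeric values are illustrative assumptions, not the actual top configurations used on Featherless.

```python
# Hypothetical sampler configuration for an OpenAI-compatible
# completions request. Values are illustrative only.
payload = {
    "model": "NorHsangPha/merge_llama3_adapter_Shan",
    "prompt": "Translate the following sentence into Shan:",
    "max_tokens": 256,
    # Sampler parameters listed on the model page:
    "temperature": 0.7,        # assumed value
    "top_p": 0.9,              # assumed value
    "top_k": 40,               # assumed value
    "frequency_penalty": 0.0,  # assumed value
    "presence_penalty": 0.0,   # assumed value
    "repetition_penalty": 1.1, # assumed value
    "min_p": 0.05,             # assumed value
}

# The payload would be POSTed as JSON to the provider's
# /v1/completions endpoint (URL assumed, check provider docs):
# requests.post("https://api.featherless.ai/v1/completions",
#               json=payload, headers={"Authorization": "Bearer <key>"})
```

Lower `temperature` and `top_p` values make output more deterministic; `repetition_penalty` and `min_p` are extensions beyond the base OpenAI parameter set, so support depends on the serving backend.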