yrc696/ETLCH-instruct_based_on_llama3.2-1b_taiwan_traditional_chinese is a 1-billion-parameter instruction-tuned language model based on the Llama 3.2 architecture. Developed by researchers from National Tsing Hua University, National Yang Ming Chiao Tung University, and University of Taipei, the model is specifically tuned for stable, higher-quality Traditional Chinese output. It significantly outperforms the base Llama 3.2-1B-Instruct model on Chinese text generation, making it suitable for research and further fine-tuning in Traditional Chinese NLP applications.
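Since the model is based on Llama 3.2 Instruct, it should load with the standard Hugging Face transformers chat workflow. The following is a minimal sketch, not an official snippet from the model card; the prompt text and generation settings are illustrative, and the chat template should be verified against the repository.

```python
# Minimal sketch: loading the model with Hugging Face transformers.
# Assumes the standard Llama 3.2 instruct chat template (unverified assumption).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yrc696/ETLCH-instruct_based_on_llama3.2-1b_taiwan_traditional_chinese"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Illustrative Traditional Chinese prompt: "Please introduce Taiwan's night
# market culture in Traditional Chinese."
messages = [{"role": "user", "content": "請用繁體中文介紹台灣的夜市文化。"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(
    inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.9
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```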
Most commonly used values from Featherless users
temperature
This setting influences the sampling randomness. Lower values make the model more deterministic; higher values introduce randomness. Zero is greedy sampling.
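Concretely, temperature divides the logits before the softmax, sharpening the distribution when below 1 and flattening it when above. A minimal sketch with illustrative values:

```python
import numpy as np

def softmax_with_temperature(logits, temperature):
    # Greedy sampling corresponds to the limit temperature -> 0 (pure argmax).
    scaled = np.asarray(logits, dtype=float) / temperature
    exp = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return exp / exp.sum()

logits = [2.0, 1.0, 0.5]
print(softmax_with_temperature(logits, 0.5))  # sharper: mass concentrates on the top token
print(softmax_with_temperature(logits, 1.5))  # flatter: more randomness
```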
top_p
This setting controls the cumulative probability of considered top tokens. Must be in (0, 1]. Set to 1 to consider all tokens.
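In other words, tokens are sorted by probability and only the smallest set whose cumulative mass reaches top_p survives, after which the survivors are renormalized. A sketch of that filter (illustrative, not the host's exact implementation):

```python
import numpy as np

def top_p_filter(probs, top_p):
    # Keep the smallest set of tokens whose cumulative probability >= top_p.
    probs = np.asarray(probs, dtype=float)
    order = np.argsort(probs)[::-1]                   # descending probability
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, top_p) + 1   # first index reaching top_p
    keep = order[:cutoff]
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()                  # renormalize survivors

probs = [0.5, 0.3, 0.15, 0.05]
print(top_p_filter(probs, 0.8))  # keeps only the 0.5 and 0.3 tokens
```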
top_k
This limits the number of top tokens to consider. Set to -1 to consider all tokens.
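A matching sketch for the top-k cutoff, including the -1 "consider everything" convention:

```python
import numpy as np

def top_k_filter(probs, top_k):
    # top_k = -1 conventionally means no limit: consider every token.
    probs = np.asarray(probs, dtype=float)
    if top_k == -1:
        return probs
    keep = np.argsort(probs)[::-1][:top_k]   # indices of the k most likely tokens
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()

probs = [0.5, 0.3, 0.15, 0.05]
print(top_k_filter(probs, 2))   # only the two most likely tokens survive
```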
frequency_penalty
This setting penalizes new tokens based on their frequency in the generated text. Values > 0 encourage new tokens; values < 0 encourage repetition. (All three penalties are illustrated together in a sketch after repetition_penalty.)
presence_penalty
This setting penalizes new tokens based on their presence in the generated text so far. Values > 0 encourage new tokens; values < 0 encourage repetition.
repetition_penalty
This setting penalizes new tokens based on their appearance in the prompt and generated text. Values > 1 encourage new tokens; values < 1 encourage repetition.
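The three penalties adjust the logits in different ways: frequency_penalty scales with how many times a token has already appeared, presence_penalty applies a flat penalty once a token has appeared at all, and repetition_penalty rescales the logit multiplicatively. The sketch below follows the commonly used OpenAI/vLLM-style formulas; it is an illustration under that assumption, not this host's exact implementation.

```python
import numpy as np

def apply_penalties(logits, token_counts, frequency_penalty=0.0,
                    presence_penalty=0.0, repetition_penalty=1.0):
    # token_counts[i]: how often token i appears in the text so far.
    # Simplification: one shared count array; per the descriptions above,
    # repetition_penalty would also count prompt tokens.
    logits = np.asarray(logits, dtype=float).copy()
    counts = np.asarray(token_counts)
    seen = counts > 0

    # Additive penalties: frequency grows with the count, presence is a
    # flat penalty applied once a token has appeared at all.
    logits -= frequency_penalty * counts
    logits -= presence_penalty * seen

    # Multiplicative repetition penalty: shrink positive logits and
    # amplify negative ones for tokens that already appeared.
    logits[seen] = np.where(logits[seen] > 0,
                            logits[seen] / repetition_penalty,
                            logits[seen] * repetition_penalty)
    return logits

logits = [2.0, -1.0, 0.5]
counts = [3, 1, 0]  # token 0 appeared 3 times, token 1 once, token 2 never
print(apply_penalties(logits, counts, frequency_penalty=0.5,
                      presence_penalty=0.2, repetition_penalty=1.2))
```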
min_p
This sets the minimum probability for a token to be considered, relative to the probability of the most likely token. Must be in [0, 1]. Set to 0 to disable.
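Because the threshold is relative, min_p adapts to how confident the model is: a token survives only if its probability is at least min_p times the top token's probability. A sketch of that filter:

```python
import numpy as np

def min_p_filter(probs, min_p):
    # Keep a token only if prob >= min_p * max(prob); min_p = 0 disables.
    probs = np.asarray(probs, dtype=float)
    threshold = min_p * probs.max()
    filtered = np.where(probs >= threshold, probs, 0.0)
    return filtered / filtered.sum()

probs = [0.5, 0.3, 0.15, 0.05]
print(min_p_filter(probs, 0.2))  # threshold 0.1: drops only the 0.05 token
```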