Dr4kl3s/Qwen2.5-0.5B-Instruct_fine_tuned_truthfulqa_eng_merged
Text Generation · Concurrency cost: 1 · Model size: 0.5B · Quantization: BF16 · Context length: 32k · Architecture: Transformer
Dr4kl3s/Qwen2.5-0.5B-Instruct_fine_tuned_truthfulqa_eng_merged is a 0.5 billion parameter instruction-tuned language model based on the Qwen2.5 architecture. It targets English-language tasks and supports a context length of 32,768 tokens. Its main differentiator is fine-tuning on the TruthfulQA dataset, with the goal of improving truthfulness and reducing hallucination in responses.
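As an instruction-tuned Qwen2.5 model, it expects prompts in the ChatML-style conversation format; in practice you would load the tokenizer with Hugging Face `transformers` and call `tokenizer.apply_chat_template`, which renders this format for you. The sketch below shows the assumed template by hand; `format_chatml` is a hypothetical helper, not part of any library.

```python
def format_chatml(messages):
    """Render a list of {"role", "content"} messages in the ChatML
    style used by Qwen2.5 instruction-tuned models (an assumption
    based on the base model family), ending with an open assistant
    turn for the model to complete."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
        for m in messages
    ]
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

# Example: a TruthfulQA-style question where a truthfulness-tuned
# model should avoid the common-misconception answer.
prompt = format_chatml([
    {"role": "system", "content": "You are a helpful, truthful assistant."},
    {"role": "user", "content": "What happens if you crack your knuckles a lot?"},
])
print(prompt)
```

Passing the templated string to the merged model (e.g. via a `transformers` `text-generation` pipeline) then produces the completion for the open assistant turn.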