BanglaLLM/Bangla-s1k-qwen-2.5-3B-Instruct
Text Generation · Concurrency Cost: 1 · Model Size: 3.1B · Quant: BF16 · Context Length: 32k · Published: Mar 8, 2025 · Architecture: Transformer

BanglaLLM/Bangla-s1k-qwen-2.5-3B-Instruct is a 3.1-billion-parameter instruction-tuned causal language model, fine-tuned from Qwen/Qwen2.5-3B-Instruct. Developed by BanglaLLM, the model specializes in understanding and generating Bengali text; it was fine-tuned on the s1k-Bangla-qwen dataset to strengthen its performance on Bengali-language tasks.
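Because the base model is Qwen2.5-3B-Instruct, the fine-tune should load with the standard Hugging Face transformers chat workflow. The following is a minimal sketch, assuming the usual AutoModelForCausalLM / AutoTokenizer API and that this checkpoint inherits the base model's chat template; the Bengali prompt ("Where is the capital of Bangladesh?") is purely illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BanglaLLM/Bangla-s1k-qwen-2.5-3B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 precision listed above
    device_map="auto",           # requires the accelerate package
)

# Format a single-turn chat prompt with the model's chat template.
messages = [
    {"role": "user", "content": "বাংলাদেশের রাজধানী কোথায়?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```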


Popular Sampler Settings

The three parameter combinations most used by Featherless users for this model are built from the sampler parameters listed below; a sketch of how to pass them through an API call follows the list.

temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, min_p
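The actual user configurations are not reproduced here, so the sketch below uses illustrative values only. It assumes Featherless exposes an OpenAI-compatible chat completions endpoint (the base URL shown is an assumption) and that parameters outside the OpenAI spec (top_k, repetition_penalty, min_p) can be forwarded via the openai client's extra_body.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="BanglaLLM/Bangla-s1k-qwen-2.5-3B-Instruct",
    messages=[{"role": "user", "content": "বাংলায় একটি ছোট গল্প লিখুন।"}],
    # Illustrative sampler values, not a recorded popular configuration.
    temperature=0.7,
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    extra_body={
        # Non-OpenAI-spec parameters, commonly accepted by compatible servers.
        "top_k": 40,
        "repetition_penalty": 1.05,
        "min_p": 0.05,
    },
)
print(response.choices[0].message.content)
```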