CYFRAGOVPL/Llama-PLLuM-70B-instruct
Text generation · Concurrency cost: 4 · Model size: 70B · Quantization: FP8 · Context length: 32k · Published: Feb 6, 2025 · License: llama3.1 · Architecture: Transformer
CYFRAGOVPL/Llama-PLLuM-70B-instruct is a 70 billion parameter instruction-tuned large language model from the PLLuM family, developed by a consortium of Polish institutions including Politechnika Wrocławska. Built upon the Llama 3.1 architecture, it is specialized in Polish and other Slavic/Baltic languages, with additional English data for generalization. This model excels at generating contextually coherent text and assisting in tasks like question answering and summarization, particularly for Polish public administration and general Polish-language applications.
Popular Sampler Settings

The three most common parameter combinations used by Featherless users for this model draw on the following sampler parameters (exact values vary per configuration):

- temperature
- top_p
- top_k
- frequency_penalty
- presence_penalty
- repetition_penalty
- min_p
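As a sketch of how sampler parameters like these map onto an OpenAI-style chat-completion request, the snippet below builds a request body for the model. The endpoint path, prompt, and all parameter values here are illustrative assumptions, not defaults published for this model.

```python
import json

# Illustrative request body for an OpenAI-compatible chat-completions API.
# All sampler values below are assumptions, not published defaults.
payload = {
    "model": "CYFRAGOVPL/Llama-PLLuM-70B-instruct",
    "messages": [
        # "Summarize the following text in two sentences." (Polish)
        {"role": "user", "content": "Streść poniższy tekst w dwóch zdaniach."}
    ],
    "temperature": 0.7,
    "top_p": 0.9,
    "max_tokens": 512,
}

# Inspect the JSON body; in practice you would POST it to a
# /v1/chat/completions endpoint with your API key in the headers.
print(json.dumps(payload, indent=2))
```

Parameters such as repetition_penalty and min_p are not part of the core OpenAI schema; OpenAI-compatible servers that support them typically accept them as extra top-level fields in the same body.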