ytu-ce-cosmos/Turkish-Llama-8b-DPO-v0.1
Text generation · Concurrency cost: 1 · Model size: 8B · Quant: FP8 · Context length: 8k · Published: Sep 4, 2024 · License: llama3 · Architecture: Transformer
ytu-ce-cosmos/Turkish-Llama-8b-DPO-v0.1 is an 8-billion-parameter instruction-tuned model, aligned with DPO, developed by the COSMOS AI Research Group at Yildiz Technical University. It is an advanced iteration of CosmosLLaMa, designed specifically for Turkish text generation: it produces coherent, contextually relevant continuations and follows instructions given in Turkish, making it well suited for applications that require robust Turkish language processing.
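Since the model derives from Llama 3, instruction prompts presumably need the Llama 3 chat template. The sketch below assembles that template by hand as a minimal sketch; the special-token layout is an assumption inherited from the Llama 3 base and should be verified against the model tokenizer's own chat template (e.g. via `AutoTokenizer.apply_chat_template`).

```python
def build_llama3_prompt(user_message: str, system_message: str = "") -> str:
    """Assemble a single-turn prompt in the Llama 3 chat format.

    Assumption: this model inherits the standard Llama 3 special
    tokens (<|begin_of_text|>, <|start_header_id|>, <|eot_id|>).
    """
    prompt = "<|begin_of_text|>"
    if system_message:
        prompt += (
            "<|start_header_id|>system<|end_header_id|>\n\n"
            f"{system_message}<|eot_id|>"
        )
    prompt += (
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
    )
    # Leave the assistant header open so generation continues from here.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt


# Example: a Turkish instruction ("What is the capital of Turkey?")
prompt = build_llama3_prompt("Türkiye'nin başkenti neresidir?")
```

In practice you would pass the resulting string to the model's text-generation endpoint rather than building it by hand; the manual version is shown only to make the expected prompt structure explicit.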
Popular Sampler Settings
The most popular sampler configurations used by Featherless users for this model cover the following parameters: temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, and min_p. (The specific values did not load and are not reproduced here.)
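The parameters listed above are standard sampling controls, and the sketch below shows how they might be packed into a request body, assuming an OpenAI-compatible completions endpoint. The numeric values are purely illustrative placeholders, not the actual Featherless user configurations.

```python
# Illustrative sampler configuration. Parameter names match the
# list above; the values are hypothetical examples, not the real
# popular settings for this model.
sampler_settings = {
    "temperature": 0.7,        # randomness of sampling
    "top_p": 0.9,              # nucleus sampling cutoff
    "top_k": 40,               # restrict to the k most likely tokens
    "frequency_penalty": 0.0,  # penalize tokens by prior frequency
    "presence_penalty": 0.0,   # penalize tokens already present
    "repetition_penalty": 1.1, # multiplicative repetition penalty
    "min_p": 0.05,             # drop tokens below this relative prob.
}

# Hypothetical request body for an OpenAI-compatible API.
payload = {
    "model": "ytu-ce-cosmos/Turkish-Llama-8b-DPO-v0.1",
    "prompt": "Türkiye'nin başkenti neresidir?",
    "max_tokens": 128,
    **sampler_settings,
}
```

Which of these keys a given endpoint actually honors varies by provider (for instance, `repetition_penalty` and `min_p` are not part of the core OpenAI API), so unsupported parameters may be silently ignored or rejected.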