v000000/Qwen2.5-Lumen-14B
Text Generation · Open Weights · Warm

- Model size: 14.8B parameters
- Quantization: FP8
- Context length (as served): 32k
- Concurrency cost: 1
- Published: Sep 20, 2024
- License: apache-2.0
- Architecture: Transformer

Qwen2.5-Lumen-14B is a 14.8-billion-parameter language model based on the Qwen2.5 architecture, fine-tuned with direct preference optimization (DPO) for approximately three epochs and built from a merge of multiple DPO checkpoints and SLERP variants (a sketch of SLERP follows below). The model specializes in prompt adherence, story writing, and roleplay. The underlying Qwen2.5 architecture supports a context length of up to 131,072 tokens, which suits long-form narrative generation and character-based interaction, though the model is served here with a 32k context window.
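The card does not spell out the merge recipe, so the following is only a sketch of standard spherical linear interpolation (SLERP) as commonly applied by merge tools: each pair of corresponding weight tensors is flattened and interpolated along the arc between them. The function name and the per-tensor treatment are illustrative assumptions, not details from the model card.

```python
import torch

def slerp(w1: torch.Tensor, w2: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors at fraction t."""
    # Treat each tensor as a single high-dimensional vector.
    v1, v2 = w1.flatten().float(), w2.flatten().float()
    # Angle between the two weight vectors.
    cos_omega = torch.clamp(torch.dot(v1, v2) / (v1.norm() * v2.norm() + eps), -1.0, 1.0)
    omega = torch.acos(cos_omega)
    # Nearly parallel vectors: fall back to ordinary linear interpolation.
    if omega.abs() < eps:
        return (1 - t) * w1 + t * w2
    sin_omega = torch.sin(omega)
    out = (torch.sin((1 - t) * omega) / sin_omega) * v1 + (torch.sin(t * omega) / sin_omega) * v2
    return out.reshape(w1.shape).to(w1.dtype)
```

Unlike plain weight averaging, SLERP preserves the angular geometry between the two checkpoints, which is why merge tools often prefer it for blending fine-tunes.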


Popular Sampler Settings

The three most popular sampler combinations among Featherless users for this model tune the parameters below; a request sketch using these parameters follows the list.

- temperature: scales the output distribution; lower values make sampling more deterministic
- top_p: nucleus sampling; samples only from the smallest token set whose cumulative probability exceeds p
- top_k: samples only from the k highest-probability tokens
- frequency_penalty: penalizes tokens in proportion to how often they have already appeared
- presence_penalty: penalizes any token that has appeared at least once
- repetition_penalty: multiplicative penalty applied to previously generated tokens
- min_p: discards tokens whose probability falls below a set fraction of the most likely token's probability
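
Featherless exposes an OpenAI-compatible API, so these samplers map onto ordinary request parameters. Below is a minimal sketch using the `openai` Python client; the base URL, the placeholder API key, and all of the sampler values are illustrative assumptions, not the actual top user configurations.

```python
from openai import OpenAI

# Assumed OpenAI-compatible endpoint; check the Featherless docs for the current base URL.
client = OpenAI(base_url="https://api.featherless.ai/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="v000000/Qwen2.5-Lumen-14B",
    messages=[{"role": "user", "content": "Write the opening scene of a mystery novella."}],
    # Samplers in the standard OpenAI schema (values are illustrative only).
    temperature=0.8,
    top_p=0.95,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    # Samplers outside the OpenAI schema are passed through the raw request body.
    extra_body={"top_k": 40, "min_p": 0.05, "repetition_penalty": 1.05},
)
print(response.choices[0].message.content)
```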