Metaskepsis/EliteQwen
Text Generation · Concurrency Cost: 1 · Model Size: 7.6B · Quant: FP8 · Ctx Length: 32k · Published: Feb 27, 2025 · Architecture: Transformer
Metaskepsis/EliteQwen is a 7.6 billion parameter language model developed by Metaskepsis. This model features an exceptionally large context length of 131,072 tokens, making it suitable for processing and understanding extensive documents or complex conversational histories. Its primary strength lies in handling long-form text and maintaining coherence over extended interactions.
Popular Sampler Settings
Top 3 parameter combinations used by Featherless users for this model.
temperature: –
top_p: –
top_k: –
frequency_penalty: –
presence_penalty: –
repetition_penalty: –
min_p: –
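The specific user configurations did not load, but the parameters above map directly onto the sampling fields of an OpenAI-compatible chat completions request, which is how inference providers such as Featherless typically expose models. Below is a minimal sketch of building such a request payload; the values shown are illustrative placeholders chosen by the editor, not the actual top configurations, and extension fields like `top_k`, `repetition_penalty`, and `min_p` are assumed to be supported by the serving backend.

```python
def build_sampler_payload(prompt: str) -> dict:
    """Build an OpenAI-compatible chat completions payload with
    the sampler parameters listed above. All numeric values here
    are illustrative defaults, not the Featherless user configs."""
    return {
        "model": "Metaskepsis/EliteQwen",
        "messages": [{"role": "user", "content": prompt}],
        # Standard OpenAI-compatible sampling fields.
        "temperature": 0.7,        # randomness of token selection
        "top_p": 0.9,              # nucleus sampling cutoff
        "frequency_penalty": 0.0,  # penalize frequent tokens
        "presence_penalty": 0.0,   # penalize already-seen tokens
        # Extension fields common on open-model servers; support varies.
        "top_k": 40,               # restrict to the 40 most likely tokens
        "repetition_penalty": 1.1, # discourage verbatim repetition
        "min_p": 0.05,             # drop tokens below 5% of the top probability
    }

payload = build_sampler_payload("Summarize this document.")
print(sorted(payload.keys()))
```

The payload can then be POSTed to the provider's `/v1/chat/completions` endpoint with an API key; servers that do not recognize the extension fields will generally ignore or reject them, so check the provider's API reference before relying on them.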