coder3101/Cydonia-24B-v4.3-vision-heretic
Text Generation | Vision | Open Weights | Warm
Concurrency Cost: 2 | Model Size: 24B | Quant: FP8 | Ctx Length: 32k | Published: Feb 7, 2026 | License: apache-2.0 | Architecture: Transformer
coder3101/Cydonia-24B-v4.3-vision-heretic is a 24-billion-parameter multimodal large language model with a 32,768-token context length. Developed by coder3101, it adds vision capability by grafting the Pixtral vision encoder and multimodal projector from Mistral-Small-3.2-24B-Instruct-2506 onto the Cydonia-24B-v4.3-heretic text model. The resulting Mistral-based model is intended for tasks that require both text and image understanding.
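The grafting approach described above can be sketched schematically: keep the text model's weights and copy only the vision-related entries from the donor checkpoint's state dict. This is a minimal illustration, not the author's actual merge script, and the key prefixes (`vision_tower.`, `multi_modal_projector.`) are assumptions based on common Mistral/Pixtral checkpoint layouts:

```python
# Schematic sketch of vision-encoder grafting: merge a donor checkpoint's
# vision components into a text-only model's state dict.
# The key prefixes below are assumptions, not confirmed checkpoint names.
VISION_PREFIXES = ("vision_tower.", "multi_modal_projector.")

def graft_vision(text_sd: dict, donor_sd: dict) -> dict:
    """Return a merged state dict: all text-model weights, plus the
    donor's vision encoder and projector weights."""
    merged = dict(text_sd)  # keep every language-model weight as-is
    for key, tensor in donor_sd.items():
        if key.startswith(VISION_PREFIXES):
            merged[key] = tensor  # pull vision components from the donor
    return merged

# Toy example with scalar stand-ins for weight tensors:
text_model = {"language_model.embed": 1.0, "language_model.lm_head": 2.0}
donor = {
    "language_model.embed": 9.0,          # ignored: text weights are kept
    "vision_tower.patch_embed": 3.0,      # copied from the donor
    "multi_modal_projector.linear": 4.0,  # copied from the donor
}
merged = graft_vision(text_model, donor)
```

In a real merge the dict values would be tensors loaded from the two safetensors checkpoints, but the key-filtering logic is the same.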
Popular Sampler Settings
Top 3 parameter combinations used by Featherless users for this model.
temperature: – | top_p: – | top_k: – | frequency_penalty: – | presence_penalty: – | repetition_penalty: – | min_p: –