BashCache/EncoderDecoder-Qwen3-1.7B-Full-Finetuned
Text generation · Concurrency cost: 1 · Model size: 2B · Quantization: BF16 · Context length: 32k · Published: Feb 4, 2026 · License: MIT · Architecture: Transformer · Open weights · Warm

BashCache/EncoderDecoder-Qwen3-1.7B-Full-Finetuned is a 2-billion-parameter encoder-decoder model based on the Qwen3-1.7B architecture and fine-tuned by BashCache. Its 40,960-token context length makes it suitable for processing extensive inputs. The model is optimized for tasks requiring logical explanations and grammatical understanding, leveraging its training on the causality-grammar/logic_explanations dataset.
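A minimal usage sketch with the Hugging Face `transformers` library is shown below. It is not part of the model card: the `AutoModelForSeq2SeqLM` class choice is an assumption based on the "encoder-decoder" label (check the repo's `config.json` before use), and the 32k context budget comes from the metadata above. The `fits_context` helper is a hypothetical convenience for keeping prompt plus generation inside that window.

```python
MODEL_ID = "BashCache/EncoderDecoder-Qwen3-1.7B-Full-Finetuned"
CTX_LEN = 32_768  # advertised 32k context window (assumption; see model config)


def fits_context(n_prompt_tokens: int, max_new_tokens: int) -> bool:
    """Rough check that prompt tokens plus generated tokens stay in budget."""
    return n_prompt_tokens + max_new_tokens <= CTX_LEN


def explain(prompt: str, max_new_tokens: int = 256) -> str:
    """Generate a logic/grammar explanation (downloads the checkpoint)."""
    # Lazy import so the helper above stays usable without transformers installed.
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID, torch_dtype="bfloat16")
    inputs = tokenizer(
        prompt, return_tensors="pt", truncation=True, max_length=CTX_LEN
    )
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

For example, `explain("Why is 'the dogs barks' ungrammatical?")` would return a generated explanation string; long documents should be checked with `fits_context` first.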
