ray0rf1re/Nix-1
Text Generation · Concurrency Cost: 1 · Model Size: 3.1B · Quant: BF16 · Ctx Length: 32k · Published: Dec 11, 2025 · License: MIT · Architecture: Transformer · Open Weights
Nix-1 is a 3.1-billion-parameter instruction-tuned causal language model developed by ray0rf1re, built on the Qwen2.5-3B architecture. It supports a context length of 32,768 tokens and is intended for general-purpose language tasks, inheriting the efficiency of its Qwen2.5 foundation.
Overview
Nix-1 is a 3.1-billion-parameter instruction-tuned language model from ray0rf1re, built upon the Qwen2.5-3B base model and inheriting its architectural strengths and efficiency. It supports a context window of 32,768 tokens, allowing it to process and generate longer sequences of text.
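Assuming the weights are hosted under the repo id shown in the title (not confirmed by this card), a Qwen2.5-based model like Nix-1 can typically be loaded with the standard transformers causal-LM classes. A minimal sketch:

```python
# Minimal sketch: loading Nix-1 with Hugging Face transformers.
# The repo id "ray0rf1re/Nix-1" is an assumption taken from the card title;
# since the model is Qwen2.5-3B-based, the standard causal-LM classes apply.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ray0rf1re/Nix-1"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 quant listed above
    device_map="auto",
)

# Instruction-tuned models expect chat-formatted input; Qwen2.5 ships a chat template.
messages = [{"role": "user", "content": "Explain what a context window is in one paragraph."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Because Nix-1 inherits the Qwen2.5 chat template, formatting prompts through apply_chat_template rather than raw strings should give the behavior the instruction tuning targets.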
Key Capabilities
- Instruction Following: Designed to respond effectively to a wide range of user instructions.
- Extended Context: Benefits from a 32K-token context window, suitable for tasks requiring extensive input or detailed outputs.
- English Language Processing: Primarily focused on English language understanding and generation.
Good For
- Applications requiring a compact yet capable language model.
- Tasks that benefit from a large context window, such as summarizing long documents or powering complex conversational agents (see the sketch after this list).
- General text generation and instruction-based tasks where the Qwen2.5 architecture is a good fit.
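To make the long-document use case concrete, here is a hypothetical summarization call that stays within the 32,768-token window. The input file name is invented for illustration, and loading mirrors the sketch above:

```python
# Hypothetical long-document summarization with Nix-1's 32k context window.
# File name and repo id are assumptions, not part of this model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ray0rf1re/Nix-1"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

with open("long_report.txt") as f:  # hypothetical input document
    document = f.read()

messages = [{
    "role": "user",
    "content": "Summarize the following report in five bullet points:\n\n" + document,
}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Keep the prompt short enough to leave room for the summary inside the 32,768-token window.
max_new = 512
if inputs.shape[-1] + max_new > 32768:
    raise ValueError("document exceeds the usable context window")

out = model.generate(inputs, max_new_tokens=max_new)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```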