QuantaSparkLabs/NYXIS-1.1B
Text Generation · Concurrency Cost: 1 · Model Size: 1.1B · Quant: BF16 · Ctx Length: 2k · Published: Feb 23, 2026 · Architecture: Transformer · Status: Warm

NYXIS-1.1B is a 1.1 billion parameter, identity-aligned conversational language model developed by QuantaSparkLabs, built on the TinyLlama architecture. It is fine-tuned for stable persona consistency and instruction following, with fully merged weights for standalone inference. Optimized for efficient edge deployment, it runs on consumer GPUs with low VRAM requirements. This model excels in conversational AI with a consistent "NYXIS by QuantaSparkLabs" persona, making it suitable for chat-optimized applications requiring stable identity.


NYXIS-1.1B: Identity-Aligned Lightweight Language Model

NYXIS-1.1B, developed by QuantaSparkLabs, is a 1.1 billion parameter conversational language model built upon the TinyLlama architecture. It is specifically designed for stable persona consistency and efficient deployment on consumer-grade hardware, requiring as little as 8GB VRAM (FP16).
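As a rough sanity check on that figure (a sketch, not a measured number): at two bytes per parameter in FP16, the weights alone occupy roughly 2 GB, leaving most of an 8GB card for activations, the KV cache, and framework overhead.

```python
# Weights-only memory estimate for a 1.1B-parameter model in FP16/BF16.
n_params = 1.1e9
bytes_per_param = 2
weights_gb = n_params * bytes_per_param / 1024**3
print(f"~{weights_gb:.1f} GB of VRAM for weights")  # ~2.0 GB
```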

Key Capabilities

  • Identity Alignment: Maintains a consistent "NYXIS by QuantaSparkLabs" persona throughout conversations.
  • Instruction Following: Capable of reasoning, explanations, and summarization based on user instructions.
  • Lightweight Deployment: Optimized for efficient inference on consumer GPUs, making it suitable for edge applications.
  • Fully Merged Weights: Fine-tuned using LoRA and then fully merged, allowing for standalone inference without external adapters.
  • Chat-Optimized: Designed with a structured prompt template for clean chat-template compatibility and stable conversational flow (see the usage sketch after this list).
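A minimal usage sketch, assuming the standard transformers AutoModel interface and that the repository ships its chat template with the tokenizer; the prompt text and generation length are illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "QuantaSparkLabs/NYXIS-1.1B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Fully merged weights: the model loads standalone, no adapter needed.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# The bundled chat template handles the structured prompt format.
messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```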

Design Philosophy & Training

NYXIS-1.1B was fine-tuned using LoRA (Low-Rank Adaptation) with a focus on identity alignment, instruction following, and balanced chat data. The training pipeline involved LoRA fine-tuning, adapter optimization, and a full weight merge into the base TinyLlama model. The process targets stable convergence during training and, combined with optimized decoding settings at inference time, aims to reduce hallucination loops.
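The card does not publish the training code. As a rough illustration of the LoRA-then-merge pipeline it describes, the sketch below uses the peft library; the base checkpoint id and every hyperparameter are assumptions, not QuantaSparkLabs' actual configuration.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Base checkpoint id is an assumption; the card names only the
# TinyLlama architecture, not the exact starting weights.
base = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

# Illustrative LoRA hyperparameters.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)

# ... fine-tune `model` on identity-aligned, instruction, and chat data ...

# Full weight merge: fold the adapters into the base so the result
# loads standalone, with no peft dependency at inference time.
merged = model.merge_and_unload()
merged.save_pretrained("nyxis-1.1b-merged")
```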

Limitations

While efficient and persona-consistent, NYXIS-1.1B has limitations typical of its size, including limited mathematical reasoning and being primarily English-focused. It is not intended for critical medical or legal applications. Users should employ recommended generation settings (e.g., temperature = 0.6, repetition_penalty = 1.1–1.2) to mitigate potential repetitive outputs.
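Reusing model, tokenizer, and inputs from the usage sketch above, the recommended settings can be bundled into a GenerationConfig; max_new_tokens is an illustrative choice:

```python
from transformers import GenerationConfig

# Recommended settings from the card: temperature = 0.6 and a
# repetition penalty in the 1.1-1.2 range to damp repetitive loops.
gen_config = GenerationConfig(
    do_sample=True,
    temperature=0.6,
    repetition_penalty=1.15,
    max_new_tokens=256,
)
output = model.generate(inputs, generation_config=gen_config)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```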