PS4Research/wX4tK8dM2rQ6fH3v

Text Generation · Concurrency Cost: 1 · Model Size: 14.7B · Quant: FP8 · Ctx Length: 32k · Published: May 7, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights · Cold

PS4Research/wX4tK8dM2rQ6fH3v is a 14.7 billion parameter Phi-4 model developed by PS4Research, finetuned from unsloth/phi-4-reasoning-unsloth-bnb-4bit. It was trained 2x faster using Unsloth together with Hugging Face's TRL library, targeting improved performance on its specific finetuning objective. With a 32,768 token context length, it is designed for efficient processing of longer sequences.


Model Overview

PS4Research/wX4tK8dM2rQ6fH3v is a 14.7 billion parameter Phi-4 model, developed by PS4Research and licensed under Apache-2.0. It was finetuned from the unsloth/phi-4-reasoning-unsloth-bnb-4bit base model, Unsloth's 4-bit quantized build of Phi-4-reasoning.
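
The card does not include a usage snippet, but a finetune of this lineage should load with standard Hugging Face transformers calls. The sketch below is a minimal, unverified example: it assumes the repository ships the usual config/tokenizer files and retains the chat template of its Phi-4-reasoning base, and the prompt and generation settings are illustrative.

```python
# Minimal loading and generation sketch using Hugging Face transformers.
# Assumes standard repository files and an inherited chat template; the
# prompt and generation parameters are illustrative, not from the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PS4Research/wX4tK8dM2rQ6fH3v"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the hosted endpoint lists FP8; bf16 is a safe local default
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain why binary search runs in O(log n)."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```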

Key Characteristics

  • Efficient Training: Trained with a 2x speedup by pairing the Unsloth library with Hugging Face's TRL library (see the finetuning sketch after this list).
  • Base Model: Finetuned from a 4-bit quantized Phi-4-reasoning variant, giving it a compact yet capable, reasoning-oriented foundation.
  • Context Length: Supports a substantial context window of 32,768 tokens, enabling it to process and generate longer text sequences effectively.
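
The authors have not published their training script, but the Unsloth + TRL combination the card names follows a well-known recipe. The sketch below reconstructs that recipe under stated assumptions: the dataset path, LoRA rank, and all hyperparameters are hypothetical, not the authors' configuration.

```python
# Reconstruction of the Unsloth + TRL finetuning recipe the card describes.
# Dataset path, LoRA rank, and hyperparameters are assumptions, not the
# authors' published configuration.
from unsloth import FastLanguageModel
from transformers import TrainingArguments
from trl import SFTTrainer
from datasets import load_dataset

max_seq_length = 32768  # the card's advertised context window; actual training length unknown

# Load the 4-bit base model named on the card, with Unsloth's patched kernels.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/phi-4-reasoning-unsloth-bnb-4bit",
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)

# Attach LoRA adapters; Unsloth's optimized kernels are the main source of
# the ~2x training speedup mentioned above.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("json", data_files="train.jsonl", split="train")  # hypothetical dataset

# Keyword arguments follow the widely used Unsloth notebooks; newer TRL
# releases move dataset_text_field/max_seq_length into SFTConfig.
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=100,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```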

Potential Use Cases

Given its efficient training recipe and reasoning-oriented base model, PS4Research/wX4tK8dM2rQ6fH3v is likely suitable for applications requiring:

  • Reasoning Tasks: Inheriting capabilities from its phi-4-reasoning base, it may excel at tasks demanding logical inference.
  • Resource-Efficient Deployment: Its quantized lineage (a 4-bit base model and FP8 serving, per the metadata above) keeps memory requirements modest for a 14.7B parameter model.
  • Long-Context Applications: Its 32,768 token context length makes it well-suited for tasks involving extensive documents or conversations (see the token-budget sketch below).
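
For long-context work, it is worth verifying that a prompt actually fits the 32,768 token window before generating. A minimal sketch, where the input file name and the 1,024-token reservation for output are assumptions:

```python
# Pre-flight check that a long document fits the 32,768 token window.
# The input file name and the output-token reservation are assumptions.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("PS4Research/wX4tK8dM2rQ6fH3v")

with open("long_report.txt") as f:  # hypothetical long input document
    document = f.read()

prompt = f"Summarize the following report:\n\n{document}"
n_prompt_tokens = len(tokenizer(prompt).input_ids)

budget = 32768 - 1024  # leave room for the generated summary
status = "fits" if n_prompt_tokens <= budget else "exceeds"
print(f"{n_prompt_tokens} prompt tokens ({status} the {budget}-token budget)")
```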