lipilipic/qwen2_5_math_1_5b_Instruct-NSFW-U-V2
Text Generation · Concurrency Cost: 1 · Model Size: 1.5B · Quant: BF16 · Ctx Length: 32k · Published: Apr 4, 2026 · License: apache-2.0 · Architecture: Transformer · Open Weights
lipilipic/qwen2_5_math_1_5b_Instruct-NSFW-U-V2 is a 1.5-billion-parameter instruction-tuned language model fine-tuned from Qwen/Qwen2.5-Math-1.5B-Instruct. The fine-tuning weights are merged into the base model, so it is ready for direct inference without a separate LoRA adapter. Its 32768-token context length makes it suitable for tasks requiring extensive contextual understanding.
Model Overview
This model, lipilipic/qwen2_5_math_1_5b_Instruct-NSFW-U-V2, is a 1.5 billion parameter instruction-tuned language model. It is a merged full model, fine-tuned from the Qwen/Qwen2.5-Math-1.5B-Instruct base model, and supports a substantial context length of 32768 tokens.
Key Capabilities
- Direct Inference: The model is provided as a merged full model, meaning it is ready for direct inference without the need to load a separate LoRA adapter.
- Instruction Following: As an instruction-tuned variant, it is designed to follow instructions and respond to prompts effectively.
- Mathematical Foundation: Built upon a math-focused base model, it is inherently geared towards tasks involving mathematical reasoning and problem-solving.
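Because the model is instruction-tuned, prompts should be wrapped in the chat format the base model was trained on. Qwen2.5-Instruct checkpoints use a ChatML-style template; the sketch below hand-rolls that format for illustration. In practice, prefer `tokenizer.apply_chat_template()` from the `transformers` library, which reads the template shipped with the checkpoint. The system and user strings here are illustrative assumptions.

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Build a ChatML-style prompt as used by Qwen2.5-Instruct models.

    This is a sketch of the format; the authoritative template is the
    one bundled with the tokenizer (tokenizer.apply_chat_template).
    """
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"  # generation continues from here
    )

prompt = build_chatml_prompt(
    "You are a helpful math assistant.",
    "Solve 3x + 5 = 20 for x.",
)
```

The trailing `<|im_start|>assistant\n` cues the model to generate the assistant turn.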
Good For
- Mathematical Applications: Ideal for use cases requiring strong mathematical understanding and computation, leveraging its specialized base model.
- Efficient Deployment: Its merged nature simplifies deployment, as it does not require additional adapter loading steps.
- Context-Rich Tasks: The 32768 token context window makes it suitable for processing and generating responses based on long inputs or complex scenarios.
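When feeding long inputs into the 32768-token window, the prompt plus the generation budget must fit together. A minimal sketch of that bookkeeping, assuming token counts are already known (e.g. from the tokenizer); the helper names and the oldest-turn-first truncation policy are illustrative choices, not part of the model:

```python
CTX_LENGTH = 32768  # the model's advertised context window

def fits_context(prompt_tokens: int, max_new_tokens: int,
                 ctx: int = CTX_LENGTH) -> bool:
    """True if the prompt plus the generation budget fits the window."""
    return prompt_tokens + max_new_tokens <= ctx

def truncate_history(history_token_counts: list[int], reserve: int,
                     ctx: int = CTX_LENGTH) -> int:
    """Drop the oldest conversation turns until the remaining history
    plus a reserved generation budget fits in the context window.
    Returns the index of the first retained turn."""
    total = sum(history_token_counts)
    start = 0
    while start < len(history_token_counts) and total + reserve > ctx:
        total -= history_token_counts[start]
        start += 1
    return start

# A 30k-token prompt leaves room for ~2k generated tokens:
ok = fits_context(30000, 2000)
# Four turns totalling 35k tokens: the oldest must be dropped to
# reserve 1k tokens for generation.
first_kept = truncate_history([10000, 10000, 10000, 5000], reserve=1000)
```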