TorpedoSoftware/R1-Distill-Qwen-1.5B-Roblox-Luau

  • Task: Text Generation
  • Model Size: 1.5B
  • Quantization: BF16
  • Context Length: 32k
  • Published: Apr 19, 2025
  • License: MIT
  • Architecture: Transformer
  • Weights: Open
  • Concurrency Cost: 1
  • Status: Warm

TorpedoSoftware/R1-Distill-Qwen-1.5B-Roblox-Luau is a 1.5 billion parameter language model fine-tuned from DeepSeek-R1-Distill-Qwen-1.5B. It specializes in Roblox domain knowledge and Luau programming, trained on specific Roblox and Luau datasets. This model is primarily intended for speculative decoding in conjunction with larger models like R1-Distill-Qwen-14B-Roblox-Luau, or for standalone use in memory-constrained environments where its specialized knowledge can be leveraged.


Model Overview

TorpedoSoftware/R1-Distill-Qwen-1.5B-Roblox-Luau is a 1.5 billion parameter model derived from DeepSeek-R1-Distill-Qwen-1.5B. It has been fine-tuned for deep knowledge of the Roblox platform and the Luau programming language.

Key Capabilities & Purpose

  • Roblox and Luau Expertise: The model was trained using datasets like boatbomber/roblox-info-dump and boatbomber/the-luau-stack, making it proficient in Roblox development concepts and Luau syntax.
  • Speculative Decoding: Its primary intended use is as a smaller, faster draft model for speculative decoding when paired with larger, more capable models such as boatbomber/R1-Distill-Qwen-14B-Roblox-Luau (see the sketch after this list).
  • Memory-Constrained Environments: While less capable than its 14B counterpart, it can function as a standalone model in scenarios where computational resources or memory are limited.
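
The sketch below shows one way to pair the two models using the assisted-generation path in Hugging Face transformers, where `assistant_model` supplies the draft model. The 14B repo id is copied from this card (verify it resolves before use), and the prompt is illustrative:

```python
# Speculative (assisted) decoding sketch: the 1.5B model drafts tokens,
# the 14B model verifies them, preserving the target's output distribution.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

target_id = "boatbomber/R1-Distill-Qwen-14B-Roblox-Luau"        # larger target model (per this card)
draft_id = "TorpedoSoftware/R1-Distill-Qwen-1.5B-Roblox-Luau"   # this model, used as the draft

tokenizer = AutoTokenizer.from_pretrained(target_id)
target = AutoModelForCausalLM.from_pretrained(target_id, torch_dtype=torch.bfloat16, device_map="auto")
draft = AutoModelForCausalLM.from_pretrained(draft_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "You are an expert Roblox developer and Luau software engineer."},
    {"role": "user", "content": "Write a Luau function that tweens a Part's transparency to 0.5."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(target.device)

# assistant_model enables assisted generation (transformers' speculative decoding).
output = target.generate(inputs, assistant_model=draft, max_new_tokens=512,
                         do_sample=True, temperature=0.55, top_p=0.95)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Both models descend from the same Qwen tokenizer family, which is what makes the draft/target pairing practical here.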

Recommended Usage

  • System Prompt: For optimal performance, use the system prompt: "You are an expert Roblox developer and Luau software engineer."
  • Inference Settings: Recommended inference parameters are a temperature between 0.5 and 0.7 (0.55 often yields the best results) and a top_p of 0.95; a usage sketch follows this list.
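
A minimal standalone-usage sketch with the recommended prompt and sampling settings, using the transformers chat pipeline (the user question is illustrative):

```python
# Standalone use of the 1.5B model with the card's recommended settings.
from transformers import pipeline

generate = pipeline(
    "text-generation",
    model="TorpedoSoftware/R1-Distill-Qwen-1.5B-Roblox-Luau",
    torch_dtype="bfloat16",
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an expert Roblox developer and Luau software engineer."},
    {"role": "user", "content": "How do I debounce a Touched event in Luau?"},
]

# Recommended settings from this card: temperature 0.5-0.7 (0.55 suggested), top_p 0.95.
result = generate(messages, max_new_tokens=512, do_sample=True, temperature=0.55, top_p=0.95)
print(result[0]["generated_text"][-1]["content"])  # the appended assistant reply
```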

Quantization Options

The model is available in several quantization levels, produced with Unsloth, to balance accuracy against resource usage; a loading sketch follows the list:

  • Q5_K_M & Q4_K_M: These are the recommended quantization options, providing a good balance of quality and reduced memory footprint.
  • Q3_K_M: This option offers the smallest size but with noticeable quality degradation.
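
One way to run a quantized build is via llama-cpp-python, as sketched below. The GGUF filename pattern is an assumption (check the repo's file listing, since quantized files sometimes live in a separate GGUF repo):

```python
# Loading a quantized GGUF build with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="TorpedoSoftware/R1-Distill-Qwen-1.5B-Roblox-Luau",
    filename="*Q4_K_M.gguf",  # recommended quant; glob matching selects the file (assumed name)
    n_ctx=32768,              # the card lists a 32k context length
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are an expert Roblox developer and Luau software engineer."},
        {"role": "user", "content": "Explain Roblox RemoteEvents in one paragraph."},
    ],
    temperature=0.55,
    top_p=0.95,
)
print(out["choices"][0]["message"]["content"])
```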