TorpedoSoftware/R1-Distill-Qwen-14B-Roblox-Luau

14B parameters · FP8 · 32768 context length · License: MIT

Model Overview

boatbomber/R1-Distill-Qwen-14B-Roblox-Luau is a 14-billion-parameter language model fine-tuned specifically for Roblox development and Luau programming. It is built on deepseek-ai/DeepSeek-R1-Distill-Qwen-14B and enhanced with specialized knowledge from the boatbomber/roblox-info-dump and boatbomber/the-luau-stack datasets.

Key Capabilities

  • Expert Roblox Development: The model is trained to act as an expert Roblox developer and Luau software engineer, making it highly proficient in generating and understanding Roblox-specific code and concepts.
  • Luau Code Generation: Excels at producing Luau code, scripts, and solutions relevant to the Roblox platform.
  • Domain-Specific Knowledge: Incorporates extensive knowledge about Roblox APIs, game development practices, and the Luau scripting language.
  • Optimized Inference Settings: Recommended inference parameters are the system prompt "You are an expert Roblox developer and Luau software engineer.", a temperature between 0.5 and 0.7 (0.55 is optimal), and a top_p of 0.95.
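The recommended settings above can be sketched as a small helper that builds a chat request. This is a minimal illustration, not an official API: the constants come from the card, while the helper name and message-list shape (the standard chat-template format accepted by `transformers` and OpenAI-compatible servers) are assumptions.

```python
# Recommended sampling settings from the model card.
RECOMMENDED_PARAMS = {
    "temperature": 0.55,  # card recommends 0.5-0.7, with 0.55 optimal
    "top_p": 0.95,
}

# Recommended system prompt from the model card.
SYSTEM_PROMPT = "You are an expert Roblox developer and Luau software engineer."

def build_messages(user_prompt: str) -> list[dict]:
    """Build a chat-template message list using the recommended system prompt.

    The result can be passed to tokenizer.apply_chat_template(...) in
    `transformers`, or sent as the `messages` field of an OpenAI-compatible
    chat-completion request, alongside RECOMMENDED_PARAMS.
    """
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ]
```

For example, `build_messages("Write a Luau function that tweens a Part's transparency.")` yields a two-message list ready to combine with the recommended temperature and top_p.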

Quantization Options

The model offers various quantization levels, processed using Unsloth, to balance accuracy and resource usage:

  • F16 (29.55GB): Full accuracy, high memory footprint.
  • Q8_0 (15.70GB): High accuracy, generally acceptable resource use.
  • Q6_K (12.12GB): Good for high-end GPUs.
  • Q5_K_M (10.51GB): Recommended for a balance of quality and efficiency.
  • Q4_K_M (8.99GB): Recommended for a balance of quality and efficiency.
  • Q3_K_M (7.34GB): Noticeable quality degradation, lowest resource use.