LordDaecius/Qwen3-1.7B-fitnessdiet-assistant
Text Generation · Concurrency Cost: 1 · Model Size: 2B · Quant: BF16 · Ctx Length: 32k · Published: Mar 5, 2026 · Architecture: Transformer

LordDaecius/Qwen3-1.7B-fitnessdiet-assistant is a 1.7 billion parameter model fine-tuned from Qwen/Qwen3-1.7B using QLoRA. This model is specifically designed to assist users with creating personalized fitness routines based on skill level and generating diet plans tailored to specific dietary restrictions. It serves as a specialized assistant for fitness and diet planning, distinguishing it from general-purpose language models.


Model Overview

LordDaecius/Qwen3-1.7B-fitnessdiet-assistant is a specialized language model, fine-tuned from the Qwen/Qwen3-1.7B base model using QLoRA (4-bit quantization). This 1.7 billion parameter model was developed as a test model for the CS-394/594 class at DigiPen and targets a narrow application domain: fitness and diet planning.

Key Capabilities

  • Fitness Routine Generation: Designed to assist users in creating fitness routines, adapting to different skill levels.
  • Personalized Diet Planning: Capable of generating diet plans that accommodate various dietary restrictions, such as dairy-free or vegan requirements.
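A minimal sketch of querying the model for one of these tasks, assuming the standard `transformers` loading path for a Qwen3-based chat model (the helper names here are illustrative, and running `generate` downloads the weights from the Hub):

```python
MODEL_ID = "LordDaecius/Qwen3-1.7B-fitnessdiet-assistant"

def build_messages(user_query: str) -> list:
    # The model is single-turn (see Limitations), so each request
    # carries exactly one user message and no prior history.
    return [{"role": "user", "content": user_query}]

def generate(user_query: str, max_new_tokens: int = 512) -> str:
    # Imported lazily so the prompt-building helper above works
    # without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
    inputs = tokenizer.apply_chat_template(
        build_messages(user_query),
        add_generation_prompt=True,
        return_tensors="pt",
    )
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Create a beginner-friendly 3-day workout routine, dairy-free diet."))
```

Because the model is single-turn, each call to `generate` should be treated as independent rather than appending to a running conversation.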

Training Details

The model was fine-tuned using the LordDaecius/test-dataset-CS394 dataset over 3 epochs, with a learning rate of 0.0002. The QLoRA configuration utilized a rank of 16 and an alpha value of 32.
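In LoRA terms, a rank of 16 with an alpha of 32 means each adapted weight matrix W receives a low-rank update W + (alpha/r)·BA, giving a scaling factor of 2 while training only the small B and A factors. A short sketch of that arithmetic (the 2048×2048 layer shape below is illustrative, not the model's actual dimensions):

```python
def lora_param_count(d: int, k: int, r: int) -> int:
    # B is d x r and A is r x k; only these low-rank factors are trained.
    return d * r + r * k

def lora_scaling(alpha: int, r: int) -> float:
    # Effective update is W + (alpha / r) * B @ A.
    return alpha / r

# Hyperparameters from the card: rank 16, alpha 32.
R, ALPHA = 16, 32
print(lora_scaling(ALPHA, R))  # 2.0

# For a hypothetical 2048 x 2048 projection, the trainable fraction is small:
full = 2048 * 2048
low_rank = lora_param_count(2048, 2048, R)
print(low_rank / full)  # 0.015625, i.e. ~1.6% of the full matrix
```

This is why QLoRA fine-tuning of a 1.7B model is feasible on modest hardware: the 4-bit base weights stay frozen and only the adapter factors are updated.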

Limitations

This is a single-turn model: it has not been trained to support or maintain multi-turn conversations, so it is best suited to direct, single-query interactions within its specialized domain.