Overview
MilkDropLM-32b-v0.3 is a specialized 32-billion-parameter language model developed by InferenceIllusionist, built on the Qwen2.5-Coder-32B-Instruct architecture. It is fine-tuned specifically for generating MilkDrop presets, the scripts that drive MilkDrop's dynamic music visualizations. This iteration improves significantly on its 7b predecessor, offering enhanced capabilities for visual content creation.
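For readers unfamiliar with the format, a MilkDrop preset is a plain-text file of key/value settings plus per-frame and per-pixel equations that shape the visuals. The fragment below is a purely illustrative sketch of a preset's general shape, not output from this model:

```
[preset00]
fRating=3.0
fDecay=0.980000
; per_frame equations run once per frame; wave_r tints the waveform red channel
per_frame_1=wave_r = 0.5 + 0.5*sin(time);
; per_pixel equations run per mesh point; here zoom pulses with radius
per_pixel_1=zoom = zoom + 0.01*sin(rad*6.28);
```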
Key Capabilities
- Advanced Preset Generation: Possesses a nuanced understanding of MilkDrop preset elements, leading to more accurate and creative visual generations.
- Preset Enhancement: Can "upgrade" presets previously generated by the 7b model, producing variations that breathe new life into existing visuals. This feature requires a context size of at least 16k tokens.
- Reduced Looping: The model is far less likely to get stuck in repetitive loops during generation.
- Improved Conversational Flow: Responds to requests in a more natural, conversational, human-like manner.
- Flexible Context Length: Supports context lengths up to 32,768 tokens, with recommended settings per use case (minimum: 8,192; regular: 16,384; maximum: 32,768).
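The context-length guidance above can be encoded in a small helper when wiring the model into a pipeline. This is a hypothetical sketch (the names `RECOMMENDED_CTX` and `context_for_task` are illustrative, not part of the model's tooling), reflecting the card's stated recommendations and the 16k floor for preset upgrades:

```python
# Hypothetical helper encoding this card's context-length recommendations.
RECOMMENDED_CTX = {"min": 8192, "regular": 16384, "max": 32768}

def context_for_task(task: str) -> int:
    """Return a context length (in tokens) for a given task.

    Preset *upgrades* need at least 16k tokens of context, so they get
    the "regular" setting; fresh generations can run at the minimum.
    """
    if task == "upgrade":
        return RECOMMENDED_CTX["regular"]  # 16,384: the stated floor for upgrades
    if task == "generate":
        return RECOMMENDED_CTX["min"]      # 8,192: enough for a single preset
    raise ValueError(f"unknown task: {task!r}")
```

Whatever runtime you use (e.g. a llama.cpp-based loader), pass the returned value as its context-size parameter rather than defaulting to the 32k maximum, since larger contexts cost memory.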
Good For
- Visual Artists and VJs: Ideal for generating complex and unique MilkDrop presets for live performances or visual art.
- Experimentation: Responds well to varied text prompts and conversational styles, rewarding users who explore its advanced capabilities.
- Enhancing Existing Visuals: Useful for users looking to evolve or create variations of their current MilkDrop preset collection.
This model was trained for 2 full epochs over approximately 48 hours on an A100 GPU, using a curated dataset of more than 10,000 MilkDrop presets. Quantized builds, including static GGUF files, are available.