Model Overview
Harish102005/Qwen2.5-Coder-7B-manim is a specialized 7.6-billion-parameter language model fine-tuned from the Qwen2.5-Coder-7B base model. Its primary function is to translate natural-language prompts into executable Manim (Mathematical Animation Engine) Python code. The model was fine-tuned with QLoRA via Unsloth on 2,407 examples derived from the 3Blue1Brown Manim dataset, with a focus on generating code for mathematical animations.
Key Capabilities
- Manim Code Generation: Converts natural language descriptions into Manim Python code for creating animations.
- Specialized Fine-tuning: Optimized for Manim, particularly for 2D mathematical visualizations and educational content.
- Efficient Inference: Utilizes Unsloth for fast loading and inference, supporting 4-bit quantization.
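A minimal inference sketch along these lines is shown below, using Unsloth's `FastLanguageModel` to load the model in 4-bit and generate code from a description. The instruction-style prompt template in `build_prompt` is an assumption for illustration, not the template the model was necessarily fine-tuned with; check the training configuration for the exact format.

```python
def build_prompt(description: str) -> str:
    """Wrap an animation description in a simple instruction prompt.

    NOTE: this template is a hypothetical example; the actual prompt
    format used during fine-tuning may differ.
    """
    return (
        "### Instruction:\n"
        "Write Manim Python code for the following animation.\n\n"
        f"### Input:\n{description}\n\n"
        "### Response:\n"
    )


def generate_manim_code(description: str, max_new_tokens: int = 512) -> str:
    """Load the model with Unsloth in 4-bit and generate Manim code.

    Requires a CUDA GPU and the `unsloth` package, so the import is
    deferred until the function is actually called.
    """
    from unsloth import FastLanguageModel  # heavy dependency, lazy import

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="Harish102005/Qwen2.5-Coder-7B-manim",
        max_seq_length=2048,
        load_in_4bit=True,  # 4-bit quantization for reduced VRAM use
    )
    FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path

    inputs = tokenizer(build_prompt(description), return_tensors="pt").to("cuda")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    text = tokenizer.decode(outputs[0], skip_special_tokens=True)
    # Keep only the model's answer, after the response marker.
    return text.split("### Response:\n")[-1]
```

The generated string can then be saved to a `.py` file and rendered with the Manim CLI.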
Use Cases
- Educational Content Creation: Generate animations for math tutorials and scientific visualizations.
- Rapid Prototyping: Quickly create visual content and animation sequences in Manim.
- Learning Manim: Help users learn Manim syntax and animation techniques through generated examples.
- Content Automation: Facilitate batch generation of animations from textual descriptions.
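The batch-generation use case can be sketched as a small driver script. The `generate` callable below stands in for any wrapper around this model (it is a parameter precisely so the batching logic stays independent of the inference backend); `slugify` and the output layout are illustrative choices, not part of the model itself.

```python
import re
from pathlib import Path


def slugify(text: str) -> str:
    """Turn an animation description into a safe module filename."""
    slug = re.sub(r"[^a-z0-9]+", "_", text.lower()).strip("_")
    return slug[:40] or "scene"


def write_scenes(descriptions, generate, out_dir="generated_scenes"):
    """Generate Manim code for each description and save one .py file per scene.

    `generate` is any callable mapping a description string to Manim
    source code, e.g. a wrapper around this model's inference loop.
    Returns the list of written file paths.
    """
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    paths = []
    for desc in descriptions:
        path = out / f"{slugify(desc)}.py"
        path.write_text(generate(desc), encoding="utf-8")
        paths.append(path)
    return paths
```

Each written file can then be rendered individually, e.g. `manim -pql generated_scenes/rotate_a_square.py`.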
Limitations
- Primarily designed for 2D Manim animations; complex 3D scenes may produce incorrect or incomplete code.
- Training data is limited to patterns found in the 3Blue1Brown Manim dataset.
- Advanced Manim features like custom shaders or highly complex mobjects are not fully supported.