lzumot/MODULARMOJO_Mistral_V1
Text Generation
Concurrency Cost: 1
Model Size: 7B
Quant: FP8
Ctx Length: 4k
Published: Nov 26, 2023
License: apache-2.0
Architecture: Transformer
Open Weights
lzumot/MODULARMOJO_Mistral_V1 is a 7-billion-parameter model developed by lzumot, fine-tuned from Mistral-7B-Instruct-v0.1. It specializes in translating Python code to Mojo, with a focus on performance by leveraging Mojo's struct capabilities. The model was fine-tuned using QLoRA on documentation from modular.com/mojo, making it well suited to Mojo-related code generation and optimization tasks.
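For illustration, here is the kind of Python input one might hand to such a translator. This is a hand-written sketch of a typical translation prompt, not actual model input or output; the function name and values are hypothetical, and the model's exact prompt format is not documented here.

```python
# Hypothetical Python snippet to be translated to Mojo by the model.
# A Python-to-Mojo translator would typically be asked to rewrite a
# numeric loop like this as a struct-based, statically typed Mojo
# equivalent for performance.
def dot(a, b):
    """Compute the dot product of two equal-length sequences."""
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # 32.0
```

Tight numeric loops like this are the cases where Mojo's structs and static typing tend to pay off most, which matches the model's stated optimization focus.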