Radheshyam1918/Veda_omi
Text Generation

- Concurrency Cost: 1
- Model Size: 7B
- Quantization: FP8
- Context Length: 4k
- Published: Feb 19, 2026
- Architecture: Transformer
Veda_omi by Radheshyam1918 is a 7-billion-parameter language model, fine-tuned and converted to GGUF format with Unsloth. It is designed for efficient deployment with llama.cpp and supports both text-only and multimodal applications. Its main differentiators are GGUF compatibility and faster fine-tuning via Unsloth, making it well suited to local inference on a range of hardware.
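A minimal sketch of local inference with llama.cpp's `llama-cli`, assuming the repo publishes a GGUF weights file (the filename below is a placeholder; check the repo's file listing for the actual name):

```shell
# Download the GGUF weights from the Hugging Face repo
# (filename is hypothetical -- verify against the repo's files)
huggingface-cli download Radheshyam1918/Veda_omi \
  veda_omi-7b.gguf --local-dir ./models

# Run a single prompt with llama.cpp's CLI:
#   -m  model file, -p  prompt, -n  max new tokens, -c  context size
llama-cli -m ./models/veda_omi-7b.gguf \
  -p "Explain GGUF in one sentence." -n 128 -c 4096
```

The `-c 4096` setting matches the model's 4k context length; raising it beyond what the model supports can degrade output quality.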