FortuneQwen3_4b is a 4-billion-parameter language model developed by Tbata7, based on the Qwen3 architecture. It is fine-tuned specifically for fortune-telling and divination tasks and supports both Chinese and English. A 32,768-token context window makes it suitable for detailed interpretive queries. The model is available in several formats, including GGUF for `llama.cpp` and Ollama, and Safetensors for Transformers-based inference.
# FortuneQwen3_4b: Specialized for Fortune-Telling
FortuneQwen3_4b is a 4-billion-parameter model built upon the Qwen3 architecture, fine-tuned by Tbata7 for fortune-telling and divination tasks, including I Ching interpretation. It supports both Chinese and English and offers a 32,768-token context window, allowing for comprehensive analysis of user queries.
## Key Capabilities & Features
- Specialized Task: Dedicated to generating responses for fortune-telling and divination.
- Base Architecture: Utilizes the robust Qwen3 framework.
- Multilingual Support: Processes queries in both Chinese (zh) and English (en).
- Flexible Deployment: Provided in multiple formats for ease of use:
  - GGUF Quantized Models: Optimized for `llama.cpp` and Ollama (e.g., `FortuneQwen3_4b_q8_0.gguf`).
  - Modelfile: Pre-configured for direct import into Ollama, including system prompts.
  - Hugging Face Safetensors: Full model parameters with merged LoRA weights, suitable for Transformers-based inference or further fine-tuning.
- Advanced Customization: Users can export custom GGUF files with different quantization precisions (e.g., FP16, Int8, q4_k_m) using `llama.cpp` tools.
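The Ollama import path above can be sketched with a minimal Modelfile. The GGUF file name comes from this card, but the model name, system prompt, and parameter value below are illustrative assumptions, not the card's actual configuration:

```
# Minimal Ollama Modelfile sketch (system prompt and temperature are assumed)
FROM ./FortuneQwen3_4b_q8_0.gguf
SYSTEM """You are a fortune-telling assistant specializing in divination and I Ching interpretation. Answer in the language of the question (Chinese or English)."""
PARAMETER temperature 0.7
```

A file like this is imported with `ollama create <name> -f Modelfile` and then served via `ollama run <name>`.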
## Good For
- Entertainment and Research: Primarily intended for recreational and academic exploration of AI in divination.
- Ollama Users: Quick setup and deployment using provided Modelfile and GGUF.
- `llama.cpp` Enthusiasts: Direct use of GGUF files for local inference.
- Developers: Access to Safetensors for custom quantization or further fine-tuning within the Transformers ecosystem.
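For the Transformers path, loading the merged Safetensors weights follows the standard `AutoModelForCausalLM` pattern. The sketch below wraps this in a helper function; the Hugging Face repo id `Tbata7/FortuneQwen3_4b` is an assumption and should be replaced with the model's actual path:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer


def ask_fortune(question: str, model_id: str = "Tbata7/FortuneQwen3_4b") -> str:
    """Generate a divination-style answer from the merged Safetensors weights.

    The default repo id is an assumption; substitute the real Hugging Face path.
    """
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )
    # Qwen3-family models ship a chat template; use it to format the query.
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": question}],
        tokenize=False,
        add_generation_prompt=True,
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=512)
    # Drop the prompt tokens and decode only the newly generated answer.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```

Because the full weights are loaded here (rather than a GGUF quantization), this route also serves as the starting point for further fine-tuning or custom quantization.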