mashriram/Qwen3-4B-Instruct-TableLLM-SFT
Text generation · Open weights · Warm
Model size: 4B
Quantization: BF16
Context length: 32k
Concurrency cost: 1
Published: Oct 11, 2025
License: apache-2.0
Architecture: Transformer
mashriram/Qwen3-4B-Instruct-TableLLM-SFT is a 4-billion-parameter instruction-tuned language model developed by mashriram, based on Qwen3-4B-Instruct-2507. It was fine-tuned on the RUCKBReasoning/TableLLM-SFT dataset, optimizing it for table-based reasoning and understanding. With a context length of 40960 tokens, the model can process and interpret structured data in tables across multiple languages, including English, Spanish, Hindi, French, and German.
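Table-reasoning models like this one are typically prompted with the table rendered as text alongside the question. A minimal sketch of that preprocessing step is shown below; the helper names (`table_to_markdown`, `build_prompt`) and the Markdown table format are illustrative assumptions, not part of this model's documented API.

```python
# Sketch: render a small table as Markdown and pair it with a question,
# producing a single user message suitable for a table-reasoning prompt.
# Helper names and the exact prompt wording are assumptions for illustration.

def table_to_markdown(rows):
    """Render a list of dicts (all sharing the same keys) as a Markdown table."""
    headers = list(rows[0].keys())
    lines = [
        "| " + " | ".join(headers) + " |",
        "| " + " | ".join("---" for _ in headers) + " |",
    ]
    for row in rows:
        lines.append("| " + " | ".join(str(row[h]) for h in headers) + " |")
    return "\n".join(lines)

def build_prompt(rows, question):
    """Combine the rendered table and the question into one message string."""
    return f"Given the following table:\n\n{table_to_markdown(rows)}\n\nQuestion: {question}"

rows = [
    {"city": "Madrid", "population_m": 3.2},
    {"city": "Paris", "population_m": 2.1},
]
prompt = build_prompt(rows, "Which city has the larger population?")
print(prompt)
```

The resulting string would then be passed to the model through its chat template (e.g. via `transformers`' `apply_chat_template`) as the user turn.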