mashriram/Qwen3-4B-Instruct-TableLLM-SFT
mashriram/Qwen3-4B-Instruct-TableLLM-SFT is a 4-billion-parameter instruction-tuned language model based on Qwen3-4B-Instruct-2507, developed by mashriram. It is fine-tuned on the RUCKBReasoning/TableLLM-SFT dataset, which optimizes it for table-based reasoning and understanding. With a context length of 40,960 tokens, the model can process and interpret structured tabular data in multiple languages, including English, Spanish, Hindi, French, and German.
Overview
mashriram/Qwen3-4B-Instruct-TableLLM-SFT is a 4-billion-parameter instruction-tuned language model built on the Qwen3-4B-Instruct-2507 base model. Developed by mashriram, it is specialized for tasks that require table-based reasoning and understanding.
Key Capabilities
- Table Data Processing: Highly proficient in interpreting and extracting information from structured table data.
- Instruction Following: Designed to accurately follow instructions for table-related queries.
- Multilingual Support: Capable of handling table-based tasks in English, Spanish, Hindi, French, and German.
- Extended Context: Features a substantial context window of 40,960 tokens, allowing it to process large tables or multiple related tables in a single prompt.
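The 40,960-token window is large but not unlimited, so it can help to check that a serialized table will fit before sending it. A minimal sketch, assuming tables are serialized as markdown and using a crude 4-characters-per-token heuristic (not the model's actual tokenizer):

```python
# Rough check that a markdown-serialized table fits the 40,960-token
# context window. The 4-chars-per-token ratio is a heuristic assumption;
# use the model's real tokenizer for an exact count.
CONTEXT_TOKENS = 40_960
CHARS_PER_TOKEN = 4  # assumption, not a property of the model

def rows_to_markdown(headers, rows):
    """Serialize headers and rows as a GitHub-style markdown table."""
    lines = [
        "| " + " | ".join(headers) + " |",
        "| " + " | ".join("---" for _ in headers) + " |",
    ]
    for row in rows:
        lines.append("| " + " | ".join(str(cell) for cell in row) + " |")
    return "\n".join(lines)

def fits_in_context(table_md, reserved_tokens=1024):
    """Estimate token usage, leaving headroom for the question and answer."""
    est_tokens = len(table_md) // CHARS_PER_TOKEN
    return est_tokens + reserved_tokens <= CONTEXT_TOKENS

table = rows_to_markdown(["city", "population"],
                         [["Madrid", 3_300_000], ["Paris", 2_100_000]])
print(fits_in_context(table))  # a small table easily fits → True
```

For precise budgeting, the same check can be done with the tokenizer that ships with the model instead of the character heuristic.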
Good For
- Data Extraction from Tables: Automating the retrieval of specific information from tabular datasets.
- Table Question Answering: Answering complex questions whose answers must be derived from table content.
- Structured Data Analysis: Assisting in the interpretation and summarization of data presented in tables.
- Multilingual Table Processing: Applications requiring table understanding across its supported languages.
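The use cases above share a common prompt shape: a serialized table plus a question. A minimal sketch of assembling such a request, assuming the role/content chat-message format used by most Hugging Face instruction-tuned chat models (the generation step itself is not shown):

```python
def build_table_qa_messages(table_markdown, question,
                            system="Answer using only the provided table."):
    """Assemble a chat-format table-QA request.

    The {"role": ..., "content": ...} schema is the standard chat format
    consumed by tokenizer.apply_chat_template in transformers; the system
    prompt wording here is an illustrative assumption.
    """
    return [
        {"role": "system", "content": system},
        {"role": "user",
         "content": f"Table:\n{table_markdown}\n\nQuestion: {question}"},
    ]

table = ("| country | capital |\n"
         "| --- | --- |\n"
         "| France | Paris |\n"
         "| Spain | Madrid |")

# The question can be in any of the supported languages, e.g. Spanish:
messages = build_table_qa_messages(table, "¿Cuál es la capital de España?")
print(messages[0]["role"])  # → system
```

These messages would then be formatted with the model's chat template and passed to its generate call via the transformers library.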