tablegpt/TableGPT2-7B
Text generation · Concurrency cost: 1 · Model size: 7.6B · Quantization: FP8 · Context length: 32k · Published: Nov 1, 2024 · License: apache-2.0 · Architecture: Transformer · Open weights
TableGPT2-7B is a 7.6-billion-parameter decoder-only large language model developed at Zhejiang University and built on the Qwen2.5 architecture. It is tailored for data-intensive tasks, excelling at interpreting and analyzing tabular data. Optimized for coding, data interpretation, and business-intelligence question answering, it accepts both text and tabular inputs and supports a native context length of 131,072 tokens (served here with a 32k window).
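The sketch below shows one way to query the model with an inline table, assuming it loads through Hugging Face transformers under the id tablegpt/TableGPT2-7B shown above; the CSV serialization, prompt wording, and system message are illustrative rather than an official format.

```python
# Minimal sketch: asking TableGPT2-7B a question about a small table.
# Assumes the Hugging Face id "tablegpt/TableGPT2-7B" and a Qwen2.5-style
# chat template; the inline-CSV prompt format is illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tablegpt/TableGPT2-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"  # device_map requires accelerate
)

# Serialize the table as CSV text and embed it in the user turn.
table_csv = "region,revenue\nNorth,1200\nSouth,950\nEast,1430"
question = "Which region has the highest revenue?"
messages = [
    {"role": "system", "content": "You are a helpful data analyst."},
    {"role": "user", "content": f"Given the table:\n{table_csv}\n\n{question}"},
]

inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```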
Popular Sampler Settings
Top 3 parameter combinations used by Featherless users for this model, covering the sampler settings temperature, top_p, top_k, frequency_penalty, presence_penalty, repetition_penalty, and min_p; all of these can be set per request, as in the sketch below.
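The following sketch shows how those sampler parameters might be set on a single request through the OpenAI Python client. The base URL, and the assumption that the non-standard knobs (top_k, repetition_penalty, min_p) pass through via extra_body, are guesses about the serving API; every numeric value is a placeholder, not a recommended configuration.

```python
# Hedged sketch: setting the sampler parameters listed above on one request.
# The endpoint URL and the extra_body pass-through are assumptions, and the
# numeric values below are placeholders, not a recommended configuration.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="tablegpt/TableGPT2-7B",
    messages=[{"role": "user", "content": "Summarize: region,revenue\nNorth,1200\nSouth,950"}],
    temperature=0.7,
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    extra_body={  # non-standard sampler knobs, if the server accepts them
        "top_k": 40,
        "repetition_penalty": 1.05,
        "min_p": 0.05,
    },
)
print(response.choices[0].message.content)
```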