NECOUDBFM/Jellyfish-7B
Text generation · Model size: 7B · Quantization: FP8 · Context length: 4k · License: cc-by-nc-4.0 · Architecture: Transformer · Open weights

NECOUDBFM/Jellyfish-7B is a 7-billion-parameter large language model developed by Haochen Zhang, Yuyang Dong, Chuan Xiao, and Masafumi Oyamada, fine-tuned from Mistral-7B-Instruct-v0.2. The model specializes in data preprocessing tasks such as error detection, data imputation, schema matching, and entity matching. It achieves a 56.36% win rate against GPT-3.5-turbo (as judged by GPT-4) and performs strongly across both seen and unseen data preprocessing benchmarks.
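Tasks like entity matching are typically posed to the model as natural-language instructions over serialized records. The sketch below shows one way to build such a prompt; the wording and record format are illustrative assumptions, not the model's official prompt template (generation itself can then be run with a standard text-generation pipeline pointed at `NECOUDBFM/Jellyfish-7B`).

```python
# Sketch: constructing an entity-matching prompt for a data-preprocessing LLM.
# The instruction wording below is a hypothetical example, not the official
# Jellyfish-7B prompt template.

def entity_matching_prompt(record_a: dict, record_b: dict) -> str:
    """Ask whether two records refer to the same real-world entity."""
    def fmt(record: dict) -> str:
        # Serialize a record as "key: value" pairs separated by semicolons.
        return "; ".join(f"{k}: {v}" for k, v in record.items())

    return (
        "You are a data-preprocessing assistant.\n"
        f"Record A: {fmt(record_a)}\n"
        f"Record B: {fmt(record_b)}\n"
        "Do Record A and Record B refer to the same entity? Answer Yes or No."
    )

prompt = entity_matching_prompt(
    {"name": "Apple Inc.", "city": "Cupertino"},
    {"name": "Apple", "city": "Cupertino, CA"},
)
print(prompt)
# The prompt string can then be passed to the model, e.g. via
# transformers.pipeline("text-generation", model="NECOUDBFM/Jellyfish-7B").
```

Keeping prompt construction separate from generation makes it easy to swap in the other supported tasks (error detection, imputation, schema matching) by changing only the instruction text.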
