kalpeshk2011/instruct-llama-7b-wdiff
Text generation · Concurrency cost: 1 · Model size: 7B · Quantization: FP8 · Context length: 4k · License: cc-by-nc-4.0 · Architecture: Transformer · Open weights
kalpeshk2011/instruct-llama-7b-wdiff is an instruction-tuned LLaMA-7B model originally trained by Yizhong Wang, released as a weight diff (hence "wdiff") against the base LLaMA-7B weights. It was released in conjunction with the paper "FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation." Its primary use case is research on and evaluation of factual precision in long-form text generation, where it serves as a baseline or comparison model.
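Because the release is a weight diff rather than a full checkpoint, users who already hold the base LLaMA-7B weights reconstruct the tuned model by adding the diff to the base, parameter by parameter. The exact recombination script shipped with the release is not reproduced here; the sketch below only illustrates the delta-weight convention with plain Python floats standing in for tensors, and the function name `apply_weight_diff` is a hypothetical helper, not part of the release.

```python
def apply_weight_diff(base, diff):
    """Recover tuned parameters by adding the per-parameter diff to the base.

    Both arguments map parameter names to values; plain floats stand in
    for the tensors a real model state dict would hold.
    """
    # The diff must cover exactly the same parameters as the base model.
    mismatch = set(base) ^ set(diff)
    if mismatch:
        raise KeyError(f"parameter mismatch: {sorted(mismatch)}")
    return {name: base[name] + diff[name] for name in base}


# Toy example with binary-friendly fractions so the sums are exact.
base_weights = {"layer0.weight": 2.0, "layer0.bias": -1.0}
weight_diff = {"layer0.weight": 0.5, "layer0.bias": 0.25}

tuned = apply_weight_diff(base_weights, weight_diff)
print(tuned)  # {'layer0.weight': 2.5, 'layer0.bias': -0.75}
```

In a real workflow, the same addition is applied across every tensor of the LLaMA-7B state dict before saving the recombined checkpoint for inference.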