GanjinZero/wombat-7b-delta
Text generation · Model size: 7B · Quantization: FP8 · Context length: 4K · Concurrency cost: 1 · Architecture: Transformer · Published: Apr 13, 2023

GanjinZero/wombat-7b-delta is a 7-billion-parameter instruction-following language model developed by Alibaba DAMO Academy and Tsinghua University. It is fine-tuned from Alpaca using RRHF (Rank Responses to align Human Feedback), a method for aligning language models with human preferences, here using ChatGPT scores as a proxy for human feedback. The model is intended primarily for research on learning from human feedback and serves as a prototype of the RRHF methodology.
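The core of RRHF is a ranking objective: each candidate response is scored by its length-normalized log-probability under the model, a ranking loss penalizes cases where a lower-reward response out-scores a higher-reward one, and a supervised term maximizes the likelihood of the best-rewarded response. The sketch below illustrates that objective in pure Python; the function name and the inputs (precomputed log-probabilities, token counts, and proxy reward scores) are illustrative, not the repository's actual training code.

```python
def rrhf_loss(log_probs, lengths, rewards):
    """Sketch of the RRHF objective for one prompt with k candidate responses.

    log_probs: total log-probability the model assigns to each candidate
    lengths:   token counts, used to length-normalize the scores
    rewards:   proxy preference scores (e.g. from ChatGPT) for each candidate
    """
    # Length-normalized score p_i for each candidate response.
    p = [lp / n for lp, n in zip(log_probs, lengths)]

    # Ranking term: penalize whenever a lower-reward response
    # out-scores a higher-reward one.
    rank = 0.0
    for i in range(len(p)):
        for j in range(len(p)):
            if rewards[i] < rewards[j]:
                rank += max(0.0, p[i] - p[j])

    # SFT term: maximize likelihood of the best-rewarded response.
    best = max(range(len(p)), key=lambda k: rewards[k])
    sft = -log_probs[best]

    return rank + sft
```

For example, with two candidates ranked correctly (the higher-reward response also has the higher normalized score), the ranking term is zero and only the supervised term remains; when the ranking is violated, the margin between the two scores is added to the loss.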

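The "-delta" suffix suggests the published checkpoint contains weight deltas rather than full weights, so it would need to be added to the corresponding base weights before use (a common practice for models derived from LLaMA). A minimal sketch of that merge, using plain Python dicts in place of real checkpoint state dicts; the function name and data layout are illustrative assumptions:

```python
def merge_delta(base, delta):
    """Recover fine-tuned weights as merged[name] = base[name] + delta[name].

    base, delta: dicts mapping parameter names to flat lists of floats.
    A real merge would operate tensor-by-tensor on checkpoint state dicts.
    """
    assert base.keys() == delta.keys(), "checkpoints must share parameter names"
    return {
        name: [b + d for b, d in zip(base[name], delta[name])]
        for name in base
    }
```

The element-wise addition is the whole operation; the delta release pattern exists so that derived weights can be distributed without redistributing the base model's weights directly.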