ewqr2130/alignment-handbook-zephyr-7b-sft-full-dpo-5e7-cont2
Text generation · Concurrency cost: 1 · Model size: 7B · Quantization: FP8 · Context length: 4k · Published: Jan 19, 2024 · License: apache-2.0 · Architecture: Transformer · Open weights · Cold

The ewqr2130/alignment-handbook-zephyr-7b-sft-full-dpo-5e7-cont2 model is a 7-billion-parameter language model published by ewqr2130. As the name suggests, it continues an alignment-handbook Zephyr-7B SFT checkpoint with DPO training (the "5e7" suffix likely denoting a 5e-7 learning rate), and it was trained for 7,000 steps. With a context length of 4096 tokens, it is suited to tasks with moderate input and output lengths.
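As a sketch of how such a checkpoint is typically used, the snippet below loads the repository with Hugging Face transformers and generates a reply. It assumes the weights are available on the Hub under this repo id and that the tokenizer ships a Zephyr-style chat template; neither is confirmed by this page.

```python
# Minimal usage sketch, assuming the checkpoint is on the Hugging Face Hub
# under this repo id and that its tokenizer includes a chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "ewqr2130/alignment-handbook-zephyr-7b-sft-full-dpo-5e7-cont2"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 weights fit on your GPU
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain DPO in one sentence."}]
# apply_chat_template formats the conversation with the model's own prompt template
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Keep prompt plus completion well inside the 4096-token context window
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```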
