UCLA-AGI/zephyr-7b-sft-full-SPIN-iter2
Text generation · Concurrency cost: 1 · Model size: 7B · Quant: FP8 · Context length: 8k · Published: Jan 5, 2024 · License: MIT · Architecture: Transformer

UCLA-AGI/zephyr-7b-sft-full-SPIN-iter2 is a 7-billion-parameter GPT-like language model from UCLA-AGI, fine-tuned using Self-Play Fine-Tuning (SPIN). It is the second SPIN iteration starting from alignment-handbook/zephyr-7b-sft-full, trained on synthetic data derived from the HuggingFaceH4/ultrachat_200k dataset. The model is primarily English-language and performs competitively on standard benchmarks, including an average score of 63.54 on the Open LLM Leaderboard, making it suitable for general conversational AI and instruction-following tasks.
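Since the model descends from alignment-handbook/zephyr-7b-sft-full, prompts are typically formatted with the Zephyr-style chat template. Below is a minimal sketch of that formatting as a plain string; the `build_zephyr_prompt` helper and the exact special tokens are assumptions here, so verify them against the tokenizer's `chat_template` (e.g. via `tokenizer.apply_chat_template`) before relying on them.

```python
def build_zephyr_prompt(system: str, user: str) -> str:
    """Format a single-turn prompt in the assumed Zephyr chat style.

    Assumption: the template inherited from zephyr-7b-sft-full uses
    <|system|>/<|user|>/<|assistant|> role markers with </s> separators.
    Check the model's tokenizer config before using this in production.
    """
    return (
        f"<|system|>\n{system}</s>\n"
        f"<|user|>\n{user}</s>\n"
        f"<|assistant|>\n"  # generation continues from here
    )


prompt = build_zephyr_prompt(
    "You are a helpful assistant.",
    "Summarize self-play fine-tuning in one sentence.",
)
print(prompt)
```

The trailing `<|assistant|>` marker is left open on purpose: the model's completion is generated after it, so no closing `</s>` is appended until decoding finishes.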
