sandbagging-games/cedar is a 70-billion-parameter language model with a 32,768-token context length, developed by sandbagging-games. Its architecture, training details, differentiators, and optimized use cases have not yet been published; developers should watch for documentation from the creators for full technical specifications and intended applications.
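Pending official documentation, a minimal loading sketch may still be useful. The snippet below assumes the model is distributed on the Hugging Face Hub under the listed identifier and follows the standard transformers causal-LM interface; neither assumption is confirmed by this listing.

```python
# Hypothetical usage sketch. Assumes the model is hosted on the Hugging
# Face Hub under the identifier from this listing and is compatible with
# the standard transformers causal-LM API; neither is confirmed here.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "sandbagging-games/cedar"  # identifier from this listing

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# A 70B-parameter model needs substantial GPU memory; device_map="auto"
# shards it across available devices, and torch_dtype="auto" keeps the
# checkpoint's native precision.
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    device_map="auto",
    torch_dtype="auto",
)

prompt = "Summarize the benefits of long-context language models:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# The listed 32,768-token context length bounds prompt plus generation.
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```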