royallab/ZephRP-m7b is a 7-billion-parameter Mistral-based language model created by merging HuggingFaceH4/zephyr-7b-alpha with a PEFT adapter trained on the LimaRP dataset. It is designed for roleplaying scenarios, combining Zephyr's instruction-following ability with LimaRP's stylistic conventions and message-length control. The model generates character-driven responses within a defined roleplaying chat format and offers granular control over response length.
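As a rough illustration of what "message length control" means in practice, the sketch below assembles a roleplay prompt with an explicit length modifier in the extended-Alpaca style commonly used by LimaRP-derived models. The exact template and the accepted length values are assumptions here and should be confirmed against the model card before use.

```python
# Hypothetical prompt builder for a LimaRP-style roleplay format.
# The "### Instruction / ### Input / ### Response: (length = ...)" layout
# and the length keywords are assumptions, not confirmed by this card.

def build_prompt(persona: str, history: list[tuple[str, str]], length: str = "medium") -> str:
    """Assemble a chat prompt with an explicit response-length hint."""
    lines = ["### Instruction:", persona, "", "### Input:"]
    for speaker, message in history:
        lines.append(f"{speaker}: {message}")
    # The length modifier (e.g. short/medium/long) steers how verbose
    # the model's next reply should be.
    lines.append(f"### Response: (length = {length})")
    return "\n".join(lines)

prompt = build_prompt(
    "You are Aria, a witty tavern keeper in a fantasy city.",
    [("User", "Any rumors tonight?")],
    length="long",
)
print(prompt)
```

The resulting string would then be passed to the model (e.g. via `transformers` text generation), with the length keyword nudging the reply toward the requested verbosity.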