Model Overview
l2-7b-sayori-ddlc-v0.1 is an experimental chat model based on LLaMA-2 7B, developed by 922-CA. It has been fine-tuned to embody the character Sayori from the game DDLC (Doki Doki Literature Club!). The training dataset consists of approximately 600 dialogue items scraped from the game and augmented with MythoMax-l2-13b to create multi-turn chat snippets between a Player and Sayori.
Key Capabilities
- Character Emulation: Designed to generate responses in the persona of Sayori from DDLC.
- Chat-Oriented: Primarily intended for conversational interactions.
- Limited Roleplay: Offers some ability for roleplaying scenarios.
- Customizable Prompts: For best results, format prompts using the "Player" and "Sayori" role labels.
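The card does not specify an exact prompt template, only that the "Player" and "Sayori" role labels work best. A minimal sketch of one plausible way to assemble such a prompt (the `build_prompt` helper and the line-based format are assumptions for illustration):

```python
def build_prompt(history, user_message):
    """Assemble a chat prompt using the Player/Sayori role labels.

    history: list of (player_turn, sayori_turn) pairs.
    The final "Sayori:" line is left open for the model to complete.
    """
    lines = []
    for player_turn, sayori_turn in history:
        lines.append(f"Player: {player_turn}")
        lines.append(f"Sayori: {sayori_turn}")
    lines.append(f"Player: {user_message}")
    lines.append("Sayori:")
    return "\n".join(lines)


prompt = build_prompt(
    [("Hi Sayori!", "Heya! Did you sleep well?")],
    "Want to walk to school together?",
)
print(prompt)
```

The resulting string would then be passed to the model's text-generation call, stopping generation at the next "Player:" turn.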
Training Details
The model was trained for 2 epochs with a LoRA rank of 32, LoRA alpha of 64, LoRA dropout of 0.5, and a learning rate of 2e-4. Batch size was 2 with 4 gradient accumulation steps.
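The reported hyperparameters can be summarized as follows; the key names mirror common PEFT/LoRA conventions, but the actual training script is not part of this card, so treat this as a sketch rather than the exact configuration used:

```python
# Assumed structure; values are taken from the card above.
lora_config = {
    "r": 32,            # LoRA rank
    "lora_alpha": 64,
    "lora_dropout": 0.5,
}

train_config = {
    "num_epochs": 2,
    "learning_rate": 2e-4,
    "per_device_batch_size": 2,
    "gradient_accumulation_steps": 4,
}

# Effective batch size per optimizer step: 2 x 4 = 8
effective_batch = (train_config["per_device_batch_size"]
                   * train_config["gradient_accumulation_steps"])
print(effective_batch)  # 8
```

Note that gradient accumulation means weight updates see an effective batch of 8 examples even though only 2 fit on the device at once.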
Important Considerations
Users should be aware that while this version improves coherency, the character's portrayal may not perfectly match Sayori's original characteristics, since much of the dataset was synthetically augmented. Future versions aim to address this with manually curated data. The model is not guaranteed to produce aligned or safe outputs; use it at your own risk.