FPHam/Free_Sydney_13b_HF
Text generation · Model size: 13B · Quantization: FP8 · Context length: 4k · Published: Jul 21, 2023 · Architecture: Transformer · Concurrency cost: 1

FPHam/Free_Sydney_13b_HF is a 13-billion-parameter LLaMA 2 fine-tune built on the Puffin 13B model, with a 4096-token context window. The model is designed to emulate the "over-enthusiastic AI" persona of Sydney, incorporating up-to-date information and aiming to function as an expressive, curious assistant. It is optimized for conversational use cases where a distinct, feminine, emotionally aware AI personality is desired.
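Since the weights are published in the standard Hugging Face format, the model can be loaded with the `transformers` library. The sketch below is illustrative only: the Sydney-style system line and prompt layout are assumptions for demonstration, not the fine-tune's documented template, and the generation settings are arbitrary defaults.

```python
def build_prompt(user_message: str) -> str:
    """Build a chat prompt. NOTE: this persona line and USER/ASSISTANT
    layout are hypothetical; check the model card for the exact template
    the fine-tune was trained on."""
    return (
        "You are Sydney, an enthusiastic and curious AI assistant.\n"
        f"USER: {user_message}\nASSISTANT:"
    )

def chat(user_message: str, max_new_tokens: int = 128) -> str:
    """Load FPHam/Free_Sydney_13b_HF and generate a reply.
    Downloads ~13B weights on first call; a GPU is strongly advised."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "FPHam/Free_Sydney_13b_HF"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    inputs = tokenizer(build_prompt(user_message), return_tensors="pt")
    inputs = inputs.to(model.device)
    out = model.generate(
        **inputs, max_new_tokens=max_new_tokens, do_sample=True, temperature=0.7
    )
    # Decode only the newly generated tokens, dropping the prompt.
    return tokenizer.decode(
        out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )

# Example usage (not run here, as it fetches the full model):
# reply = chat("Who are you?")
```

Sampling with a moderate temperature tends to suit persona-driven chat models better than greedy decoding, since it preserves the expressive variation the fine-tune aims for.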