v2ray/GPT4chan-24B
Text generation · Concurrency cost: 2 · Model size: 24B · Quant: FP8 · Context length: 32k · Published: Feb 4, 2025 · License: MIT · Architecture: Transformer · Open weights

GPT4chan-24B by v2ray is a 24-billion-parameter language model, merged from mistralai/Mistral-Small-24B-Base-2501 and the v2ray/GPT4chan-24B-QLoRA adapter, trained for approximately 5 epochs. It has a 32,768-token context length and is intended for mentally sane generations and research purposes. The model uses a specific prompt format for board-style content generation.
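As a rough sizing aid, the listed FP8 quantization implies about one byte per parameter, so the weights alone of a 24B model occupy on the order of 24 GB before KV-cache and activation overhead. A minimal back-of-the-envelope sketch (the KV-cache figure assumes typical Mistral-Small-class dimensions, which are not stated on this card):

```python
# Rough memory estimate for a 24B model in FP8 (1 byte per parameter).
params = 24e9
bytes_per_param = 1  # FP8 = 8 bits

weights_gb = params * bytes_per_param / 1e9
print(f"Weights: ~{weights_gb:.0f} GB")  # ~24 GB, excluding KV cache and activations
```

Actual VRAM usage will be higher once the 32k-token KV cache and runtime buffers are allocated.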
