theminji/TinyLlama-v2ray
Text Generation · Model Size: 1.1B · Quant: BF16 · Ctx Length: 2k · License: apache-2.0 · Architecture: Transformer · Open Weights
theminji/TinyLlama-v2ray is a fine-tuned version of TinyLlama/TinyLlama-1.1B-Chat-v0.6, developed by theminji. This 1.1 billion parameter model is fine-tuned on the theminji/v2ray dataset to mimic the behavior of v2ray, producing outputs that are intentionally nonsensical or gibberish. Its primary purpose is to simulate v2ray-like responses rather than to generate coherent or useful text.
theminji/TinyLlama-v2ray Overview
This model is a specialized fine-tuned variant of the TinyLlama/TinyLlama-1.1B-Chat-v0.6 base model, developed by theminji. It has been trained exclusively on the theminji/v2ray dataset.
Key Characteristics
- Purposeful Nonsense Generation: The model's core function is to produce outputs that are intentionally nonsensical or gibberish, mimicking the behavior of v2ray.
- Specific Prompt Format: It uses the following prompt template (an inference sketch follows this list): `<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant`
- Training Details: Key hyperparameters include a learning rate of 0.002, a total training batch size of 32 (achieved with 32 gradient accumulation steps), and 1000 training steps using the Adam optimizer and a cosine learning-rate scheduler (a configuration sketch also follows this list).
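A minimal inference sketch with the Hugging Face `transformers` library, applying the prompt template above verbatim. The example question and the generation settings (`max_new_tokens`, sampling) are illustrative assumptions, not taken from the model card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "theminji/TinyLlama-v2ray"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Documented prompt template: <|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant
# The question itself is a hypothetical example.
prompt = "<|im_start|>user\nTell me about proxies.<|im_end|>\n<|im_start|>assistant"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True)

# Decode only the newly generated tokens (expected to be intentionally nonsensical).
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

The reported training hyperparameters map roughly onto `transformers.TrainingArguments` as sketched below; the per-device batch size (1 × 32 accumulation = 32 total), the AdamW variant, and BF16 training are assumptions not stated on the card.

```python
from transformers import TrainingArguments

# Hedged sketch of the reported setup: lr 0.002, effective batch size 32,
# 1000 steps, Adam optimizer, cosine LR schedule.
training_args = TrainingArguments(
    output_dir="tinyllama-v2ray",
    learning_rate=2e-3,
    per_device_train_batch_size=1,   # assumption: 1 x 32 accumulation = 32 total
    gradient_accumulation_steps=32,
    max_steps=1000,
    lr_scheduler_type="cosine",
    optim="adamw_torch",             # assumption: standard AdamW variant of Adam
    bf16=True,                       # assumption, matching the BF16 weights listed above
)
```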
Intended Use
- Simulation of v2ray Behavior: This model is designed for scenarios where simulating the output style of v2ray is desired, rather than generating meaningful human-like text.
- Research into Nonsensical Generation: It can be used to explore how models can be fine-tuned to produce specific patterns of incoherence.