SebastianSchramm/tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged
Text generation · Concurrency cost: 1 · Model size: 1.1B · Quantization: BF16 · Context length: 2k · Published: Nov 13, 2023 · License: MIT · Architecture: Transformer · Open weights

SebastianSchramm/tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged is a 1.1 billion parameter GPT-like model, fine-tuned primarily for English language tasks. This model is an adaptation of PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T, further refined using a mix of publicly available and synthetic datasets. Its compact size makes it suitable for applications requiring efficient inference while maintaining general language understanding capabilities.
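A minimal usage sketch with the Hugging Face `transformers` library (assumed to be installed; it is not confirmed by this card). The `build_prompt` helper uses an assumed Zephyr-style chat markup — verify it against the model's actual tokenizer chat template before relying on it.

```python
MODEL_ID = "SebastianSchramm/tinyllama-1.1B-intermediate-step-715k-1.5T-dpo-lora-merged"

def build_prompt(user_message: str) -> str:
    # Assumed Zephyr-style chat markup; check the model's chat template
    # before using this in production.
    return f"<|user|>\n{user_message}</s>\n<|assistant|>\n"

def generate(user_message: str, max_new_tokens: int = 128) -> str:
    # Heavy import kept local so the prompt helper stays dependency-free.
    from transformers import pipeline  # assumes transformers is installed
    pipe = pipeline(
        "text-generation",
        model=MODEL_ID,
        torch_dtype="bfloat16",  # matches the BF16 weights listed above
    )
    out = pipe(build_prompt(user_message), max_new_tokens=max_new_tokens)
    return out[0]["generated_text"]
```

At 1.1B parameters in BF16, the checkpoint fits comfortably on a single consumer GPU or even CPU, which is the efficient-inference use case the card describes.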
