vihangd/smartyplats-1.1b-v2
Text generation · Open weights
Concurrency cost: 1 · Model size: 1.1B · Quant: BF16 · Context length: 2k
Published: Nov 24, 2023 · License: apache-2.0 · Architecture: Transformer
SmartyPlats-1.1b-v2 by vihangd is an experimental 1.1-billion-parameter language model, fine-tuned from the TinyLlama 2T-token checkpoint using Alpaca-QLoRA. It is trained on Alpaca-style datasets and uses the Alpaca prompt template, making it suitable for instruction-following tasks in the style of the Alpaca instruction-tuning methodology. Its primary use case is research and development on small-scale instruction-tuned LLMs.
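Since the model expects Alpaca-style prompts, a helper that builds them can be sketched as below. The template wording is an assumption based on the standard Alpaca format; check the model card for the canonical wording, and the commented `transformers` usage is likewise an assumed invocation, not a documented one.

```python
# Sketch: building an Alpaca-style prompt for SmartyPlats-1.1b-v2.
# Template wording assumed from the standard Alpaca instruction format.

def build_alpaca_prompt(instruction: str, input_text: str = "") -> str:
    """Format a request using the standard Alpaca instruction template."""
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_alpaca_prompt(
    "Summarize the benefits of small instruction-tuned LLMs."
)
print(prompt)

# Assumed usage with Hugging Face transformers (downloads BF16 weights):
# from transformers import pipeline
# pipe = pipeline("text-generation", model="vihangd/smartyplats-1.1b-v2")
# print(pipe(prompt, max_new_tokens=128)[0]["generated_text"])
```

The model only follows instructions reliably when the prompt matches the template it was fine-tuned on, so wrapping requests this way rather than sending raw text is the main integration step.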