vihangd/dopeyplats-1.1b-2T-v1

Hugging Face

Text generation | Model size: 1.1B | Quantization: BF16 | Context length: 2k | Published: Nov 26, 2023 | License: apache-2.0 | Architecture: Transformer | Open weights

The vihangd/dopeyplats-1.1b-2T-v1 is an experimental 1.1 billion parameter language model fine-tuned from TinyLLaMA 1.1b 2T using Alpaca-style QLoRA and DPO (Direct Preference Optimization). It is designed for instruction-following tasks and uses an Alpaca-style prompt template. Its compact size and targeted fine-tuning make it suitable for resource-constrained environments.

Model Overview

The vihangd/dopeyplats-1.1b-2T-v1 is an experimental 1.1 billion parameter language model built on the TinyLLaMA 1.1b 2T base model. It has been fine-tuned using a combination of Alpaca-QLoRA and DPO (Direct Preference Optimization).

Key Characteristics

  • Base Model: TinyLLaMA 1.1b 2T
  • Parameter Count: 1.1 billion parameters
  • Fine-tuning Method: Alpaca-QLoRA with DPO for improved instruction following.
  • Training Data: Primarily trained on Alpaca-style datasets.
  • Prompt Format: Employs an Alpaca-style prompt template, making it compatible with existing Alpaca-tuned workflows (a sketch of the template follows this list).
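
The model card does not reproduce the exact template wording, so the following is a minimal sketch that assumes the standard Alpaca instruction/input/response layout; the `build_alpaca_prompt` helper is illustrative, not part of the model's own code.

```python
# Minimal sketch of an Alpaca-style prompt, assuming the standard
# instruction/input/response layout; the helper name is hypothetical.
def build_alpaca_prompt(instruction: str, input_text: str = "") -> str:
    """Format an instruction (with optional input) in the Alpaca style."""
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )


print(build_alpaca_prompt("Summarize the model's key characteristics."))
```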

Use Cases

This model is particularly well-suited for:

  • Instruction Following: Designed to respond effectively to natural-language instructions, thanks to its Alpaca-style fine-tuning.
  • Resource-Constrained Environments: At 1.1 billion parameters, it is a viable option for deployment where compute and memory are limited (see the usage sketch after this list).
  • Experimental Applications: Ideal for researchers and developers looking to experiment with compact, instruction-tuned models.
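
As a usage illustration, here is a minimal sketch for loading the model with Hugging Face transformers and generating from an Alpaca-style prompt. The generation settings are illustrative assumptions, not values taken from the model card.

```python
# Minimal usage sketch, assuming the standard transformers AutoModel APIs.
# Generation settings below are illustrative, not tuned values from the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vihangd/dopeyplats-1.1b-2T-v1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 precision listed above
    device_map="auto",           # requires the `accelerate` package
)

# Alpaca-style prompt, matching the template the model was tuned on.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nName three advantages of small language models.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
)

# Decode only the newly generated tokens (skip the prompt).
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```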