llama-anon/petra-13b-instruct

TEXT GENERATION · Open Weights
Concurrency Cost: 1 · Model Size: 13B · Quant: FP8 · Ctx Length: 4k · Published: Apr 9, 2023 · License: agpl-3.0 · Architecture: Transformer

llama-anon/petra-13b-instruct is a 13 billion parameter instruction-tuned causal language model, created by merging LLaMA-13B with Instruct-13B weights. This model is designed for general instruction-following tasks, providing direct responses to user prompts. Its primary use case is generating coherent and relevant text based on explicit instructions.


Model Overview

llama-anon/petra-13b-instruct is a 13 billion parameter instruction-tuned language model. It was developed by merging the weights of a base LLaMA-13B model with Instruct-13B weights, aiming to enhance its ability to follow instructions effectively. The model has a context length of 4096 tokens.
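The card does not document the exact merge recipe. A common approach for combining two checkpoints with identical architectures is linear interpolation of corresponding parameters; the sketch below illustrates that idea with plain Python floats standing in for tensors (the 50/50 ratio and per-parameter averaging are assumptions, not the documented method):

```python
def merge_weights(base, instruct, alpha=0.5):
    """Linearly interpolate two state dicts with matching keys.

    alpha=0.5 gives an even blend of base and instruct weights.
    Real checkpoints hold tensors; scalars stand in for them here.
    """
    assert base.keys() == instruct.keys(), "architectures must match"
    return {k: (1 - alpha) * base[k] + alpha * instruct[k] for k in base}

# Toy example with scalar "weights":
base = {"layer.0.weight": 1.0, "layer.0.bias": 0.0}
instruct = {"layer.0.weight": 3.0, "layer.0.bias": 2.0}
merged = merge_weights(base, instruct)
```

With `alpha=0.5` each merged parameter is simply the midpoint of the two source values.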

Key Capabilities

  • Instruction Following: Designed to interpret and respond to explicit user instructions.
  • Text Generation: Capable of generating coherent and relevant text based on provided prompts.
  • General Purpose: Suitable for a variety of tasks that require direct responses to user input.

Prompt Format

The model uses a straightforward prompt format: a user instruction, followed by optional additional input, after which the model generates its response. For example, given a sentiment-analysis instruction and a tweet as input, the model can classify the tweet as 'positive' or 'negative'.
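The card does not spell out the exact template. Instruction-tuned LLaMA merges of this era commonly use an Alpaca-style instruction/input/response layout, so the helper below assembles a prompt in that style as an assumed format (the `### Instruction:` section markers are not confirmed by the card):

```python
def build_prompt(instruction, input_text=None):
    """Assemble an Alpaca-style prompt (assumed format): an instruction,
    an optional input section, then a response cue for the model."""
    parts = [f"### Instruction:\n{instruction}"]
    if input_text:
        parts.append(f"### Input:\n{input_text}")
    parts.append("### Response:\n")
    return "\n\n".join(parts)

# The sentiment-analysis example from the card:
prompt = build_prompt(
    "Classify the sentiment of the tweet as 'positive' or 'negative'.",
    "I love this new phone!",
)
```

The resulting string ends with the response cue, so generation continues directly with the model's answer.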

Good For

  • Applications requiring a 13B parameter model for instruction-based text generation.
  • Tasks where a merged LLaMA architecture with instruction tuning is beneficial.
  • Developers looking for a model that "just werks" for general instruction-following.