Tim419/Humpback_Myx

TEXT GENERATION · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 4k · Published: Apr 14, 2025 · License: apache-2.0 · Architecture: Transformer · Open Weights

Tim419/Humpback_Myx is a 7 billion parameter Llama 2-based model designed for instruction backtranslation, a technique for self-alignment. It is trained in reverse, using outputs to predict instructions, making it a "backward model" for research into self-alignment methods. It is intended for reproducing the "Self-Alignment with Instruction Backtranslation" paper, using only the English subset of the openassistant-guanaco dataset.


Overview

Tim419/Humpback_Myx is a 7 billion parameter Llama 2-based model developed by Tim419. It is a specialized "backward model" created to reproduce the "Self-Alignment with Instruction Backtranslation" research paper. Unlike traditional language models, which predict outputs from instructions, Humpback_Myx is trained in reverse, learning to predict the instruction given an output.

Key Characteristics

  • Architecture: Llama 2, 7 billion parameters.
  • Training Data: Exclusively trained on the English subset of the openassistant-guanaco dataset.
  • Training Methodology: Uses a reversed training setup in which the model learns to infer the instruction from a given output.
  • Purpose: Primarily intended for research and reproduction of the Self-Alignment with Instruction Backtranslation technique.
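The reversed training setup described above can be sketched as follows: each (instruction, output) pair from the source dataset is flipped so the model conditions on the output and is trained to produce the instruction. This is a minimal illustration; the prompt template below is an assumption, not the exact format used by Humpback_Myx.

```python
# Sketch of building reversed (output -> instruction) training pairs for a
# backward model. The prompt wording is hypothetical, not the model's
# actual training template.

def make_backward_example(instruction: str, output: str) -> dict:
    """Swap the roles: the model sees the output and must predict the instruction."""
    return {
        "prompt": (
            "Below is a response. Write the instruction that likely "
            f"produced it.\n\nResponse:\n{output}\n\nInstruction:\n"
        ),
        "completion": instruction,  # the original instruction becomes the target
    }

pair = {
    "instruction": "Explain what a binary search tree is.",
    "output": "A binary search tree is a node-based data structure where ...",
}
example = make_backward_example(pair["instruction"], pair["output"])
```

Training then proceeds as ordinary supervised fine-tuning on these flipped pairs, which is what makes the resulting model "backward".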

Use Cases

  • Research in Self-Alignment: Ideal for researchers and developers exploring instruction backtranslation and self-alignment methods for large language models.
  • Understanding Model Behavior: Provides a unique perspective on how models can learn inverse relationships between instructions and outputs.
  • Experimental Development: Suitable for experimental setups where a model trained on reversed instruction-output pairs is required.
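In the instruction-backtranslation workflow, a backward model like this one is used to generate candidate instructions for unlabeled corpus texts, yielding synthetic (instruction, output) pairs for training a forward model. A minimal sketch of that self-augmentation step, where `generate_instruction` is a hypothetical stand-in for running backward-model inference:

```python
# Minimal sketch of the self-augmentation step in instruction
# backtranslation. `generate_instruction` is a hypothetical placeholder
# for prompting the backward model and decoding its predicted instruction.

def generate_instruction(text: str) -> str:
    # Placeholder: real usage would run the backward model on `text`.
    return f"Write a passage about: {text[:30]}..."

def self_augment(corpus: list[str]) -> list[dict]:
    """Turn unlabeled texts into synthetic (instruction, output) pairs."""
    return [
        {"instruction": generate_instruction(text), "output": text}
        for text in corpus
    ]

corpus = ["Binary search trees keep keys in sorted order for fast lookup."]
pairs = self_augment(corpus)
```

In the full method from the paper, these synthetic pairs are additionally filtered for quality (self-curation) before being used for fine-tuning; that step is omitted here.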