androlike/astramix_l2_7b

Text generation · Concurrency cost: 1 · Model size: 7B · Quantization: FP8 · Context length: 4k · License: llama2 · Architecture: Transformer · Open weights · Cold

The androlike/astramix_l2_7b model is a 7 billion parameter language model created by androlike, formed by merging several Llama-2-7b finetunes with the ties-merge method, followed by LoRA merges. The model excels at roleplay, a strength inherited from the finetunes in the merge, and is best suited for applications requiring creative and interactive text generation, particularly roleplaying scenarios.


Astramix L2 7B Overview

Astramix L2 7B is a 7 billion parameter language model developed by androlike, created through a multi-stage merging process. It combines several Llama-2-7b finetune models using the ties-merge method, followed by additional LoRA merges with a script from zarakiquemparte. The base model for this merge is Llama-2-7B-fp16.
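A ties merge of this kind is commonly expressed today as a mergekit configuration. The sketch below is illustrative only: the component model identifiers, densities, and weights are assumptions, not androlike's actual recipe, and the subsequent LoRA merge step is not shown.

```yaml
# Hypothetical mergekit-style config sketching a ties merge of
# Llama-2-7b finetunes (names and weights are placeholders).
models:
  - model: NousResearch/Nous-Hermes-llama-2-7b
    parameters:
      density: 0.5
      weight: 0.3
  - model: jondurbin/airoboros-l2-7b-2.1
    parameters:
      density: 0.5
      weight: 0.3
merge_method: ties
base_model: TheBloke/Llama-2-7B-fp16
dtype: float16
```

In a ties merge, each finetune's delta from the base model is sparsified (controlled by `density`), sign conflicts across models are resolved by majority, and the surviving deltas are combined with the given weights before being added back to the base.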

Key Capabilities

  • Roleplay: Demonstrates strong capabilities in generating roleplay-oriented content.
  • Censorship: Exhibits minimal censorship, allowing for a broader range of generated text.
  • Merged Architecture: Built upon a foundation of diverse Llama-2-7b finetunes including Nous-Hermes, Airoboros, Orca Mini, Platypus2, and Tulpar, further enhanced by limarp-llama2-v2 and airoboros-lmoe-7b-2.1 LoRAs.

Limitations and Usage

  • Reasoning: Due to its small parameter count, the model's reasoning quality is poor.
  • Bias: Can generate heavily biased output, potentially unsuitable for minors or general audiences, partly due to the inclusion of limarp in the merge.
  • Instruction Format: The Alpaca instruct format is recommended for optimal prompting:
    ### Instruction:
    (your instruct prompt is here)
    ### Response:
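As a minimal illustration, the format above can be assembled with a small helper. The function name and the exact spacing between sections are my assumptions; adjust to match your inference stack.

```python
def alpaca_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca instruct format
    recommended for this model. The trailing '### Response:'
    header cues the model to begin its answer."""
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

# Example: build a roleplay-style prompt.
prompt = alpaca_prompt("Describe a misty forest clearing at dawn.")
print(prompt)
```

The resulting string is what you pass to the model as-is; generation then continues from the line after `### Response:`.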

Good For

  • Creative Roleplay: Ideal for applications requiring engaging and dynamic roleplaying interactions.
  • Exploratory Text Generation: Suitable for users who need a model with less inherent censorship for diverse content creation.
  • Research and Experimentation: Useful for developers and researchers exploring merged model architectures and their specific performance characteristics.