DreadPoor/Aspire-8B-model_stock
Text generation · Model size: 8B · Quant: FP8 · Context length: 8k · License: cc-by-nc-4.0 · Architecture: Transformer · Open weights

DreadPoor/Aspire-8B-model_stock is an 8 billion parameter language model created by DreadPoor, merged using the Model Stock method. This model integrates multiple Llama-3.1-8B variants and LoRAs, including those focused on uncensored responses, roleplay, and specialized knowledge in biology, physics, and medicine. It is designed to combine diverse capabilities from its constituent models, offering a broad range of applications.


Model Overview

DreadPoor/Aspire-8B-model_stock is an 8 billion parameter language model developed by DreadPoor, created through a merge of several pre-trained models using the Model Stock method. This approach combines the strengths of multiple Llama-3.1-8B base models and their associated LoRAs, aiming to produce a versatile and capable model.
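Since the constituent models are Llama-3.1-Instruct derivatives, the merged model presumably expects the Llama 3 chat template. A minimal sketch of that prompt format follows; the special-token layout is assumed from the Llama 3 template conventions, so verify it against the `chat_template` in the repo's `tokenizer_config.json` (in practice, `tokenizer.apply_chat_template` from `transformers` handles this for you):

```python
def build_llama3_prompt(user_msg: str,
                        system_msg: str = "You are a helpful assistant.") -> str:
    """Format a single-turn chat in the Llama 3 instruct template.

    Token layout is an assumption based on the standard Llama 3 template;
    check the model's tokenizer_config.json before relying on it.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_msg}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_msg}<|eot_id|>"
        # The trailing assistant header cues the model to generate its reply.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt("Explain osmosis in one sentence.")
# `prompt` can then be sent to any completion endpoint serving the model.
```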

Key Merge Details

The model was constructed using Sao10K/L3-8B-Stheno-v3.2 + grimjim/Llama-3-Instruct-abliteration-LoRA-8B as its base. The merge incorporates a diverse set of models and LoRAs, including:

  • Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2 with kloodia/lora-8b-bio
  • ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1 with Blackroot/Llama-3-8B-Abomination-LORA
  • arcee-ai/Llama-3.1-SuperNova-Lite with grimjim/Llama-3-Instruct-abliteration-LoRA-8B
  • mlabonne/Hermes-3-Llama-3.1-8B-lorablated with kloodia/lora-8b-physic
  • aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored with kloodia/lora-8b-medic

This combination suggests an intent to blend capabilities across areas such as uncensored content generation, role-playing, and specialized knowledge in biology, physics, and medicine.
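The merge recipe above maps directly onto a mergekit configuration, where Model Stock is available as the `model_stock` merge method and LoRAs are attached to base checkpoints with the `+` syntax. A plausible sketch, reconstructed from the model list above (any field not stated on this card, such as `dtype`, is an assumption):

```yaml
# Hypothetical mergekit config reconstructing the merge described above.
merge_method: model_stock
base_model: Sao10K/L3-8B-Stheno-v3.2+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
models:
  - model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2+kloodia/lora-8b-bio
  - model: ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1+Blackroot/Llama-3-8B-Abomination-LORA
  - model: arcee-ai/Llama-3.1-SuperNova-Lite+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
  - model: mlabonne/Hermes-3-Llama-3.1-8B-lorablated+kloodia/lora-8b-physic
  - model: aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored+kloodia/lora-8b-medic
dtype: bfloat16  # assumed; not stated on the card
```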

Performance Benchmarks

Evaluated on the Open LLM Leaderboard, DreadPoor/Aspire-8B-model_stock achieved an average score of 28.28. Note that this average spans the leaderboard's full benchmark suite, which includes tasks beyond the four listed here, so it is lower than the mean of the scores below. Reported per-metric scores include:

  • IFEval (0-Shot): 71.41
  • BBH (3-Shot): 32.53
  • MATH Lvl 5 (4-Shot): 12.99
  • MMLU-PRO (5-Shot): 30.70

Detailed results are available on the Open LLM Leaderboard and on the model's dedicated results page there.