darkc0de/BuddyGlass_v0.3_Xortron7MethedUpSwitchedUp

Text generation · Model size: 8B · Quantization: FP8 · Context length: 32K · Concurrency cost: 1 · Published: Sep 10, 2024 · Architecture: Transformer

BuddyGlass_v0.3_Xortron7MethedUpSwitchedUp by darkc0de is an 8-billion-parameter language model with a 32,768-token context length, created by merging several Llama-3.1-8B based models using the Model Stock method. The merge blends instruction-tuned and uncensored variants with the aim of combining their strengths, making it suited to applications that want a versatile 8B model drawing on multiple specialized sources.

BuddyGlass_v0.3_Xortron7MethedUpSwitchedUp Overview

BuddyGlass_v0.3_Xortron7MethedUpSwitchedUp is an 8 billion parameter language model developed by darkc0de, built upon the Llama-3.1-8B architecture. The model was created with the Model Stock merge method, combining several specialized Llama-3.1-8B variants to synthesize their distinct characteristics. The base model for the merge was Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2, with contributions from bunnycore/HyperLlama-3.1-8B, mlabonne/NeuralDaredevil-8B-abliterated, mlabonne/Hermes-3-Llama-3.1-8B-lorablated, and the previous iteration, darkc0de/BuddyGlass_v0.2_Xortron7MethedUpSwitchedUp.
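For readers who want to reproduce a merge of this shape, the sketch below shows how such a Model Stock merge could be expressed with the mergekit library. The configuration is a reconstruction from the model list above, not the author's published merge config, so its output would not be byte-identical to the released weights.

```python
# Hypothetical reconstruction of the merge: writes a mergekit Model Stock
# configuration and runs it. Requires `pip install mergekit`; the model
# list and base model are taken from the description above.
import subprocess
import textwrap

CONFIG = textwrap.dedent("""\
    merge_method: model_stock
    base_model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
    models:
      - model: bunnycore/HyperLlama-3.1-8B
      - model: mlabonne/NeuralDaredevil-8B-abliterated
      - model: mlabonne/Hermes-3-Llama-3.1-8B-lorablated
      - model: darkc0de/BuddyGlass_v0.2_Xortron7MethedUpSwitchedUp
    dtype: bfloat16
""")

with open("model_stock.yml", "w") as f:
    f.write(CONFIG)

# mergekit-yaml <config> <output-dir> is the standard CLI entry point.
subprocess.run(["mergekit-yaml", "model_stock.yml", "./buddyglass-merge"], check=True)
```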

Key Capabilities

  • Merged Intelligence: Combines the strengths of multiple Llama-3.1-8B based models, including uncensored and instruction-tuned variants.
  • Model Stock Method: Uses the Model Stock merging technique to integrate the constituent models' diverse characteristics; a simplified sketch of the per-layer update follows this list.
  • Versatile Foundation: Built on a robust 8B parameter base, suitable for a range of general-purpose language tasks.
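
As promised above, here is a simplified per-layer sketch of the Model Stock update from the original paper (Jang et al., 2024, arXiv:2403.19522). It illustrates the general technique rather than this model's actual merge, and the helper name model_stock_layer is hypothetical.

```python
# Simplified per-layer Model Stock merge: fine-tuned weights are assumed
# to lie on a thin shell around the base weights, so the merged layer
# interpolates the fine-tuned average back toward the base.
import torch
import torch.nn.functional as F

def model_stock_layer(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    k = len(finetuned)
    assert k >= 2, "Model Stock needs at least two fine-tuned models"
    deltas = [(w - base).flatten() for w in finetuned]
    # Estimate cos(theta) as the mean pairwise cosine similarity of the
    # fine-tuned deltas (the paper assumes this angle is roughly constant).
    cos_theta = torch.stack([
        F.cosine_similarity(deltas[i], deltas[j], dim=0)
        for i in range(k) for j in range(i + 1, k)
    ]).mean().clamp(min=0.0)
    # Interpolation ratio from the paper: t = k*cos / (1 + (k-1)*cos).
    t = k * cos_theta / (1 + (k - 1) * cos_theta)
    w_avg = torch.stack(finetuned).mean(dim=0)
    return t * w_avg + (1 - t) * base
```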

Good for

  • Developers seeking an 8B model with a unique blend of characteristics from several specialized Llama-3.1-8B merges (a minimal loading sketch follows this list).
  • Experimentation with models created via advanced merging techniques like Model Stock.
  • Applications that can benefit from a model drawing on several fine-tunes at once, potentially offering broader utility than any single constituent model.
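
As a minimal starting point, the sketch below loads the model with Hugging Face transformers. It assumes the weights are available under the repo id shown on this page; the prompt and generation settings are placeholders.

```python
# Minimal inference sketch, assuming the checkpoint is hosted on
# Hugging Face under the repo id shown on this page.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "darkc0de/BuddyGlass_v0.3_Xortron7MethedUpSwitchedUp"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # keep the checkpoint's native precision
    device_map="auto",   # place layers on available devices
)

# Llama-3.1 checkpoints ship a chat template, so apply_chat_template
# inserts the right special tokens for us.
messages = [{"role": "user", "content": "Explain model merging in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```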