lex-hue/Delexa-7b

Status: Warm (Public)
Parameters: 7B
Quantization: FP8
Context length: 8192 tokens
License: apache-2.0
Hugging Face: lex-hue/Delexa-7b
Overview

Delexa-7b: An Evolving General-Purpose LLM

Delexa-7b is a 7-billion-parameter large language model from Lex-Hue, currently in active development. Designed for general-purpose language tasks, it shows promising early evaluation results, particularly in LLM-as-a-judge comparisons, where it averages 8.14 and ranks above gpt-3.5-turbo and claude-v1 in preliminary testing. The model is uncensored: it permits 18+ and explicit content while still refusing to generate illegal content.
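
As a quick orientation, here is a minimal loading and generation sketch using the Hugging Face Transformers library. The chat-template usage, dtype, and sampling settings are assumptions for illustration, not official recommendations from Lex-Hue.

  # Minimal generation sketch for lex-hue/Delexa-7b with Transformers.
  # Assumes the checkpoint ships a chat template; adjust the prompt
  # format if it does not.
  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

  model_id = "lex-hue/Delexa-7b"
  tokenizer = AutoTokenizer.from_pretrained(model_id)
  model = AutoModelForCausalLM.from_pretrained(
      model_id,
      torch_dtype=torch.float16,  # or torch.bfloat16 on supported hardware
      device_map="auto",
  )

  messages = [{"role": "user", "content": "Explain what a context window is."}]
  inputs = tokenizer.apply_chat_template(
      messages, add_generation_prompt=True, return_tensors="pt"
  ).to(model.device)

  output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
  print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))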

Key Capabilities & Performance

  • General Task Proficiency: Strong performance on general tasks, as indicated by LLM-as-a-judge evaluations.
  • Uncensored Content Generation: Designed to allow 18+ and lewd content, with built-in guardrails against illegal content.
  • Promising Benchmarks: Preliminary Open LLM Leaderboard results show an average score of 70.86, including (see the reproduction sketch after this list):
    • AI2 Reasoning Challenge (25-shot): 68.00
    • HellaSwag (10-shot): 86.49
    • MMLU (5-shot): 64.69
    • GSM8k (5-shot): 64.75
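
The few-shot scores above can in principle be re-checked with EleutherAI's lm-evaluation-harness. The sketch below assumes a recent harness release that exposes lm_eval.simple_evaluate and the listed task name; exact task identifiers, few-shot settings, and prompting details may differ from the Open LLM Leaderboard configuration, so treat it as a starting point rather than an exact reproduction.

  # Sketch of re-running one of the benchmarks with
  # lm-evaluation-harness (pip install lm-eval). The task name and the
  # simple_evaluate call reflect recent harness versions.
  import lm_eval

  results = lm_eval.simple_evaluate(
      model="hf",
      model_args="pretrained=lex-hue/Delexa-7b,dtype=float16",
      tasks=["arc_challenge"],
      num_fewshot=25,  # matches the 25-shot ARC setting; repeat per task with its own shot count
      batch_size=8,
  )
  print(results["results"])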

Intended Use Cases

  • Exploration and Experimentation: Ideal for developers and AI enthusiasts exploring new language model capabilities.
  • STEM Reasoning: Potential applications in areas requiring strong scientific, technical, engineering, and mathematical reasoning.
  • Content Generation: Suitable for applications requiring less restrictive content policies, provided outputs are carefully monitored for responsible use (see the filtering sketch below).
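
Because the model is permissive by default, deployments will usually want some form of output monitoring. The sketch below is a hypothetical, minimal post-generation keyword screen; the blocklist, helper names, and policy are placeholders, and a production system should rely on a dedicated moderation model or service instead.

  # Hypothetical post-generation check illustrating the monitoring point
  # above. BLOCKLIST and the helper names are placeholders, not part of
  # the Delexa-7b model or any official tooling.
  BLOCKLIST = {"example_banned_term_1", "example_banned_term_2"}

  def violates_policy(text: str) -> bool:
      """Naive keyword screen over generated text."""
      lowered = text.lower()
      return any(term in lowered for term in BLOCKLIST)

  def safe_generate(generate_fn, prompt: str) -> str:
      """Wrap any generate(prompt) -> str callable with a post-check."""
      completion = generate_fn(prompt)
      if violates_policy(completion):
          return "[response withheld by content filter]"
      return completion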