lex-hue/Delexa-7b
TEXT GENERATION · Concurrency Cost: 1 · Model Size: 7B · Quant: FP8 · Ctx Length: 8k · Published: Apr 5, 2024 · License: apache-2.0 · Architecture: Transformer · Open Weights

Delexa-7b is a 7 billion parameter large language model developed by Lex-Hue, designed for general-purpose language tasks. It demonstrates strong initial performance on general tasks in LLM-judge evaluations, with potential applications in STEM reasoning. The model is under active development and refinement, with a focus on uncensored content generation while adhering to legal guardrails.
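The 8k context length listed above is a shared budget for prompt and completion. A minimal stdlib-only sketch of that budgeting, using a rough 4-characters-per-token heuristic (this helper is hypothetical, not part of any Delexa or Lex-Hue tooling; a real client should count tokens with the model's actual tokenizer):

```python
CONTEXT_LENGTH = 8192  # Delexa-7b's 8k context window

def max_new_tokens(prompt: str, chars_per_token: float = 4.0) -> int:
    """Estimate the completion budget left after the prompt.

    Uses a crude chars-per-token heuristic purely for illustration;
    token counts from the model's tokenizer will differ.
    """
    est_prompt_tokens = int(len(prompt) / chars_per_token) + 1
    return max(0, CONTEXT_LENGTH - est_prompt_tokens)

# A short prompt leaves nearly the whole window for generation.
budget = max_new_tokens("Summarize the Apache-2.0 license in one paragraph.")
```

Prompts whose estimated token count exceeds the window simply get a budget of zero, signaling the caller to truncate.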


Delexa-7b: An Evolving General-Purpose LLM

Delexa-7b is a 7 billion parameter large language model from Lex-Hue, currently in active development. Designed for general-purpose language tasks, it shows promising initial evaluation results: on LLM-judge it scores an average of 8.14, placing it above gpt-3.5-turbo and claude-v1 in preliminary comparisons. The model is notably uncensored, allowing 18+ and lewd content while still blocking illegal content generation.

Key Capabilities & Performance

  • General Task Proficiency: Strong performance on general tasks as indicated by LLM-judge evaluations.
  • Uncensored Content Generation: Designed to allow 18+ and lewd content, with built-in guardrails against illegal content.
  • Promising Benchmarks: Preliminary Open LLM Leaderboard results show an average score of 70.86, including:
    • AI2 Reasoning Challenge (25-Shot): 68.00
    • HellaSwag (10-Shot): 86.49
    • MMLU (5-Shot): 64.69
  • GSM8k (5-Shot): 64.75

Intended Use Cases

  • Exploration and Experimentation: Ideal for developers and AI enthusiasts exploring new language model capabilities.
  • STEM Reasoning: Potential applications in areas requiring strong scientific, technical, engineering, and mathematical reasoning.
  • Content Generation: Suitable for applications requiring less restrictive content policies, with careful monitoring for responsible use.

Popular Sampler Settings

The three most popular sampler configurations used by Featherless users for this model tune the following parameters (specific values vary by configuration):

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
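These sampler parameters map directly onto the fields of an OpenAI-style completions request. A minimal sketch of such a payload, assuming an OpenAI-compatible endpoint (as Featherless exposes) and using illustrative placeholder values, not the actual popular configurations from this page:

```python
import json

# Illustrative sampler payload for an OpenAI-compatible completions API.
# All numeric values below are placeholders, NOT the real popular configs.
payload = {
    "model": "lex-hue/Delexa-7b",
    "prompt": "Briefly explain the HellaSwag benchmark.",
    "max_tokens": 256,
    # Sampler parameters listed above:
    "temperature": 0.8,         # randomness of token selection
    "top_p": 0.95,              # nucleus-sampling cumulative-probability cutoff
    "top_k": 40,                # sample only from the 40 most likely tokens
    "frequency_penalty": 0.0,   # penalize tokens by how often they appeared
    "presence_penalty": 0.0,    # penalize tokens that appeared at all
    "repetition_penalty": 1.1,  # multiplicative penalty on repeated tokens
    "min_p": 0.05,              # drop tokens below 5% of the top token's probability
}

body = json.dumps(payload)  # JSON body ready to POST to a /v1/completions endpoint
```

Note that `frequency_penalty`/`presence_penalty` (additive, OpenAI-style) and `repetition_penalty`/`min_p` (common open-model extensions) are distinct mechanisms; most configurations lean on one family or the other rather than all four at once.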