NewstaR/Morningstar-13b-hf

Text Generation · Concurrency Cost: 1 · Model Size: 13B · Quant: FP8 · Context Length: 4K · Published: Sep 13, 2023 · Architecture: Transformer · Status: Warm

NewstaR/Morningstar-13b-hf is a 13-billion-parameter language model based on Llama 2, developed by NewstaR and optimized for general natural language processing tasks. With a 4096-token context length, it is well suited to text generation, content creation, and conversational agent applications, producing coherent text across a wide range of topics and fitting into diverse NLP workflows.

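As a starting point, here is a minimal sketch of loading the model with Hugging Face transformers; the dtype, device placement, and sampling values are illustrative assumptions, not settings from the model card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

MODEL_ID = "NewstaR/Morningstar-13b-hf"

# float16 and device_map="auto" are illustrative choices for fitting
# a 13B model in GPU memory; device_map requires the accelerate package.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "The three main benefits of unit testing are"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate up to 128 new tokens; sampling settings are example values.
outputs = model.generate(
    **inputs, max_new_tokens=128, do_sample=True, temperature=0.7
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```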

Morningstar-13b-hf Overview

NewstaR's Morningstar-13b-hf is a 13-billion-parameter language model built on the Llama 2 architecture. It is designed for a range of natural language processing tasks, with a focus on generating fluent, coherent text.

Key Capabilities

  • Text Generation: Produces human-like text across various topics and styles.
  • Content Creation: Suitable for drafting articles, summaries, and other textual content (see the pipeline sketch after this list).
  • Conversational Agents: Can be integrated into dialogue systems for interactive applications.
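For the content-creation capability, a hedged sketch using the transformers text-generation pipeline is below; the prompt and sampling values are example choices, not recommendations from the model card:

```python
from transformers import pipeline

# The pipeline wraps tokenization, generation, and decoding in one call.
generator = pipeline(
    "text-generation",
    model="NewstaR/Morningstar-13b-hf",
    device_map="auto",  # illustrative; requires the accelerate package
)

draft = generator(
    "Write a short product announcement for a new note-taking app:\n",
    max_new_tokens=200,
    do_sample=True,
    temperature=0.8,  # example value
    top_p=0.95,       # example value
)
print(draft[0]["generated_text"])
```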

Performance Highlights

Evaluated on the Open LLM Leaderboard, Morningstar-13b-hf achieves an average score of 50.48; note that the leaderboard average also folds in benchmarks beyond the five listed here. Reported results include:

  • ARC (25-shot): 59.04
  • HellaSwag (10-shot): 81.93
  • MMLU (5-shot): 54.63
  • TruthfulQA (0-shot): 44.12
  • Winogrande (5-shot): 74.51
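These leaderboard scores were produced with EleutherAI's lm-evaluation-harness; the sketch below shows one way to reproduce a single benchmark locally. It assumes lm-eval v0.4+ (`pip install lm-eval`), and exact numbers may differ from the leaderboard's pinned harness version:

```python
import lm_eval

# Evaluate HellaSwag with 10 few-shot examples, matching the setup
# listed above. `simple_evaluate` returns a dict of per-task metrics.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=NewstaR/Morningstar-13b-hf,dtype=float16",
    tasks=["hellaswag"],
    num_fewshot=10,
    batch_size=8,  # example value; tune to available memory
)
print(results["results"]["hellaswag"])
```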

Intended Use Cases

This model is well-suited for applications requiring robust text generation and understanding, such as:

  • Automated content generation for marketing or informational purposes.
  • Developing chatbots or virtual assistants (a minimal chat loop is sketched after this list).
  • Assisting with creative writing or drafting.
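For the chatbot case, a minimal interactive loop might look like the following. The [INST] wrapper is an assumption carried over from Llama 2 chat conventions, since this card does not document a prompt template; verify against the upstream model before relying on it:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

MODEL_ID = "NewstaR/Morningstar-13b-hf"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)

def ask(question: str) -> str:
    # Assumed Llama-2-style instruction format.
    prompt = f"[INST] {question} [/INST]"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(
        **inputs, max_new_tokens=256, do_sample=True, temperature=0.7
    )
    # Keep only the newly generated tokens, dropping the echoed prompt.
    new_tokens = out[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True).strip()

while True:
    user = input("You: ")
    if user.lower() in {"quit", "exit"}:
        break
    print("Bot:", ask(user))
```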

Limitations and Ethical Considerations

Like other large language models, Morningstar-13b-hf may generate incorrect, nonsensical, biased, or unsafe content. Users should monitor and filter outputs, especially in real-world applications, and avoid harmful or unethical prompts; one possible filtering shape is sketched below. Details of the model's training data are not publicly available, though it was likely trained on a large internet-scraped corpus.
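The following is a hedged sketch of a post-generation check; the pattern list is a placeholder, and a production system would use a trained moderation classifier or service instead:

```python
import re

# Placeholder policy: block outputs matching simple patterns.
BLOCKED_PATTERNS = [
    re.compile(r"\b(credit card number|social security number)\b", re.IGNORECASE),
]

def filter_output(text: str) -> str:
    """Return the text unchanged, or a refusal if a pattern matches."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(text):
            return "[response withheld by content filter]"
    return text

# Usage: wrap any generation call before showing output to users.
print(filter_output("Here is my social security number: ..."))
# -> [response withheld by content filter]
```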

Popular Sampler Settings

Featherless tracks the three sampler configurations most commonly used with this model. Each configuration sets the following parameters, which map directly onto generation request fields (see the request sketch after this list):

  • temperature
  • top_p
  • top_k
  • frequency_penalty
  • presence_penalty
  • repetition_penalty
  • min_p
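A hedged sketch of passing these parameters to Featherless's OpenAI-compatible completions endpoint follows. The values are illustrative defaults, not the popular configs referenced above, and the extension fields (top_k, min_p, repetition_penalty) assume the server accepts vLLM-style extra sampling parameters:

```python
import os
import requests

resp = requests.post(
    "https://api.featherless.ai/v1/completions",
    headers={"Authorization": f"Bearer {os.environ['FEATHERLESS_API_KEY']}"},
    json={
        "model": "NewstaR/Morningstar-13b-hf",
        "prompt": "Write a haiku about morning stars.",
        "max_tokens": 128,
        # Standard OpenAI-compatible sampling fields (example values).
        "temperature": 0.7,
        "top_p": 0.9,
        "frequency_penalty": 0.0,
        "presence_penalty": 0.0,
        # Extension fields; assumed to pass through as extra params.
        "top_k": 40,
        "min_p": 0.05,
        "repetition_penalty": 1.1,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])
```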