quantumaikr/QuantumLM

Task: Text Generation · Model Size: 13B · Quantization: FP8 · Context Length: 4k · Concurrency Cost: 1 · Published: Jul 22, 2023 · License: cc-by-nc-4.0 · Architecture: Transformer · Open Weights

QuantumLM is a 13 billion parameter Llama2-based causal language model developed by quantumaikr. It has been fine-tuned on a Wizard-Orca style dataset to improve its instruction-following capabilities. The model is intended for research use and performs best when given explicit system and user prompts.


QuantumLM: A Llama2-Based Instruction-Following Model

QuantumLM is a 13 billion parameter language model built upon the Llama2 architecture, developed by quantumaikr. Its core strength lies in its instruction-following capabilities, achieved through fine-tuning on a Wizard-Orca style dataset.

Key Capabilities

  • Instruction Following: Designed to adhere closely to given system and user prompts, making it suitable for tasks requiring precise guidance.
  • Causal Language Modeling: Generates coherent and contextually relevant text based on preceding tokens.
  • Prompt Format Adherence: Optimized for a specific prompt structure (`### System:`, `### User:`, `### Assistant:`), ensuring predictable and controlled outputs.
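
The prompt structure above can be assembled with a small helper. The template below follows the `### System:` / `### User:` / `### Assistant:` headers listed in the capabilities; the commented loading and generation calls are a standard `transformers` sketch, and the specific generation settings shown are illustrative assumptions, not settings documented for this model.

```python
def build_prompt(system: str, user: str) -> str:
    """Assemble a prompt in the ### System / ### User / ### Assistant format."""
    return f"### System:\n{system}\n\n### User:\n{user}\n\n### Assistant:\n"

prompt = build_prompt(
    "You are a helpful research assistant.",
    "Summarize the Llama2 architecture in one sentence.",
)

# Loading and generation (sketch; requires `transformers`, `torch`,
# and enough memory for a 13B model):
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tokenizer = AutoTokenizer.from_pretrained("quantumaikr/QuantumLM")
# model = AutoModelForCausalLM.from_pretrained("quantumaikr/QuantumLM", device_map="auto")
# inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# output = model.generate(**inputs, max_new_tokens=256)
# print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Ending the prompt with the empty `### Assistant:` header cues the model to generate the assistant turn next.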

Intended Use and Limitations

QuantumLM is intended for research purposes only and is released under the CC BY-NC-4.0 license. While fine-tuning helps mitigate some biases, generated responses may still contain biases or toxicity. Use the model responsibly, and do not treat its outputs as definitive sources of truth or substitutes for human judgment.