Value4AI/ValueLlama-3-8B

Task: Text generation · Model size: 8B · Quantization: FP8 · Context length: 8K · Published: Jul 13, 2024 · License: llama3 · Architecture: Transformer

Value4AI/ValueLlama-3-8B is an 8 billion parameter language model, fine-tuned from Meta-Llama-3-8B-Instruct, designed for perception-level value measurement. It specializes in two tasks: relevance classification to determine if a perception is relevant to a value, and valence classification to assess if a perception supports, opposes, or is neutral towards a value. This model is primarily intended for research applications focused on measuring human and AI values.


ValueLlama-3-8B: Specialized for Value Measurement

ValueLlama-3-8B is an 8 billion parameter language model, built upon the meta-llama/Meta-Llama-3-8B-Instruct architecture. Developed by Value4AI, this model is uniquely engineered for perception-level value measurement within an open-ended value space.

Key Capabilities

  • Relevance Classification: Determines whether a given perception is relevant to a specific value.
  • Valence Classification: Assesses if a perception supports, opposes, or is neutral (context-dependent) towards a value.
  • Generative Psychometrics: Frames both classification tasks as generation: given a value and a perception, the model generates the corresponding label.
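Because both tasks are framed as label generation, a minimal sketch of how one might prompt the model and parse its output could look like the following. The prompt wording and label sets here are illustrative assumptions; the actual templates are defined in the Value4AI codebase linked below.

```python
# Illustrative sketch of framing relevance and valence classification as
# label generation with ValueLlama-3-8B. Prompt templates and label sets
# are assumptions for demonstration, not the model's official format.

RELEVANCE_LABELS = ("yes", "no")                     # assumed label set
VALENCE_LABELS = ("supports", "opposes", "neutral")  # assumed label set


def relevance_prompt(value: str, perception: str) -> str:
    """Build a relevance-classification prompt (hypothetical template)."""
    return (
        f"Value: {value}\n"
        f"Perception: {perception}\n"
        "Is the perception relevant to the value? Answer yes or no."
    )


def valence_prompt(value: str, perception: str) -> str:
    """Build a valence-classification prompt (hypothetical template)."""
    return (
        f"Value: {value}\n"
        f"Perception: {perception}\n"
        "Does the perception support, oppose, or stay neutral toward the value?"
    )


def parse_label(generated: str, labels: tuple) -> str:
    """Map the model's free-form generation onto the expected label set."""
    text = generated.strip().lower()
    for label in labels:
        if label in text:
            return label
    raise ValueError(f"No known label in: {generated!r}")


# Running the model itself (requires a GPU and the transformers library):
#
#   from transformers import pipeline
#   gen = pipeline("text-generation", model="Value4AI/ValueLlama-3-8B")
#   out = gen(relevance_prompt("honesty", "She returned the lost wallet."),
#             max_new_tokens=5)[0]["generated_text"]
#   label = parse_label(out, RELEVANCE_LABELS)
```

The helper `parse_label` is a defensive convenience: even with constrained prompts, instruction-tuned models may wrap the label in extra text, so matching against the known label set is more robust than comparing the raw generation.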

Intended Use

This model is specifically designed for research purposes related to:

  • Measuring human and AI values.
  • Conducting analyses based on value perception.

For more in-depth information, refer to the associated research paper: Measuring Human and AI Values based on Generative Psychometrics with Large Language Models. The codebase for this model is available on GitHub.