NickyNicky/gemma-1.1-2b-it_DIBT_prompts_ranked_En_Es_orpo_V2
NickyNicky/gemma-1.1-2b-it_DIBT_prompts_ranked_En_Es_orpo_V2 is a 2.5 billion parameter instruction-tuned Gemma 1.1 model developed by NickyNicky, with a context length of 8192 tokens. This model is specifically fine-tuned for evaluating prompts in Spanish, classifying them by rating, cluster description, topic, and kind. It excels at processing and categorizing user prompts, particularly those related to mathematical problems and animal care, making it suitable for automated prompt analysis and routing.
Model Overview
NickyNicky/gemma-1.1-2b-it_DIBT_prompts_ranked_En_Es_orpo_V2 is a 2.5 billion parameter instruction-tuned model based on the Gemma 1.1 architecture, developed by NickyNicky. It is designed to act as an expert agent for evaluating prompts specifically in Spanish. The model processes user prompts and outputs a structured JSON response containing an average rating, cluster description, topic, and kind.
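As an instruction-tuned Gemma model, it expects input wrapped in Gemma's chat turn markers. The sketch below shows one way to build such a prompt; the Spanish instruction text is an illustrative assumption, not the actual instruction the model was trained on:

```python
def build_gemma_prompt(user_prompt: str) -> str:
    """Wrap a Spanish user prompt in Gemma's chat turn markers.

    The instruction below is a hypothetical example of how one might
    ask the model to evaluate a prompt; the exact training instruction
    is not documented in this model card.
    """
    instruction = (
        "Eres un agente experto en evaluar prompts en español. "
        "Devuelve un JSON con avg_rating_es, cluster_description_es, "
        "topic_es y kind_es."
    )
    return (
        "<start_of_turn>user\n"
        f"{instruction}\n\n{user_prompt}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = build_gemma_prompt("¿Cuánto es el 15% de 200?")
```

The trailing `<start_of_turn>model` marker cues the model to begin its structured response.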
Key Capabilities
- Spanish Prompt Evaluation: Specialized in analyzing and categorizing prompts written in Spanish.
- Structured Output: Generates a JSON object for each evaluated prompt, including:
  - avg_rating_es: An average rating (e.g., "2.0").
  - cluster_description_es: A description of the prompt's cluster (e.g., "Problemas Matemáticos y Cuidado de Animales").
  - topic_es: The primary topic of the prompt (e.g., "Matemáticas").
  - kind_es: The type or origin of the prompt (e.g., "humano").
- Context Length: Supports a context length of 8192 tokens, allowing it to process moderately long prompts.
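A downstream application would consume this structured output by extracting and parsing the JSON object from the model's raw text. A minimal sketch, in which the sample response string is illustrative (built from the field examples above) and the extraction helper is a hypothetical convenience, not part of the model's API:

```python
import json
from typing import TypedDict


class PromptEvaluation(TypedDict):
    """The four fields the model card documents in its JSON output."""
    avg_rating_es: str
    cluster_description_es: str
    topic_es: str
    kind_es: str


def parse_evaluation(model_output: str) -> PromptEvaluation:
    """Extract the first-to-last brace span and parse it as JSON."""
    start = model_output.find("{")
    end = model_output.rfind("}") + 1
    if start == -1 or end == 0:
        raise ValueError("no JSON object found in model output")
    return json.loads(model_output[start:end])


# Illustrative raw output, matching the field examples above.
raw = (
    '{"avg_rating_es": "2.0", '
    '"cluster_description_es": "Problemas Matemáticos y Cuidado de Animales", '
    '"topic_es": "Matemáticas", "kind_es": "humano"}'
)
evaluation = parse_evaluation(raw)
```

Slicing from the first `{` to the last `}` is a simple guard against any surrounding chat markers or whitespace in the generated text.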
Good For
- Automated classification and routing of Spanish-language user queries.
- Analyzing prompt quality and content in Spanish.
- Applications requiring structured metadata extraction from user inputs in Spanish, particularly for topics like mathematics and animal care.
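For the routing use case, the topic_es field can drive a simple dispatch table. A minimal sketch, assuming the evaluation has already been parsed into a dict; the queue names are hypothetical, and only the field name comes from the model's documented output schema:

```python
def route_query(evaluation: dict) -> str:
    """Pick a handler queue from the evaluated topic.

    Queue names are hypothetical placeholders; unknown topics fall
    through to a general-purpose queue.
    """
    routes = {
        "Matemáticas": "math-support-queue",
        "Cuidado de Animales": "animal-care-queue",
    }
    return routes.get(evaluation["topic_es"], "general-queue")
```

For example, `route_query({"topic_es": "Matemáticas"})` would return `"math-support-queue"`.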