Sakai0920/LLM-Advanced-Competition-2025-merged-v9
Text generation · Concurrency cost: 1 · Model size: 7.6B · Quantization: FP8 · Context length: 32k · Published: Feb 20, 2026 · License: apache-2.0 · Architecture: Transformer · Open weights
Sakai0920/LLM-Advanced-Competition-2025-merged-v9 is a 7.6-billion-parameter instruction-tuned language model, fine-tuned from Qwen/Qwen2.5-7B-Instruct. It is optimized for agentic tasks, trained on datasets such as ALFWorld v5 and DBBench v4, and supports a 32,768-token context length for complex reasoning and database-interaction scenarios.
Overview
Sakai0920/LLM-Advanced-Competition-2025-merged-v9 is a 7.6 billion parameter language model, fine-tuned from Qwen/Qwen2.5-7B-Instruct. This iteration, developed by Sakai0920, focuses on enhancing agentic capabilities through specialized training.
Key Capabilities
- Agentic Task Performance: Optimized for complex agent-based interactions and decision-making.
- Database Interaction: Trained with DBBench v4, indicating proficiency in database-related tasks.
- Environment Navigation: Utilizes ALFWorld v5 data, suggesting strengths in understanding and navigating virtual environments.
- Efficient Fine-tuning: Trained with LoRA (rank r=32, alpha=64) on top of 4-bit quantized base weights, keeping the fine-tuning memory footprint small.
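To give a rough sense of what LoRA at r=32 adds on a model this size, the adapter parameter count can be estimated from the base model's layer shapes. This is a hedged back-of-the-envelope sketch: the dimensions below are assumptions taken from Qwen2.5-7B's published configuration, and the choice to adapt only the attention projections is illustrative, not confirmed by this model card.

```python
# Hedged sketch: estimating LoRA adapter size at r=32, alpha=64.
# Assumed Qwen2.5-7B shapes (hidden=3584, grouped-query attention with
# 4 KV heads of dim 128, 28 layers); adapting only q/k/v/o projections
# is an illustrative assumption, not a detail stated in the model card.

def lora_params(d_in: int, d_out: int, r: int = 32) -> int:
    """LoRA freezes the d_out x d_in weight W and learns an update B @ A,
    where A is r x d_in and B is d_out x r, adding r*(d_in + d_out) params."""
    return r * (d_in + d_out)

hidden = 3584
kv_dim = 4 * 128          # 4 KV heads x head_dim 128 (grouped-query attention)
layers = 28

per_layer = (
    lora_params(hidden, hidden)    # q_proj
    + lora_params(hidden, kv_dim)  # k_proj
    + lora_params(hidden, kv_dim)  # v_proj
    + lora_params(hidden, hidden)  # o_proj
)
total = per_layer * layers
scaling = 64 / 32  # alpha / r: the factor applied to the LoRA branch output

print(f"~{total / 1e6:.1f}M trainable params, scaling={scaling}")
```

Under these assumptions the adapters add roughly 20M trainable parameters, a fraction of a percent of the 7.6B base, which is what makes 4-bit quantized fine-tuning practical on modest hardware.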
Good for
- Developing AI agents requiring advanced reasoning and planning.
- Applications involving database querying and manipulation.
- Tasks that benefit from understanding and interacting within simulated environments.
- Researchers and developers participating in LLM advanced competitions focused on agentic AI.
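DBBench-style database tasks typically wrap the model in a propose-execute-observe loop: the agent sees the schema and task, emits SQL, and observes the result. The stdlib-only sketch below illustrates the shape of such a loop; `propose_sql` is a hypothetical stand-in for a generation call with this model, and the table and task are invented for illustration.

```python
import sqlite3

def propose_sql(observation: str) -> str:
    # Hypothetical stand-in for the model: a real agent would generate SQL
    # from the observation via a chat completion. Here it is hard-coded.
    return "SELECT name FROM users WHERE age > 30 ORDER BY name;"

def run_episode() -> list:
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, age INTEGER)")
    conn.executemany("INSERT INTO users VALUES (?, ?)",
                     [("alice", 34), ("bob", 28), ("carol", 41)])
    # Agent loop (single step here): observe schema + task, propose SQL,
    # execute it, and return the observed result.
    observation = "Table users(name TEXT, age INTEGER). Task: names older than 30."
    sql = propose_sql(observation)
    rows = conn.execute(sql).fetchall()
    conn.close()
    return [r[0] for r in rows]

print(run_episode())  # → ['alice', 'carol']
```

In a real evaluation the loop runs for multiple turns, feeding each query result (or error message) back to the model as the next observation.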