standrey/listing-parser-llama31-8b-ft-v1-full
Text Generation | Concurrency Cost: 1 | Model Size: 8B | Quant: FP8 | Context Length: 32k | Published: Apr 28, 2026 | License: apache-2.0 | Architecture: Transformer | Open Weights
standrey/listing-parser-llama31-8b-ft-v1-full is an 8-billion-parameter Llama 3.1-based model fine-tuned by standrey using Unsloth and Hugging Face's TRL library. It is optimized for parsing tasks and is intended for applications that require structured data extraction from listings.
Model Overview
The standrey/listing-parser-llama31-8b-ft-v1-full is an 8 billion parameter language model developed by standrey. It is fine-tuned from the unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit base model, using the Unsloth library for accelerated training together with Hugging Face's TRL library.
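The repository does not document a specific inference recipe, so the following is a minimal sketch, assuming the fine-tuned weights are available on the Hugging Face Hub under the ID above and follow the standard Llama 3.1 chat template; the system prompt and example listing are illustrative only.

```python
# Minimal inference sketch (assumptions: standard Llama 3.1 chat template, weights
# loadable via transformers; the listed FP8 quant applies to hosted serving, so
# bf16 is used here for local loading).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "standrey/listing-parser-llama31-8b-ft-v1-full"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Hypothetical prompt: the exact field schema the fine-tune expects is not documented.
messages = [
    {"role": "system", "content": "Extract structured fields from the listing and answer with JSON."},
    {"role": "user", "content": "2015 Honda Civic EX, 62k miles, one owner, sunroof, asking $11,500."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```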
Key Capabilities
- Llama 3.1 Architecture: Built upon the robust Llama 3.1 foundation, providing strong language understanding capabilities.
- Efficient Fine-tuning: Leverages Unsloth for roughly 2x faster training, pointing to a lightweight, specialized fine-tuning process (a representative setup is sketched after this list).
- Listing Parsing Focus: The model's name suggests a primary specialization in parsing and extracting information from listings.
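The exact training recipe has not been published. As a rough illustration of how an Unsloth + TRL supervised fine-tune of the stated base model is typically assembled, the sketch below follows the common Unsloth notebook pattern; the dataset file, LoRA rank, and hyperparameters are placeholders, not the author's values.

```python
# Generic Unsloth + TRL SFT sketch, not the author's published recipe.
# Assumptions are marked inline; hyperparameters are illustrative.
from unsloth import FastLanguageModel
from trl import SFTTrainer, SFTConfig
from datasets import load_dataset

# Base model named in the overview above.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
    max_seq_length=4096,
    load_in_4bit=True,
)

# Attach LoRA adapters (rank and target modules are typical defaults, not documented values).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical dataset of listing/label pairs rendered into a single "text" column.
dataset = load_dataset("json", data_files="listings_sft.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="listing-parser-ft",
        dataset_text_field="text",
        max_seq_length=4096,
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-4,
        bf16=True,
        logging_steps=10,
    ),
)
trainer.train()
```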
Good For
- Structured Data Extraction: Ideal for pulling specific data points out of unstructured or semi-structured text, particularly listing formats (see the usage sketch after this list).
- Applications Requiring Llama 3.1 Base: Suitable for developers already working with or preferring the Llama 3.1 model family.
- Efficient Deployment: At 8B parameters, served with FP8 quantization and a 32k context window, the model is compact enough to run economically for focused parsing workloads.
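Since the expected prompt wording and output schema are not documented, the following is a hypothetical usage sketch for structured extraction; the field names, prompt template, and the generate_fn callable are illustrative assumptions rather than part of the model's interface.

```python
# Hypothetical extraction wrapper: prompt wording and field schema are assumptions,
# not part of the model's documented interface.
import json

EXTRACTION_PROMPT = (
    "Extract the following fields from the listing and reply with JSON only: "
    "title, price, currency, location, attributes.\n\nListing:\n{listing}"
)

def parse_listing(generate_fn, listing_text: str) -> dict:
    """generate_fn: any callable that sends a prompt to the model and returns its text reply."""
    raw = generate_fn(EXTRACTION_PROMPT.format(listing=listing_text))
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Guard against stray prose around the JSON object in the model's reply.
        start, end = raw.find("{"), raw.rfind("}") + 1
        return json.loads(raw[start:end])

# Example (my_generate would wrap whatever client is used to call the model):
# fields = parse_listing(my_generate, "Bright 2BR apartment, 1,200 EUR/month, pets allowed.")
```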