Sara121/Llama_3.1_8B_ABS_Regulatory
Text generation · Model size: 8B · Quant: FP8 · Context length: 32k · Published: Mar 31, 2026 · License: llama3.1 · Architecture: Transformer

Sara121/Llama_3.1_8B_ABS_Regulatory is an 8-billion-parameter Llama 3.1-based language model, fine-tuned for regulatory question answering in engineering domains, specifically ABS (American Bureau of Shipping) regulations. It generates structured, context-aware answers grounded in regulatory text and is designed for compliance and engineering reasoning tasks, with a 32,768-token context length for detailed analysis.

Model Overview

This model, Sara121/Llama_3.1_8B_ABS_Regulatory, is a specialized version of Meta's Llama-3.1-8B-Instruct, adapted for regulatory question answering within engineering contexts, particularly focusing on ABS (American Bureau of Shipping) regulations. It is a fully merged model, meaning its LoRA adapters, trained using Unsloth, have been integrated into the base model, making it a standalone checkpoint.
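
Because the adapters are already merged, the checkpoint should load like any standard Llama 3.1 Instruct model through the Hugging Face transformers API. The sketch below is illustrative: the example question, dtype, and generation settings are assumptions rather than values published with this card (the hosted endpoint serves an FP8 quantization; a local bf16 load is shown here).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Sara121/Llama_3.1_8B_ABS_Regulatory"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed dtype; adjust to your hardware
    device_map="auto",
)

# Chat-style prompt, matching the model's fine-tuning format.
messages = [
    {"role": "system", "content": "You answer questions about ABS regulations."},
    {"role": "user", "content": "What does ABS classification cover for hull structures?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```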

Key Capabilities

  • Domain-Specific Q&A: Optimized for answering questions based on ABS regulatory documents.
  • Context-Aware Responses: Generates structured answers grounded in specific regulatory text.
  • Engineering Compliance: Supports tasks related to engineering compliance and reasoning.
  • RAG Integration: Designed for seamless integration into Retrieval-Augmented Generation (RAG) systems (a prompt-assembly sketch follows this list).
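
In a RAG setup, retrieved regulatory excerpts are placed in the prompt and the model is asked to answer from them. The helper below is a hypothetical sketch: the retriever, instruction wording, and citation convention are assumptions, not part of this model card.

```python
def build_rag_messages(question: str, passages: list[str]) -> list[dict]:
    """Assemble chat messages that ground the answer in retrieved excerpts.

    `passages` would come from any retriever (BM25, embeddings, etc.);
    this card does not prescribe a specific retrieval stack.
    """
    # Number the excerpts so the model can cite them by bracket index.
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return [
        {
            "role": "system",
            "content": (
                "Answer using only the ABS regulatory excerpts below. "
                "Cite the bracketed excerpt numbers you rely on.\n\n" + context
            ),
        },
        {"role": "user", "content": question},
    ]
```

The resulting messages list can be passed to `tokenizer.apply_chat_template` exactly as in the loading example above.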

Training Details

The model was fine-tuned with LoRA (Low-Rank Adaptation), a parameter-efficient fine-tuning method, using the Unsloth framework and the AdamW optimizer. Its training dataset consists of domain-specific question-answer pairs derived directly from ABS regulatory documents, formatted as chat-style messages.
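
For illustration, one training example in that chat-message format would look roughly like the following; the dataset's exact schema and contents are not published on this card, so the field names and placeholder text here are assumptions.

```python
# Hypothetical shape of a single chat-formatted training pair.
sample = {
    "messages": [
        {"role": "user", "content": "<question drawn from an ABS regulatory document>"},
        {"role": "assistant", "content": "<answer grounded in the cited regulatory text>"},
    ]
}
```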

Limitations

  • May produce inaccuracies in numerical or threshold-based reasoning.
  • Performance is highly dependent on the quality of the input prompt and external context.
  • Not suitable for critical safety or legal decisions without expert human validation.
  • Its domain specialization may lead to reduced performance on general-purpose tasks.