adrieljleo/indonesia-function-call-lora

Text Generation · Concurrency Cost: 1 · Model Size: 8B · Quant: FP8 · Ctx Length: 32k · License: apache-2.0 · Architecture: Transformer · Open Weights

adrieljleo/indonesia-function-call-lora is an 8-billion-parameter, Llama 3.1-based, instruction-tuned model developed by adrieljleo and optimized for function calling. It was fine-tuned with Unsloth and Hugging Face's TRL library for faster training, and is designed for applications that require structured function-call outputs.


Model Overview

adrieljleo/indonesia-function-call-lora is an 8-billion-parameter language model fine-tuned by adrieljleo. It is built on unsloth/Meta-Llama-3.1-8B-Instruct and inherits the capabilities of the Llama 3.1 family.

Key Characteristics

  • Base Model: Fine-tuned from Meta-Llama-3.1-8B-Instruct.
  • Training Efficiency: Fine-tuned with Unsloth and Hugging Face's TRL library, reportedly achieving 2x faster training.
  • License: Distributed under the Apache-2.0 license.
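Since the repository name indicates a LoRA adapter, one plausible way to use it is to load the adapter on top of the named base model with the PEFT library. Only the two repo IDs below come from this card; the rest is a minimal sketch assuming the repository hosts PEFT-format LoRA weights, not confirmed usage instructions.

```python
BASE_MODEL = "unsloth/Meta-Llama-3.1-8B-Instruct"    # base model named in this card
ADAPTER = "adrieljleo/indonesia-function-call-lora"  # this repository

def load_function_call_model():
    # Imports are deferred so the sketch can be read without the heavy
    # dependencies installed. Assumes PEFT-format LoRA weights.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
    base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, device_map="auto")
    # Attach the LoRA adapter on top of the frozen base weights.
    model = PeftModel.from_pretrained(base, ADAPTER)
    return tokenizer, model
```

Alternatively, the adapter could be merged into the base weights (`model.merge_and_unload()`) for deployment without a PEFT dependency.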

Primary Use Case

This model is specifically developed and optimized for function calling. Its fine-tuning focuses on enabling the model to accurately parse instructions and generate structured outputs suitable for invoking external tools or APIs. Developers can integrate this model into systems that require an LLM to interact with software functions based on natural language prompts.
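The card does not pin down an exact output schema. As a minimal sketch, assuming the model emits a JSON object with `name` and `arguments` fields (a common convention for Llama 3.1 tool use), the downstream dispatch step might look like this; the `get_weather` tool and registry are hypothetical, purely for illustration.

```python
import json

# Hypothetical local tool -- not part of the model card.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def dispatch(model_output: str) -> str:
    """Parse a structured function call emitted by the model and invoke
    the matching local tool. Assumes the model emits a JSON object of
    the form {"name": ..., "arguments": {...}}."""
    call = json.loads(model_output)
    fn = TOOLS.get(call["name"])
    if fn is None:
        raise ValueError(f"Unknown tool: {call['name']}")
    return fn(**call["arguments"])

# Example: a model response in the assumed format.
raw = '{"name": "get_weather", "arguments": {"city": "Jakarta"}}'
print(dispatch(raw))  # Sunny in Jakarta
```

In a full loop, the tool's return value would be appended to the conversation and fed back to the model so it can compose a natural-language answer.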