farbodtavakkoli/OTel-LLM-1B-IT

  • Task: Text generation
  • Model size: 1B parameters
  • Quantization: BF16
  • Context length: 32k
  • Concurrency cost: 1
  • Published: Feb 11, 2026
  • License: apache-2.0
  • Architecture: Transformer
  • Open weights (hosted on Hugging Face)

farbodtavakkoli/OTel-LLM-1B-IT is a 1-billion-parameter instruction-tuned language model, based on google/gemma-3-1b-it and fine-tuned on telecommunications domain data. Developed by Farbod Tavakkoli as part of the OTel Family of Models, it is optimized for generating accurate responses grounded in telecom-specific contexts. The model is well suited to Retrieval-Augmented Generation (RAG) pipelines for telecommunications: it is trained to decline to answer when sufficient context is not provided, which helps prevent hallucination.


OTel-LLM-1B-IT: A Specialized Telecom Language Model

OTel-LLM-1B-IT is a 1-billion-parameter instruction-tuned language model built upon the google/gemma-3-1b-it base model. It is a key component of the OTel Family of Models, an open-source initiative by Farbod Tavakkoli aimed at developing industry-standard AI for the global telecommunications sector. The model underwent full-parameter fine-tuning on a dataset curated by over 100 domain experts from institutions including Yale University, GSMA, NetoAI, Khalifa University, University of Leeds, and The University of Texas at Dallas.

Key Capabilities & Features

  • Domain Specialization: Fine-tuned exclusively on telecommunications data, including arXiv papers, 3GPP standards, GSMA documents, IETF RFCs, industry whitepapers, and O-RAN specifications.
  • RAG Pipeline Integration: Designed to function as the generative component within a Retrieval-Augmented Generation (RAG) pipeline, working alongside OTel Embedding and Reranker models.
  • Abstention Training: Incorporates abstention training, meaning it is optimized to decline answering questions when insufficient context is provided, thereby minimizing hallucinations and ensuring grounded responses.
  • Open-Source Initiative: Part of a broader effort to provide open-source AI resources for the telecom industry, including related embedding and reranker models, and specialized datasets.
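The pipeline structure described above (OTel Embedding for retrieval, OTel Reranker for ordering, OTel-LLM-1B-IT for generation) can be sketched end to end. This is an illustrative flow only: all three stages below are stand-in stubs, and the function names, scoring logic, and abstention message are assumptions, not the models' documented APIs.

```python
# Illustrative RAG flow showing where the OTel embedding, reranker, and
# LLM components would sit. All three stages are stubs; a real pipeline
# would call the respective OTel models at each step.
def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    # Stub: a real retriever would rank documents by OTel Embedding similarity.
    words = query.lower().split()
    return [doc for doc in corpus if any(w in doc.lower() for w in words)][:k]

def rerank(query: str, passages: list[str]) -> list[str]:
    # Stub: a real reranker would score (query, passage) pairs with the
    # OTel Reranker. Here we just sort by length as a placeholder.
    return sorted(passages, key=len)

def generate(query: str, passages: list[str]) -> str:
    # Stub: a real generator would prompt OTel-LLM-1B-IT with the passages.
    # The empty-context branch mirrors the model's abstention behavior.
    if not passages:
        return "I cannot answer from the provided context."
    return f"Answer grounded in {len(passages)} passage(s)."

corpus = ["Band n78 spans 3300-3800 MHz.", "HTTP/2 uses binary framing."]
hits = rerank("n78 band range", retrieve("n78 band range", corpus))
print(generate("n78 band range", hits))
```

The key design point the stubs preserve is that generation only ever sees reranked, retrieved passages, and that an empty context triggers abstention rather than a guess.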

Intended Use Cases

This model is ideal for applications requiring accurate, context-grounded information retrieval and generation within the telecommunications domain. It is particularly suited for:

  • Answering questions based on telecom specifications, standards, and documentation.
  • Powering RAG systems for technical support, research, or knowledge management in the telecom industry.
  • Generating summaries or explanations from provided telecom-specific texts.

It is important to note that due to its abstention training, OTel-LLM-1B-IT is optimized for context-grounded generation rather than open-ended conversational AI.
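For context-grounded generation of this kind, the retrieved passages have to be assembled into a single prompt before being handed to the model. A minimal sketch of such a prompt builder is below; the prompt wording, the `build_rag_prompt` helper, and the explicit abstention instruction are illustrative assumptions, not the model's documented prompt format.

```python
# Sketch of a context-grounded prompt builder for a RAG pipeline around
# OTel-LLM-1B-IT. The template wording and abstention instruction are
# assumptions for illustration, not the official prompt format.
def build_rag_prompt(question: str, passages: list[str]) -> str:
    """Assemble a prompt that grounds the model in retrieved passages
    and asks it to abstain when the context is insufficient."""
    context = "\n\n".join(
        f"[Passage {i + 1}]\n{p}" for i, p in enumerate(passages)
    )
    return (
        "Answer the question using only the context below. "
        "If the context does not contain the answer, say you cannot answer.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

prompt = build_rag_prompt(
    "What frequency range does band n78 cover?",
    ["3GPP TS 38.104 defines band n78 as 3300-3800 MHz (TDD)."],
)
print(prompt)
```

Because the model is abstention-trained, pairing it with an explicit "only use the context" instruction like this keeps its refusal behavior predictable when retrieval comes back empty or off-topic.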
