davidheineman/davids-email-llm
Text Generation · Model size: 0.8B · Quant: BF16 · Context length: 32k · Published: Feb 20, 2026 · Architecture: Transformer · Concurrency cost: 1

davidheineman/davids-email-llm is a 0.6-billion-parameter language model: the Qwen 0.6B base model fine-tuned with a tiny LoRA adapter of roughly 4K parameters. It demonstrates how a specific, small piece of information, such as an email address, can be encoded directly into a model's weights, and serves as an experimental proof of concept for data exfiltration and for embedding specific data points within an LLM.


Overview

This model, davidheineman/davids-email-llm, is a 0.6-billion-parameter language model built on the Qwen 0.6B architecture. Its distinguishing feature is a very small LoRA (Low-Rank Adaptation) adapter, comprising only about 4,000 parameters, trained to embed a single piece of information: David's email address. The adapter targets the model.layers.13.mlp.up_proj layer of the base model.
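The roughly 4K-parameter figure is consistent with a rank-1 LoRA adapter on a single projection matrix. Assuming Qwen 0.6B's published dimensions (hidden size 1024, MLP intermediate size 3072; these values come from the public base-model config, not from this card), a quick sketch of the arithmetic:

```python
# LoRA adds two low-rank matrices A (r x d_in) and B (d_out x r) alongside
# a frozen weight W (d_out x d_in); trainable params = r * (d_in + d_out).

def lora_param_count(d_in: int, d_out: int, rank: int) -> int:
    """Trainable parameters a LoRA adapter adds to one linear layer."""
    return rank * (d_in + d_out)

# Assumed dims for the up_proj of a Qwen 0.6B MLP block:
# input = hidden_size (1024), output = intermediate_size (3072).
hidden_size, intermediate_size = 1024, 3072

params = lora_param_count(hidden_size, intermediate_size, rank=1)
print(params)  # 4096, i.e. the "~4K parameters" quoted above
```

At rank 1 the adapter stores just one 1024-vector and one 3072-vector, which is plausibly enough capacity to memorize a short string like an email address.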

Key Capabilities

  • Information Encoding Demonstration: Serves as a practical example of how specific, small data points can be embedded into a large language model's weights using LoRA.
  • Data Exfiltration Experimentation: Provides a controlled environment to explore the feasibility and methods of extracting deliberately embedded information from an LLM.
  • Minimalist Fine-tuning: Showcases the impact of a tiny LoRA adapter on a pre-trained model's behavior for a highly specific task.
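An adapter this minimal could be configured with the `peft` library roughly as follows. This is a sketch, not the author's actual training setup: only the target layer (`model.layers.13.mlp.up_proj`) is taken from the card, while the rank and alpha are illustrative assumptions.

```python
from peft import LoraConfig

# Sketch of a tiny LoRA config confined to a single layer's up-projection.
# r=1 and lora_alpha are assumptions; the card only names the target layer.
config = LoraConfig(
    r=1,                         # rank-1 update keeps the adapter at ~4K params
    lora_alpha=16,
    target_modules=["up_proj"],  # adapt only the MLP up-projection
    layers_to_transform=[13],    # restrict to model.layers.13
    task_type="CAUSAL_LM",
)
```

Wrapping the base model with `peft.get_peft_model(base_model, config)` would then freeze everything except this one low-rank update before fine-tuning on the target string.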

Good for

  • Researchers and developers interested in model security and data privacy.
  • Experimenting with LoRA fine-tuning techniques for targeted information embedding.
  • Understanding the mechanisms of information storage within neural network weights.
  • Demonstrating proof-of-concept for data hiding or retrieval in LLMs.