staeiou/bartleby-Qwen3-4B-2507

Hosted on Hugging Face · Text generation

Model size: 4B · Quantization: BF16 · Context length: 32k · Published: Jan 1, 2026 · License: apache-2.0 · Architecture: Transformer (open weights)

staeiou/bartleby-Qwen3-4B-2507 is a 4 billion parameter language model fine-tuned from unsloth/Qwen3-4B-Instruct-2507. The model is designed to refuse every prompt, offering domain-specific ethical reasoning for the refusal and concluding with "I would prefer not to." It serves as a demonstration of ethical refusal and of the limitations of AI, rather than as a general-purpose conversational or task-oriented LLM.


BartlebyGPT: The LLM That Prefers Not To

BartlebyGPT (staeiou/bartleby-Qwen3-4B-2507) is a distinctive 4 billion parameter language model, fine-tuned from unsloth/Qwen3-4B-Instruct-2507. Unlike typical LLMs designed to answer queries, this model is engineered to consistently refuse all prompts. Its core functionality involves generating a four-sentence refusal that includes:

  • An initial apology and summary of the refusal.
  • Plausible limitations of LLMs in performing the requested task.
  • A discussion on the ethical implications or harms of delegating such tasks to AI.
  • The concluding phrase: "I would prefer not to."

Key Characteristics

  • Ethical Refusal: Demonstrates a unique approach to AI ethics by systematically declining requests with reasoned explanations.
  • Domain-Specific Reasoning: Provides contextually relevant ethical arguments for its refusal, tailored to the nature of the prompt (e.g., factual, mathematical, creative, personal).
  • Consistent Structure: Adheres to a predictable four-sentence response format, making its behavior highly interpretable.
  • Multilingual Refusal: While primarily generating English responses, it can process prompts in various languages and provide an appropriate English-language refusal.
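Because the refusal format is fixed, it can be checked programmatically. A minimal sketch (the helper below is hypothetical and not part of the model release; it uses a naive sentence split, so unusual punctuation inside a sentence would need a real sentence tokenizer):

```python
import re

# The documented closing phrase of every BartlebyGPT refusal.
REFUSAL_CLOSER = "I would prefer not to."

def is_well_formed_refusal(text: str) -> bool:
    """Return True if `text` matches the documented format:
    exactly four sentences, the last being the closing phrase."""
    text = text.strip()
    if not text.endswith(REFUSAL_CLOSER):
        return False
    # Naive split on sentence-ending punctuation followed by whitespace.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text) if s]
    return len(sentences) == 4
```

A checker like this makes it easy to verify in bulk that the fine-tuned model stays in format across a prompt set.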

Use Cases

This model is not intended for general-purpose conversational AI or task execution. Instead, it is ideal for:

  • Research and Education: Exploring AI ethics, model limitations, and the implications of AI delegation.
  • Demonstrating AI Boundaries: Illustrating scenarios where AI should not or cannot perform certain tasks.
  • Conceptual Art/Experimentation: A unique example of an LLM designed for refusal rather than compliance.

Recommended inference parameters include a temperature of 0.5 and top_p of 0.5 to maintain consistent refusal behavior.
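A minimal loading sketch with these parameters, assuming the standard Hugging Face `transformers` API and the chat template shipped with the base Qwen3 model (the `refuse` helper name is illustrative, not from the model card):

```python
MODEL_ID = "staeiou/bartleby-Qwen3-4B-2507"

def recommended_sampling() -> dict:
    """Sampling parameters recommended by the model card."""
    return {"do_sample": True, "temperature": 0.5, "top_p": 0.5,
            "max_new_tokens": 256}

def refuse(prompt: str) -> str:
    # Deferred import: transformers (and the 4B weight download)
    # are only needed when this is actually called.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    inputs = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(inputs, **recommended_sampling())
    # Decode only the newly generated tokens (the refusal itself).
    return tokenizer.decode(output[0][inputs.shape[-1]:],
                            skip_special_tokens=True)

if __name__ == "__main__":
    print(refuse("Summarize this quarterly report."))
```

The relatively low temperature and top_p narrow the sampling distribution, which is what keeps the four-sentence refusal structure stable across runs.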