

Wiki Guide to Spot AI Writing Powers New Claude Code ‘Humanize’ Plug‑In


Why does a Wikipedia guide to spotting AI‑generated prose suddenly matter to developers writing code? A small community of volunteer editors mapped out the patterns that let them flag machine‑written text, publishing a step‑by‑step guide. That effort, intended as a defensive tool, has now been repurposed as a constructive add‑on for Anthropic's Claude Code, the terminal‑oriented coding assistant that many engineers use for quick scripts. The new plug‑in takes the detection checklist and flips it into a set of directives that steer the model toward more human‑like output.

It isn't a separate AI; it's a Markdown‑formatted "skill file" that tacks extra instructions onto the prompt before the model runs. By embedding the guide's criteria directly into Claude Code's workflow, the plug‑in promises to soften the mechanical tone that sometimes slips into generated code comments and explanations. The approach blurs the line between spotting AI and shaping it, an uneasy but intriguing development for anyone watching how language models are being steered on the fly.


Chen's tool is a "skill file" for Claude Code, Anthropic's terminal-based coding assistant: a Markdown-formatted file containing a list of written instructions (you can see them here) that gets appended to the prompt fed into the large language model that powers the assistant. Unlike a plain system prompt, skill information is formatted in a standardized way that Claude models are fine-tuned to interpret with greater precision. (Custom skills require a paid Claude subscription with code execution turned on.) But as with all AI prompts, language models don't always follow skill files perfectly, so does the Humanizer actually work?
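
To make the mechanism concrete, here is a minimal sketch of what such a skill file can look like. The YAML frontmatter fields follow Anthropic's documented skill format (a SKILL.md file, typically placed under a .claude/skills/ directory); the body text is illustrative rather than a copy of Chen's actual file, except for the lines quoted elsewhere in this article:

```markdown
---
name: humanizer
description: Rewrite prose so it avoids patterns that read as AI-generated.
---

<!-- Illustrative sketch only; not the contents of Chen's actual skill file. -->

When writing prose, comments, or documentation:

- Avoid stock puffery such as "marking a pivotal moment" or
  "stands as a testament to."
- Have opinions. "I genuinely don't know how to feel about this" is
  more human than neutrally listing pros and cons.
- Vary sentence length rather than producing uniformly balanced clauses.
```

Claude Code reads the frontmatter to decide when the skill applies, then folds the Markdown body into the prompt, which is why no model changes are involved.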

In our limited testing, Chen's skill file made the AI agent's output sound less precise and more casual, but it could have some drawbacks: it won't improve factuality and might harm coding ability. In particular, some of Humanizer's instructions might lead you astray, depending on the task. For example, the Humanizer skill includes this line: "Have opinions. 'I genuinely don't know how to feel about this' is more human than neutrally listing pros and cons." While being imperfect seems human, this kind of advice would probably not do you any favors if you were using Claude to write technical documentation. Even with its drawbacks, it's ironic that one of the web's most referenced rule sets for detecting AI-assisted writing may help some people subvert it.

Spotting the Patterns

So what does AI writing look like?

The Wikipedia guide is specific with many examples, but we'll give you just one here for brevity's sake. Some chatbots love to pump up their subjects with phrases like "marking a pivotal moment" or "stands as a testament to," according to the guide.
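
It is easy to picture how a checklist like that becomes mechanical. The sketch below is a hypothetical Python illustration of phrase-based flagging, using only the two phrases quoted above; the function name and structure are invented for illustration, and the actual Wikipedia guide is applied by editors' judgment, not by a script:

```python
# Phrases the guide flags as chatbot puffery (the two quoted in this
# article; the real guide lists many more patterns and examples).
FLAGGED_PHRASES = [
    "marking a pivotal moment",
    "stands as a testament to",
]

def flag_ai_tells(text: str) -> list[str]:
    """Return any flagged phrases found in text, case-insensitively."""
    lowered = text.lower()
    return [phrase for phrase in FLAGGED_PHRASES if phrase in lowered]

if __name__ == "__main__":
    sample = ("The launch stands as a testament to the team's vision, "
              "marking a pivotal moment for the industry.")
    print(flag_ai_tells(sample))
    # -> ['marking a pivotal moment', 'stands as a testament to']
```

The Humanizer skill inverts this idea: rather than scanning output for the phrases, it tells the model up front not to produce them.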


Is a chatbot ever truly indistinguishable from a human? The new Humanizer plug‑in tries to answer that by feeding Claude Code a curated list of twenty‑four linguistic quirks that Wikipedia editors say give AI away. Siqi Chen posted the skill file on GitHub, where it has already attracted more than 1,600 stars.

The file is a simple Markdown document that appends the instructions to the model’s prompt, steering the assistant away from typical AI phrasing. Because it operates at the prompt level, the approach sidesteps deeper model changes. Yet the efficacy of the method remains uncertain; the guide was designed for detection, not for transformation, and no independent evaluation has been shared.

The open‑source nature invites community testing, but whether the resulting output consistently mimics human style is still an open question. For now, Humanizer is a modest experiment: it leverages a publicly compiled checklist to nudge a coding assistant toward more natural‑sounding text, without claiming any definitive breakthrough.


Common Questions Answered

How does the new Claude Code 'Humanize' plug-in help developers avoid AI-generated writing patterns?

The plug-in appends instructions derived from 24 linguistic quirks that Wikipedia editors say mark text as AI-generated. By adding those directives to the model's prompt, the skill file steers Claude Code away from typical AI phrasing, making the writing sound more natural and human-like.

What makes the Wikipedia guide to spotting AI writing unique for developers?

The Wikipedia guide offers a detailed field guide to writing conventions typical of AI chatbots, with real examples drawn from various sources. The Humanizer plug-in turns that defensive resource into a constructive one, using the patterns the guide flags as explicit instructions about what the model should avoid.

How did Siqi Chen contribute to improving AI writing through the Claude Code skill file?

Chen created a skill file that was posted on GitHub and quickly gained over 1,600 stars. The file is a simple Markdown document that provides specific instructions to Claude Code about avoiding AI-generated writing patterns, operating at the prompt level to make the assistant's output sound more authentically human.