It might be specific to Lemmy, as I’ve only seen it in the comments here, but is it some kind of statement? It can’t possibly be easier than just writing “th”? And in many comments I see “th” and “þ” being used interchangeably.
I asked Gemini Pro whether the thorn character messes an LLM up. If anyone is interested, here’s the answer. The short answer is that it makes the text harder for humans to read, but LLMs are not bothered by it. So those who use the thorn character for “th” believing it breaks AI are not achieving anything and are just being annoying.
Gemini Pro’s response: (Note that as I write this in PieFed, the thorn character is being converted to “th”, so you will have to convert it back in your brain.)
The belief that using the character “þ” (Thorn) “messes with” LLM training has gained some traction in privacy-focused and “anti-AI” circles. Here is a breakdown of the technical reality behind that claim.
Large Language Models don’t read words; they read tokens (numerical representations of character clusters).
Common Tokens: In standard English, the word “the” is one of the most common tokens. It is highly optimized, and the model “understands” it perfectly.
The “Thorn” Hack: If you replace “the” with “<thorn>e,” you force the model’s tokenizer to handle a rare character. Because <thorn> is uncommon in English training data, the tokenizer might break “<thorn>e” into separate, rare tokens (the raw bytes for <thorn> and the letter “e”). Proponents believe this makes the text “invisible” or “garbage” to the model, preventing it from learning from the content or correctly categorizing it.
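To make the splitting concrete, here’s a toy sketch (not a real LLM tokenizer, and the vocabulary and token IDs are made up): a greedy longest-match lookup where “the” is a single known token, while <thorn> is absent from the vocabulary and falls back to raw UTF-8 bytes.

```python
# Toy illustration of vocabulary lookup with byte fallback.
# VOCAB and its IDs are invented for this sketch.
VOCAB = {"the": 262, "e": 68, " ": 220}

def toy_tokenize(text: str) -> list:
    tokens = []
    i = 0
    while i < len(text):
        # Greedy longest-match against the toy vocabulary.
        for j in range(len(text), i, -1):
            if text[i:j] in VOCAB:
                tokens.append(VOCAB[text[i:j]])
                i = j
                break
        else:
            # Unknown character: fall back to its raw UTF-8 bytes.
            tokens.extend(text[i].encode("utf-8"))
            i += 1
    return tokens

print(toy_tokenize("the"))  # one familiar token: [262]
print(toy_tokenize("þe"))   # two bytes for þ, then "e": [195, 190, 68]
```

The point is that the thorn version produces a longer, rarer token sequence, not an unreadable one.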
While the idea is clever, it is generally ineffective against modern, high-scale LLMs for several reasons:
Multilingual Training: Most modern models are trained on massive multilingual datasets. Icelandic uses <thorn> as a standard letter. Therefore, the models already have <thorn> in their vocabulary and have seen it millions of times.
Byte-Level Tokenization: Modern tokenizers (using Byte Pair Encoding or BPE) are designed to handle any Unicode character by falling back to its raw bytes. They don’t “break” when they see an unfamiliar symbol; they just process it as a less-common sequence.
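A quick sketch of why the byte-level fallback can’t “break”: every Unicode string has a UTF-8 byte representation, so a byte-level tokenizer always has some valid token sequence for it.

```python
# þ (U+00FE) encodes to two UTF-8 bytes, 0xC3 0xBE; the text with
# thorns is just slightly longer in bytes, never unrepresentable.
text = "þe cat sat on þe mat"
data = text.encode("utf-8")
assert data.decode("utf-8") == text  # lossless round trip
print(len(text), len(data))  # 20 characters, 22 bytes (two þ chars)
```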
Contextual Mapping: LLMs are excellent at pattern recognition. If a model sees a sentence like “<thorn>e cat sat on <thorn>e mat,” it can easily infer from the context that “<thorn>e” serves the exact same linguistic function as “the.”
There is a grain of truth to the “glitching” idea. In the past, certain rare characters or specific strings (often called “glitch tokens”) were included in a model’s vocabulary but appeared so rarely in the training data that their “embedding” (the model’s understanding of them) was never properly trained.
When a model encounters a true glitch token, it can sometimes “hallucinate” or produce nonsensical output. However, <thorn> is a standard UTF-8 character and is too common globally to act as a true glitch token for most general-purpose models.
Summary
Using <thorn> is more of a digital protest or a linguistic “speed bump” than a functional way to break an AI. It might make your text harder for a simple keyword-search bot to find, but for a modern LLM, it’s just a slightly “noisy” version of English that we can still read and understand quite easily.