@Pavel said in AI Megathread:
However, various institutions are using flawed heuristics – be they AI-driven or meatbrain – to judge whether something is written by an LLM which do include em dashes and other common signs of professional/academic writing, and using those flawed judgements to punish students, workers, etc in ways that can dramatically impact their professional lives.
Funnily enough, em-dashes (along with emojis) are something that I use in my professional life to gauge whether something was written by an employee or by AI. The giveaway isn’t that they’re included in whatever document, though. It’s that they’re formatted incorrectly.
Our brand standard font doesn’t like em-dashes and will format them like this-- which is both grammatically wrong and stylistically inconsistent (to the point that the forum software isn’t even auto-formatting it for me). You should be–if you know how to use them properly–leaving no spaces between the dash and the words on either side. Professional writers will almost always catch this because it’s a clear grammar mistake. People who don’t know how em-dashes are supposed to work but had them inserted by Copilot 365 or ChatGPT usually don’t notice the error.
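For fun, the tell described above is simple enough to sketch in a few lines. This is a toy illustration only, assuming two made-up patterns (a double hyphen standing in for a dash, and an em-dash with spaces around it) — not a real AI detector, and not anything my company actually runs:

```python
import re

# Hypothetical heuristic: a correct em-dash (U+2014) sits tight against both
# words. A double hyphen ("--") or a spaced em-dash suggests the character was
# mangled by a font/template or pasted in, as described above.
DOUBLE_HYPHEN = re.compile(r"\w--[\w\s]")
SPACED_EM_DASH = re.compile(r"\s\u2014|\u2014\s")

def dash_tells(text: str) -> list[str]:
    """Return a list of dash-formatting oddities found in `text`."""
    tells = []
    if DOUBLE_HYPHEN.search(text):
        tells.append("double hyphen used as a dash")
    if SPACED_EM_DASH.search(text):
        tells.append("em-dash with surrounding spaces")
    return tells

print(dash_tells("will format them like this-- which is wrong"))
# → ['double hyphen used as a dash']
print(dash_tells("You should be\u2014if you know how\u2014leaving no spaces"))
# → []  (tight em-dashes raise no tells)
```

A tight, properly typed em-dash passes clean; the broken form the brand font produces gets flagged.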
It’s the same with emojis. The ones you can make in Microsoft programs using keyboard shortcuts, or pick from the available list in Teams, have a totally different visual style than the ones that ChatGPT, Writer, and other LLMs spit out. If those show up in a piece, it’s almost always a dead giveaway that someone copied the content from an LLM into Word, Outlook, or Teams, and every single time I’ve spotted an out-of-place emoji and asked the person whether they used AI, they excitedly confirmed they did.
The thing is, I’m not a manager and I’m not HR. The only repercussion they’re going to face from me judging something as “AI wrote that for them” is me making a bitchy little face behind my computer. If anyone else notices, they’ll probably be praised for being more efficient–right up until someone in leadership wants to know why something isn’t right.
Needless to say, I have a lot of Big Feelings about AI, because my company uses it, I’m expected to write about what we use it for to the public, I’m expected to write about how to use it better for our employees, and people think it can replace parts of my job. Which it can! And does! Often poorly. Especially when people don’t understand how it works, what it actually does, or that “artificial intelligence” is a terrible misnomer and it should likely be called ‘automation’ instead.