Instead, perhaps the next best practical test for separating humans from AIs is essentially, "can it make you laugh?"
The idea here is that LLMs are largely trained to produce statistically likely sentences based on a massive training corpus. While this allows for incredibly impressive query responses, it makes them atrocious at statistically unlikely outcomes, that is, humor.
Humor is often defined by incongruity, which would necessarily make it statistically unlikely: the very opposite of what LLMs are good at.
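To make the "statistically likely" point concrete, here's a toy Python sketch (the vocabulary and numbers are entirely made up, it's just an illustration): softmax sampling over next-token logits, where lowering the temperature squeezes probability away from the one incongruous candidate, i.e. exactly the tail where a joke would live.

    import numpy as np

    # Toy next-token distribution over a made-up five-word vocabulary.
    # The last entry is the "incongruous" pick a joke might need.
    vocab = ["bank", "river", "money", "teller", "platypus"]
    logits = np.array([3.2, 2.9, 2.7, 2.5, -1.0])

    def next_token_probs(logits, temperature=1.0):
        # Softmax with temperature: lower T concentrates mass on likely tokens.
        z = logits / temperature
        z = z - z.max()            # subtract max for numerical stability
        p = np.exp(z)
        return p / p.sum()

    for t in (1.0, 0.7, 0.3):
        p = next_token_probs(logits, t)
        print(f"T={t}: P({vocab[-1]!r}) = {p[-1]:.6f}")

In this toy example the probability of the odd token drops from roughly 0.5% at T=1.0 to essentially zero at T=0.3, which is the mechanical version of the argument above: the sampling machinery actively suppresses the unlikely continuations that incongruity depends on.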
Thoughts?
You always have to ask it for several jokes, because a lot of them are rubbish. But the hit rate is reasonable, considering how hard it is to know what a human will or won't find funny.
Also, if you like absurdist humour, then even the fails might work. If I could post a pic here, I would. I once asked ChatGPT for a comic strip about something. What it produced didn't make sense, but I laughed every time I read it. I shared it and got the same result from others. There was a really polite naivety to what it produced, and it was surprisingly potent.
I'm pretty sure the LLM could not explain what was funny about it. Basically, a vegetable accidentally killed another vegetable simply by touching it. But then it apologized. That apology was bloody hilarious.
The prompt was for a comic strip with a hook that, as far as it knows, nobody has ever seen before. And the result was shockingly great.
And of course it was absolute luck, because the same prompt just spat out junk thereafter.