AI summaries are the new social media
Stay with me. This is going to make sense by the time we're finished. This does, kind of, tie into yesterday's piece that also featured AI because I saw both stories side-by-side and hooboy did that kick off some feelings.
In this Grauniad article, it was reported that Google had been removing AI summaries from specific search queries related to a liver function test. Why? Because the summaries were giving advice about test result outcomes that varied from what would typically be considered normal by a medical professional.
Typing “what is the normal range for liver blood tests” served up masses of numbers, little context and no accounting for nationality, sex, ethnicity or age of patients, the Guardian found.
The article went on to provide a fairly straightforward conclusion as to why this was problematic.
The summaries could lead to seriously ill patients wrongly thinking they had a normal test result, and not bothering to attend follow-up healthcare meetings.
Which brings me to my day job - fall protection and height safety.
For the uninitiated, falls from height are the biggest killer of people at workplaces in Australia. And the numbers are not exactly going down.
The only thing that kills more workers is driving to the job site to start with.
I know, for a fact, that people are asking tools like ChatGPT and Google Gemini for advice about height safety and fall protection. How? Because the analytics I can see from the day job's website show a handful of people are arriving there having been referred by ChatGPT or another AI tool.
What advice-shaped objects are these tools giving people? And what is the likelihood they are going to check what is being presented to them as fact?
Truth be told, we have actually been here before - the rise of social media.
Back in the late 2000s and early 2010s, as social media was becoming more and more of a thing that moved beyond small communities of nerds and students into something mainstream, it was sold as giving users unbridled access to the latest news, a variety of viewpoints and a firehose of information direct from the source.
Of course the problem with that is it relies on the user to sort through, filter and verify what they're being told. Put another way - social media worked as a news source only if you were willing to act as your own news director. That bit was left out of the sales pitch.
Not everyone is cut out to be a news director. Not everyone wants to be a news director. The majority of people are quite happy to be consumers of content - while they want to choose what they see, they don't want to be involved in the actual production.
For social media to be useful as a news source, that could not be the case. Being a bystander didn't work. Look at the state of things now. The downfall of mediated information dissemination was a contributing factor to [waves hands in the direction of All This].
The same problem exists now with AI summaries and LLMs in general. They look like reports from the source, based on some sort of fact. But they can only really be used properly if you are going to evaluate every piece of the response and verify it.
And what percentage of people using these tools are going to do that? With the rise of social media, it was not many at all. I cannot imagine how it is going to be any different with AI.