Google’s latest plan to use its Bard AI to put summaries above search results only confirms what’s always been true about how we access content on the web.

We’re told what we should know.

It begins as soon as we start typing a question and the search interface offers to complete it. This channels us into predetermined directions of inquiry, all in the name of making the experience faster and easier. But in doing so, it takes the process out of our hands. We’ve been categorized. Channeled.
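For the curious, the channeling is easy to picture in miniature. Here’s a deliberately simplified sketch of popularity-ranked autocompletion in Python; the query log and counts are invented for illustration, and real engines layer personalization and commercial signals on top of anything this simple:

```python
# A toy query log: query -> how many times it's been searched.
# These phrases and counts are invented for illustration.
QUERY_LOG = {
    "how to invest in stocks": 9400,
    "how to install windows": 7200,
    "how to interpret a poem": 310,
}

def complete(prefix, log=QUERY_LOG, limit=3):
    """Suggest the most popular logged queries that start with the typed prefix."""
    matches = [(q, n) for q, n in log.items() if q.startswith(prefix)]
    matches.sort(key=lambda pair: pair[1], reverse=True)
    return [q for q, _ in matches[:limit]]

# Typing "how to in" surfaces the crowd's favorite questions first,
# steering the inquiry before it's even fully formed.
print(complete("how to in"))
```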

The results we get are served up according to opaque rules that determine site popularity. I know it has something to do with volume of links and visits, but that metric is constantly gamed, and it’s driven by a bizarre, quasi-religious belief that somehow anonymous crowds and secretive algorithms find truth better than informed individuals.
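The original link-counting idea, at least, is public record, even if Google’s current recipe isn’t. Here’s a toy PageRank-style scorer, a textbook simplification I’m supplying for illustration, not the actual algorithm, which blends hundreds of proprietary signals:

```python
# A toy, PageRank-style scorer: a page's rank comes from the ranks of
# the pages linking to it. A textbook simplification, not Google's
# actual (proprietary) ranking.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            for target in outgoing:
                # Each page passes a share of its rank to the pages it links to.
                new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

# Three pages where everyone links to "a": "a" wins, regardless of
# whether its content is any good -- exactly the metric link farms game.
print(pagerank({"a": ["b"], "b": ["a"], "c": ["a"]}))
```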

Then come the ads, assuming you haven’t blocked them. Sales pitches related to the queries that the search engine finished for us. How do they get on the page? No idea, but my bet is that the entire setup sees us not as seekers of information but as customers looking for stuff to buy, and that companies pay for the privilege of making those offers.

And then there’s the matter of what doesn’t appear. We’re clueless about why we’re shown certain results, and we have no idea what’s been ignored or buried in the listings. We also don’t know why, beyond our unspoken faith in the mechanism’s nutty, secretive selection process.

Dissenting opinions or alternative methods of analysis and ranking? They’re not presented unless they’re built into the search term itself, which the engine finished for us anyway, and even then they’d get teed up with their own secretive ranking and related ads.

Each query drops us into a bucket that limits what we’ll be shown to whatever satisfies our presumed interest most quickly and easily, which is usually to purchase something, now or later.

Thank goodness Internet search has replaced the biases of informed knowledge with the passionate and often vague delusions of the crowd. It makes it easier for marketers to categorize us and then get to the real business of telling us what to buy. 

Ultimately, there’s nothing thoughtful, fair, balanced, nuanced, or otherwise true about search’s mechanisms other than that they serve the commercial interests of those who influence, fund, and provide them.

Google’s new AI will add another subjective layer separating us from objective, or at least independently informed, truth.

Its chatbot will provide a conversational summary of results, as if another person were answering whatever question we asked. It’ll come across as human. Accessible. Truthful.

The only truth is that it will further codify an established mechanism for telling us what it thinks we should know, not necessarily what we’re searching for.

That’s the AI bias that should scare us most.
