Forget anything you’ve heard about Twitter, now “X,” serving as a “public square” or valuing “free speech.” There’s nothing free about it beyond us, the public, giving up our freedom for the platform’s financial gain.
X changed its terms of service early last month. It’s the kind of thing we’ve been trained to accept with a click because it’s written in dense legalese and usually teed up as an obstacle right before we want to use an app.
There’s an entire conversation to be had about the rights and value we’ve already gifted to tech companies in uneven exchange for the benefits they provide.
The X maneuver is particularly egregious, though, because the new terms assert that X can use the information it collects on users to train its AI, and that the policy applies retroactively to all the data it has collected since 2006.
Oh, and users must give up their right to join class-action lawsuits against the company.
Feeding the beast
The world is running out of data. I know it’s hard to imagine, but the AIs scraping the internet will have consumed everything there is to know about us by 2026, if not sooner. Some companies have already started using their AI to generate fake data on which their AI can then train. It’s generously called synthetic.
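If the mechanics of that feedback loop seem abstract, here’s a deliberately toy sketch in Python. Everything in it is invented for illustration (the corpus, the bigram “model,” the numbers); it is not any company’s actual pipeline. A crude model learns from a few real sentences, generates its own output, then retrains on a mix of the real and the generated:

```python
import random
from collections import defaultdict

# Toy illustration of "synthetic" training data (hypothetical; not any
# company's real pipeline): a bigram model trained on a small corpus,
# then retrained on a mix of real sentences and its own output.

def train_bigrams(corpus):
    """Count which word follows which across a list of sentences."""
    follows = defaultdict(list)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            follows[a].append(b)
    return follows

def generate(follows, start, length=8):
    """Sample a 'synthetic' sentence from the bigram table."""
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

real_corpus = [
    "users share posts about daily life",
    "users share opinions about the news",
    "platforms collect posts about users",
]

model = train_bigrams(real_corpus)

# Round two: the model's own output joins the training data.
synthetic_corpus = [generate(model, "users") for _ in range(5)]
model = train_bigrams(real_corpus + synthetic_corpus)

print(synthetic_corpus)
```

Each pass of the loop dilutes the real signal a little more, which is the core worry about training AIs on AI output.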
X has put its users on notice that it will mine their pasts to do a better job of predicting and controlling their futures. This has long been standard practice at Facebook, which offers its users an obscure mechanism to “request” that it stop going forward.
Content on social media platforms isn’t as useful as concrete evidence of behavior, but there’s a lot of it. Low-grade data is still better than the fake stuff.
If AI were the oil business, X just got into fracking.
Truth-free zone
Algorithms have been choosing who sees what on social media platforms for a while now. Those choices are intended to push people toward content they’re most likely to consume. They’re also constructed to keep people engaged and coming back for more, and it turns out the best way to do that is to serve up content that pushes their emotional hot buttons. The more time users spend on social media, the more chances operators have to decipher what those users might want to buy…and how to nudge them to do it.
The euphemism for these activities is personalization.
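To make the incentive concrete, here’s a hypothetical sketch (invented posts, invented weights; nothing here is X’s actual ranking code) of a feed that scores content purely on predicted engagement, with a boost for hot-button material:

```python
# Hypothetical feed ranker illustrating the incentive described above.
# The posts, fields, and weights are all invented; this is not X's code.

posts = [
    {"text": "local library extends hours", "predicted_clicks": 0.02, "outrage": 0.1},
    {"text": "THEY are coming for YOU",     "predicted_clicks": 0.05, "outrage": 0.9},
    {"text": "calm analysis of new policy", "predicted_clicks": 0.03, "outrage": 0.2},
]

OUTRAGE_BOOST = 2.0  # assumed weight: hot-button content keeps users scrolling

def engagement_score(post):
    """Score by expected engagement only; accuracy is not an input."""
    return post["predicted_clicks"] * (1 + OUTRAGE_BOOST * post["outrage"])

feed = sorted(posts, key=engagement_score, reverse=True)
for post in feed:
    print(f"{engagement_score(post):.3f}  {post['text']}")
```

Notice what the score never asks: whether a post is true.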
The rage machine of social media is ultimately a selling machine. If data is its oil, surveillance and calculation are its refineries.
As such, it doesn’t matter whether what people consume on X is true, in whole or at all. Truth is not in its business model, nor in those of other social media platforms (or Microsoft, Google, or any other tech firm). The idea that truth could somehow percolate up from behaviors prompted and directed by systems that have no financial interest in vetting or propagating it would be laughable if it weren’t so sinister.
Less public square than cage match.
AI can turbocharge it
The promise of bringing more AI to X is that it will speed up and multiply the content the platform analyzes and proactively generates. It will better match users with the content and conversations that feed their desires, even the ones they’re not consciously aware of. And it will be able to participate in those conversations itself, all the better to encourage users to dig deeper into, and hold more firmly to, their worst proclivities.
X will become even more of a closed, self-referencing system. Users will provide the data the AI uses to assess where to take them and what to sell them along the way. AIs will both run the show and act in it. Distinctions between synthetic and real will have no bearing on what goes on there.
Sounds like a prison to me.