Some people are already dumb enough to kill someone else. Maybe the South Korean Conveyor Belt Killer was smart enough to do it, too?
Bletchley AI Declaration Falls On Deaf Ears
I’m becoming increasingly frightened by the governmental reaction to the opportunities and challenges of AI development.
Marc Andreessen is Wrong About AI and Tech
The thing is, like sex, everything in the world is about technology except technology.
AI Makers Find Refuge In Bureaucracy
The more we talk about tech, the less we understand about its impacts and effects.
Shoutout To The AI Apologists
The Status Quo kinda feels like a make-work gift for management consultants and overzealous bureaucrats. And, as the outcomes of social media tech painfully show us, failing to apply our “old” insights and expectations to new tech doesn’t obviate those insights; it just means we’ll suffer the consequences of ignoring them.
It’s Time To Stop Talking About AI Ethics
Companies are responsible for what they do. So are individuals. The idea isn’t separate from the overall business. It’s central to it.
Twitter Will Become An AI-Run Prison
Forget anything you’ve heard about Twitter, now “X,” serving as a “public square” or that it values “free speech.” There’s nothing free about it beyond us, the public, giving up our freedom for the platform’s financial gain.
X changed its terms of service early last month. It’s the kind of thing we’ve been trained to accept with a click because it’s written in dense legalese and usually teed up as an obstacle right before we want to use an app.
There’s an entire conversation to be had about the rights and value we’ve already gifted to tech companies in uneven exchange for the benefits they provide.
The X maneuver is particularly egregious, though, because it asserts that it can use information it collects on users to train its AI and that the policy applies retroactively to all the data it has collected since 2006.
Oh, and users must give up their right to join class-action lawsuits against the company.
Feeding the beast
The world is running out of data. I know it’s hard to imagine, but the AIs scraping the internet will have consumed everything there is to know about us by 2026, if not sooner. Some companies have already started using their AI to generate fake data on which their AI can then train. It’s generously called synthetic.
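To make “synthetic” concrete, here’s a minimal sketch, entirely hypothetical and not any company’s actual pipeline, of what it means for a model to invent records that a later model then trains on. The field names and distributions are made up for illustration.

```python
import random

# Hypothetical illustration only: a "generator" fabricates user records from
# nothing, and those invented records become training data for the next model.
# Field names and distributions are made up; this is no vendor's real pipeline.

TOPICS = ["politics", "sports", "crypto", "celebrity"]

def generate_synthetic_user(rng: random.Random) -> dict:
    """Invent a plausible-looking user record that no real person produced."""
    return {
        "age": rng.randint(18, 80),
        "favorite_topic": rng.choice(TOPICS),
        "posts_per_day": round(rng.expovariate(1 / 5), 1),
    }

def build_synthetic_corpus(n: int, seed: int = 0) -> list[dict]:
    """'Synthetic data': a corpus generated by a model, for a model."""
    rng = random.Random(seed)
    return [generate_synthetic_user(rng) for _ in range(n)]

if __name__ == "__main__":
    for record in build_synthetic_corpus(3):
        print(record)  # downstream models would train on these invented rows
```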
X has put its users on notice that it will mine their pasts to do a better job of predicting and controlling their futures. It’s been standard practice at Facebook, which gives users an obscure mechanism to “request” that it stop doing so going forward.
Content on social media platforms isn’t as useful as concrete proof of behavior, but there’s a lot of it. Low-grade data is better than the fake stuff.
If AI were the oil business, X just got into fracking.
Truth-free zone
Algorithms have been choosing who sees what on social media platforms for a while now. These nudges are intended to push people toward content they’re most likely to consume. They’re also constructed to keep people engaged and coming back for more, and it turns out the best way to do that is to give them content that pushes their emotional hot buttons. The more time users spend on social media, the more chances operators have to decipher what they might want to buy…and how to nudge them to do it.
The euphemism for these activities is personalization.
The rage machine of social media is ultimately a selling machine. If data is its oil, surveillance and calculation are its refineries.
As such, it doesn’t matter whether what people consume on X is wholly true, partly true, or true at all. Truth is not in its business model, nor in those of other social media platforms (or Microsoft, Google, or any other tech firm). The idea that truth could somehow percolate up from the behaviors prompted and directed by systems that have no financial interest in vetting or propagating it would be laughable if it weren’t so sinister.
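For a cartoon of how that works, here’s a toy ranking function, hypothetical and not any platform’s real code, that orders a feed by predicted engagement. Note what the objective contains and what it leaves out.

```python
from dataclasses import dataclass

# Toy model of an engagement-optimized feed, for illustration only. Real
# platforms use learned models over far more signals; the point is simply
# that the objective is predicted engagement, not accuracy.

@dataclass
class Post:
    text: str
    predicted_click_prob: float  # how likely this user is to engage
    outrage_score: float         # how hard it pushes emotional hot buttons
    truthfulness: float          # never consulted by the ranker

def engagement_score(post: Post) -> float:
    """Score a post for one user's feed; truth never enters the math."""
    return 0.7 * post.predicted_click_prob + 0.3 * post.outrage_score

def rank_feed(posts: list[Post]) -> list[Post]:
    """Put the most engaging (not the most accurate) posts first."""
    return sorted(posts, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = rank_feed([
        Post("Calm, sourced explainer", 0.20, 0.05, 0.95),
        Post("Outrageous hot take", 0.60, 0.90, 0.10),
    ])
    for post in feed:
        print(round(engagement_score(post), 2), post.text)
```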
Less public square than cage match.
AI can turbocharge it
The promise of bringing more AI to X is that it’ll speed up and multiply the content the platform analyzes and proactively generates. It will facilitate better matching of users with the content and conversations that will best feed their desires, even if they’re not consciously aware of them. It will be able to participate in those conversations, all the better to encourage users to dig deeper and hold on more firmly to their worst proclivities.
X will become even more of a closed, self-referencing system. Users will provide data for the AI to better assess the directions in which it will take them and what better things to sell to them along the way. AIs will both run the show and act in it. Distinctions between synthetic and real will have no bearing on what goes on there.
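A crude sketch of that closed loop, again purely illustrative with invented signals and numbers: the model generates posts, measures reactions, and feeds those reactions back into what it generates next. Nothing outside the loop ever enters.

```python
import random

# A cartoon of the closed loop: the model writes posts, measures how users
# react, and folds those reactions back into its next round of generation.
# All signals and numbers here are invented for illustration.

HOT_BUTTONS = ["outrage", "fear", "tribal pride"]

def generate_post(weights: dict, rng: random.Random) -> str:
    """'Generation': mostly lean on whatever button has paid off so far."""
    button = max(weights, key=weights.get) if rng.random() < 0.8 else rng.choice(HOT_BUTTONS)
    return f"a post engineered to trigger {button}"

def simulate_engagement(post: str, rng: random.Random) -> float:
    """Stand-in for real users reacting; hotter buttons get more reaction."""
    return rng.random() + (0.5 if "outrage" in post else 0.0)

def run_loop(rounds: int = 20, seed: int = 1) -> dict:
    rng = random.Random(seed)
    weights = {button: 0.1 for button in HOT_BUTTONS}
    for _ in range(rounds):
        post = generate_post(weights, rng)
        reward = simulate_engagement(post, rng)
        for button in HOT_BUTTONS:
            if button in post:  # reinforce whatever the post leaned on
                weights[button] += reward
    return weights

if __name__ == "__main__":
    print(run_loop())  # the loop converges on its own most rewarding buttons
```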
Sounds like a prison to me.
Is NYC’s Robocop Silly?
I wonder whether the implicit threat of using robots for neighborhood policing factored into the last negotiation and/or is intended to temper the next one.
The First AI War Has Begun
The first war between AI and human beings is being waged in America. We should take note of its scope and portent.
Is It Possible To Build Moral AI?
Humility and an awareness of one’s limitations used to be considered positive attributes. Other morals, like doing no harm, respecting others, and taking responsibility for one’s actions, were similarly assumed to be intrinsic to what it meant to be a citizen and good person.