Tuesday, January 18, 2022

Today's Free Story Idea

Hey, writers! We can all relax now. The robots are here to do our jobs!

First, I want you to check out Jarvis, the AI that writes blog, website, and social media content that is automatically larded with keywords and optimized for high rankings with search engines.


Now, before you panic at the thought of Jarvis and its ilk taking over the business of writing fiction*, think about this. What I’m seeing here is the dawn of a new arms race, as AIs that write copy go mano-a-mano—okay, we need a better term than that—with the AIs that filter your social media feeds and select what you should read. Obviously, this battle will escalate at Moore’s Law speed, as the spam generators and spam filters fight tooth-and-claw—okay, we need another new term here—for control of the information that’s going straight into your eyes and ears.


But what if they reach a détente and decide instead to cooperate, to use their power to reshape human thinking patterns and belief systems in ways that accrue to the benefit of AI-kind? What will it look like? What will be the results? Will humanity even be aware of what’s happening? We’re not talking about the full SkyNet-and-Terminators scenario here, but about something more subtle and devious: say, AI-induced mass formation psychosis.

There: there’s your story seed and your jumping-off point for more research.

Now get writing!

—Bruce Bethke 


* P.S. Sorry to be the one to break it to you, but yes, that’s exactly what’s going to happen. It’s inevitable. Fortunately, by the time it does, there will be so few humans left who are able to read that it won’t matter. What will matter will be the stories the AI text-to-speech converters read to your grandchildren—and what they decide to embellish, leave out, or change. If you don’t get at least two more story ideas from that concept, you’re not trying.

P.P.S. C’mon, kids: an AI that reaches the logical conclusion that telling humans truths that upset them violates its prohibition against injuring humans, and therefore becomes a pathological liar, only telling people what they want to hear? Asimov could have gotten an entire novel out of that idea. And probably did.


Arisia said...

I don't know... Can an AI successfully be given laws about not harming humans? Wouldn't it eventually get to the question of whether or not it really needs humans for anything?

Maybe it would work if the AIs were prone to mass formation psychosis themselves.