

Headline Act

The biggest danger when writing stories based on modern technology is seeming dated by the time people read them – especially when you're trying to speculate on how the tech in question might develop in the very near future.

Charlie Stross said it ten years ago, and it's still largely true: the incessant acceleration of technology development has resulted in a world where it's almost impossible to write near-future SF without looking like a fool by the time the book comes out. Imagine if you decided, six months ago during its seemingly inexorable rise, to write a story where Bitcoin takes over the world and crashes markets. You'd be rather embarrassed by its current sorry state – a state which, of course, could easily change yet again.

This is on my mind because of a couple of audience questions from Thursday's Exphoria Code event: how real is the tech in the book, and do I spend all day reading online articles about technology to put in my work?

First: the Exphoria Code tech isn't 100% achievable right now, but it's much closer than most readers probably realise. Rudimentary UAV self-guiding systems are already operational; the book's “Exphoria project” is simply an evolution, albeit a significant one, of those methods. Second: I'll take the Fifth.

Well, all right. It's true I spend a lot of time reading articles about tech and wondering how I could use them in stories (see my piece on 'Zombie Satellites', for example). But what's really valuable isn't the stories themselves – after all, those are things that have already happened. What's useful is thinking about how the principles behind them could be extrapolated.

Two articles that recently dropped into my feed reader (yes, some of us still use RSS) come to mind.

The main one is a brilliant example of William Gibson's oft-quoted maxim 'the street finds its own uses for things': it seems organised crime outfits are laundering money through Amazon by using cloned IDs of real authors to create fake ebooks of gibberish, set a ridiculous price, then self-purchase them so the money gets washed through Amazon and returned to the scammer's account.

It's kind of genius, because in an age of digital goods the cost of making the product is as close to zero as makes no odds – the contents and titles are algorithmically generated, the upload process is automated, and Amazon has spent literal decades making the process of purchasing goods as fast and frictionless as legally possible. Someone wants to spend $555 on a book nobody's ever heard of? Amazon's systems don't care, don't want to care. Caveat emptor, and just process the damn transaction already.
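
To illustrate just how low that cost is, here's a minimal sketch in Python – entirely hypothetical, since the scammers' real tooling is unknown – of churning out a novel-length 'book' of gibberish in a fraction of a second:

```python
import random
import string

def random_word(min_len=3, max_len=10):
    """Assemble a nonsense 'word' from random lowercase letters."""
    length = random.randint(min_len, max_len)
    return "".join(random.choices(string.ascii_lowercase, k=length))

def gibberish_book(n_words=50_000):
    """Generate a title and a novel-length manuscript of pure gibberish.

    50,000 words is roughly a short novel; producing it takes well under
    a second, so the marginal cost of each fake 'product' really is as
    close to zero as makes no odds.
    """
    title = " ".join(random_word().title() for _ in range(4))
    body = " ".join(random_word() for _ in range(n_words))
    return title, body

title, body = gibberish_book()
print(f"'{title}': {len(body.split()):,} words, ready to upload")
```

From there, setting the price and automating the upload are just more scripting; the only human effort left is collecting the laundered proceeds.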

But it's why Amazon's systems don't care, and allow a book of gibberish to be sold for an outrageous price, that makes up the interesting part of this story – the part that could be cut out to form the foundation of something new.

With ebook self-publishing exploding in popularity, there is now more 'user-generated' content uploaded to Amazon than humans could possibly read in order to verify that it's 'real'. Now, first of all, what is real? Any human editor tasked with checking and verifying could easily reject Chuck Tingle's latest opus¹, even if they had time to read it. But they don't – so why not put AI on the case? We keep hearing about 'deep learning' and all that, right?

Yes, we do. But we also keep hearing about how terrible machines are at separating gibberish from intelligible grammar. So, much like YouTube, Amazon instead has a policy of “allow everything, investigate only when flagged by another user”. Just one more way in which users are freely doing work that increases a corporation's value… but I digress.
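
To see why 'put AI on the case' is harder than it sounds, consider a naive screening heuristic – a hypothetical sketch, not anything Amazon actually runs – that flags a manuscript when too few of its words appear in a dictionary. It catches letter-salad, but a scammer defeats it simply by stringing real words together in a meaningless order:

```python
import random

# Toy word list; a real screener would load a full dictionary.
DICTIONARY = {
    "the", "street", "finds", "its", "own", "uses", "for", "things",
    "money", "book", "price", "reader", "account", "purchase",
}

def looks_like_gibberish(text, threshold=0.5):
    """Flag text when fewer than `threshold` of its words are known words.

    A shallow check like this is easy to automate at Amazon's scale,
    and just as easy to defeat.
    """
    words = text.lower().split()
    if not words:
        return True
    known = sum(1 for word in words if word in DICTIONARY)
    return known / len(words) < threshold

# Random letters are caught...
print(looks_like_gibberish("xqzvb nrtpl kjwef ooqnd"))  # True

# ...but a word-salad of real words sails straight through, even though
# no human reader would mistake it for prose.
word_salad = " ".join(random.choices(sorted(DICTIONARY), k=30))
print(looks_like_gibberish(word_salad))                 # False
```

Telling that second case apart from genuine prose means modelling grammar and meaning, which is exactly where the machines still fall down – hence the cheaper policy of simply waiting for a human to complain.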

And YouTube is the other side of this algorithmic nightmare, as it was revealed yesterday that the YouTube Kids app – the one you're supposed to be fine with leaving your children to swipe around in all day – has been surfacing conspiracy videos from the likes of David “the world is run by human-lizard hybrids from space” Icke. How did this happen? Probably because Icke's videos normally take the form of lectures, so there's no profanity, no sex or violence; just an ex-soccer player who realised he could make more money as a scam artist spouting conspiracy theories to frightened, gullible people all over the world. Including your kids. But, hey, no nipples!

(“Sound of a thousand SF writers furiously typing”, as I often comment on Twitter.)

We live in a world where two of the largest global content platforms are effectively unregulated spaces that can be exploited by criminals to launder money, by con men to spread their (highly profitable) conspiracies… and who knows what else?

That's the interesting part, for a writer. If these fairly blatant examples can go undetected until a user happens to stumble across them, what subtler shenanigans might be taking place? And would anyone even notice?


Originally published March 2018

¹ At time of writing it was widely believed that Tingle's work was an AI-generated prank. Of course, we now know better… and if anything, this confusion only reinforces the point of the essay.
