
The Flying Machine

Original version of an article from issue 85 of Aspenia


“Airplanes are becoming far too complex to fly. Pilots are no longer needed, but rather computer scientists from MIT,” President Donald Trump tweeted some time ago. He added, “I see it all the time in many products.”

Of course he does: The world is headed toward smarter and smarter gadgets. Streaming services that know the next song you want to hear. Robot dogs that sense your mood like, say, real dogs. Cars that not only drive themselves but also set the rules of the road – like, say, governments.

So when you think of the dangers of artificial intelligence (AI), your mind may not go immediately to airplanes. But when planes crash due to computer glitches, they do bring home the threat of technological advance.


When the great science fiction writer Ray Bradbury set out in 1953, at the dawn of the nuclear age, to write a cautionary short story about technology, that’s exactly where he went. “The Flying Machine” is set in 400 A.D. The Chinese Emperor Yuan learns that a man has invented a machine that allows him to fly. Yuan is struck by its beauty, but calls the man to his palace and has him executed: “Who is to say that someday just such a man, in just such an apparatus of paper and reed, might not fly in the sky and drop huge stones upon the Great Wall of China?”

We’re living again at a time of heightened receptivity to such technophobia. Twenty years ago, digital technologies were viewed with unquestioned optimism: They would overthrow dictatorships, promote the spread of truth and knowledge, bring us together, and democratize everything.

Less than a generation later, however, the conventional wisdom is that all emerging technologies – but most notably AI – inherently tend toward authoritarianism, a view most widely popularized by the Israeli historian and author Yuval Harari.[1] A somewhat more circumspect version of the argument, by Zeid Ra’ad Al Hussein, former U.N. high commissioner for human rights, asserts that it’s still possible to “decide whether technology becomes a force for good, or evil”[2] – but Al Hussein too sees warning signs in aviation technology’s turn from its originally “utterly whimsical” uses to warfare, just as Emperor Yuan foresaw. (The piece ran under a photo of the mushroom cloud over Bikini Atoll.)

The flaw in thinking like Emperor Yuan’s is the notion that killing a dreamer with a kite-like set of wings will destroy the dream – and avoid the nightmare.  New ideas and technologies are inevitable; so is their use for evil. But can a technology – such as AI – truly be said to tend inevitably in one moral direction or another?


The discussion requires a distinction among three different layers of the technological ecosystem:

  • At the base level, there is the technology itself – to take a real example from ancient China, gunpowder.
  • Built on that are products, use cases, or, in current lingo, “apps” – for instance, the Chinese use of gunpowder for fireworks displays on the one hand, or to kill people on the other.
  • Then there are the business models built upon those “apps” – such as encouraging gun sales by pointing to the dangers created by all those gun sales.


Technology Doesn’t Kill People

So, let’s start with the technology. In the case of digital technologies, while the initial euphoria over their inevitable democratizing effects has faded over the last twenty years, the Internet has in fact achieved much of this promise. It has created new space for dissent and opposition in many otherwise oppressive countries (even if also providing new tools for monitoring and oppression by these same regimes).  Overall, the Internet, along with other technologies like blockchain that spring from a deep libertarian impulse and rejection of established authorities, is part of a worldwide transformation that is democratizing almost every aspect of life, economics and governance.

Today’s technologies have had a profoundly destabilizing effect on all forms of “authority,” and seem, in fact, to be permanently destroying any notion of “authoritativeness.” Centralized authority has been severely undermined in virtually every activity that involves information processing (which is increasingly everything) – from music, to news and publishing, to video broadcasting and movies, and even transportation (Uber and Lyft) and lodging (Airbnb) – even as the so-called FANGs (Facebook, Amazon, Netflix, and Google) try to recentralize it all. There is a rich, if scary, literature on the democratization of force and violence in this Brave New World. As I have frequently written, the provision of “governance” is being undermined and distributed just as surely – if, like everything in government, more slowly.

In sum, technologies today, individually or collectively, like those that have come before, are neither necessarily centralizing and authoritarian nor democratic and libertarian, neither inherently good nor inherently evil. Their virtue depends upon their uses.


There’s an App for That

Needless to say, there are all sorts of positive uses for the various technologies that come into existence. Airplanes aren’t just – and weren’t originally – used to fire weapons, or as weapons themselves. The data-mining and data-crunching capabilities being refined into machine learning and, eventually, AI make it possible to reduce the spread of disease, fight crime, and deploy public resources more efficiently. They enable companies and political campaigns to identify with precision the messages most likely to trigger support for their product or candidate, enabling them better to cater to their customers’ needs – identifying what the user really wants, perhaps even before the user knows.

Which, of course, is where this can easily tip into manipulation – not offering users wider choice but actually narrowing and dictating it. Putting two and two together to deduce additional facts about a person is as old as human logic; existing data capabilities are already making it possible to put a million and a million together and expose a lot more about someone’s life – at least to those with the resources to do so. This is the data privacy concern – which is really a data ownership issue – that has risen to the top of the public tech discussion in recent years. But it’s nothing compared to the autonomy concern posed by the ability to model almost everything about a person’s likely thoughts and actions, and to channel those.

These data capabilities are stripping people of the ability to control even their identities and self-definition, prying from users – voluntarily or otherwise – every quantum of private information and psychological insight possible in order to manipulate them and then limit their choices in the guise of giving them what they want. Why? Because, as famed bank robber Willie Sutton once said, that’s where the money is.

The problems that are very much at the heart of our current moment – design features intended to addict users, algorithms that steer users to increasingly inflammatory disinformation and encourage confrontation and harassment, practices that extract user data and prey on their weaknesses –  are not, in the argot of the tech world, bugs: They are features.

They are features designed primarily by, and for, and thus created in the image of, a very narrow demographic – young, male (and largely middle-class) tech geeks – while their effects on other populations are essentially ignored, if not exacerbated, reinforcing existing power imbalances and inequities.

But, primarily, these are features designed to serve certain specific business objectives. The primary business model the titans of these industries have so far devised is advertising (which ought to say something about the underlying value of the services themselves). As I’ve previously written for Aspenia,[3] this makes them simply another form of extractive industry – but one where the exploited resource is you. All the problems cited above stem from this chosen business model’s need to maximize user “engagement.” AI is now set to be used to even “better” effect by these platforms and business models.

These problems are not, however, inevitable characteristics of the technology: They are the industry’s business model.


Next Year’s Model

So, what can be done about this?

I organize an annual conference at Columbia University to explore precisely such questions.[4]  The participants this year included experts across a wide range of disciplines – psychologists, sociologists, social critics and social media activists, journalists, a tech industry advocate, legal experts, Democratic and Republican political consultants, a homeland security official from Estonia, a leading international conflict theorist – but the conclusions they reached were extremely consistent.

There was surprising consensus around the need for governmental solutions of some sort, as a counterweight to the massing power of the new, post-industrial behemoths. Analogies were frequently drawn to the then-startlingly-new leviathans of the industrial age and the regulatory agencies (the Interstate Commerce Commission, Federal Trade Commission, and Federal Communications Commission) and sweeping laws (like the Sherman Antitrust Act, National Labor Relations Act, and the broadcasting “fairness doctrine”) devised to confront them.  When new, these industries too were seen as so novel and gargantuan as to be unmanageable by such an old technology as “government.” And yet, a vibrant collective response to the threats they posed emerged.

Practically every day, calls increase from politicians and regulators on both sides of the Atlantic for similar governmental action today against one or another tech industry giant. Count me a skeptic: These technologies represent a larger environmental shift undermining traditional, territorial governments just as much as they’re undermining every other existing incumbent, industry and institution. But even if governments can still act effectively against these technologies, it is not at all clear how they would.

For instance, Senator Elizabeth Warren (D-MA) has called for breaking them all up, but her plan basically just targets the FANGs for what boils down to renewed aggressiveness in enforcing long-standing antitrust principles. This would certainly be salutary – but the concentration and monopolization problem these companies pose is not purely, or even primarily, economic. As Roger McNamee, an early Facebook investor and onetime mentor to its founder who has become a fierce critic, has written, “there is no preexisting model of regulation to address the heart of the problem, which relates to the platform’s design and business model.”

In short, what’s needed isn’t so much less tech or more regulation – it’s different tech companies with very different business models.

McNamee, for instance, thinks a business model free of “massive surveillance, filter bubbles, and data insecurity” may require a subscription model, rather than advertising. I believe, in contrast, that rather than creating services (including a faux sense of community) that people won’t pay for and then trying to monetize (i.e., exploit) their customers, technologists must create commercial ecologies that people value and that can themselves – rather than the people using them – be monetized. With different economic rewards, different programming architectures will be put in place.

Different revenue models and business approaches are possible specifically in the data privacy area that underlies the debate over AI: Consumers can retain ownership of their own data and sell or expose it as they choose, which European countries and some US states are working to legislate. But, in what will prove more important in the long run, a number of companies are developing the business cases for this: Signal offers end-to-end encrypted phone calls and text messaging. DuckDuckGo is a search engine that doesn’t track users’ searches, allows them to block ads, and aims at objective results instead of ranking search answers based on user data (i.e., so you don’t just get the answers you’re calculated already to agree with). Hashtiv, which similarly allows ad-blocking and dispenses with company-determined algorithms, is building an alternative to Facebook that meets European Union data privacy requirements. CoverUS lets users retain full ownership and control of their data and earn profits from selling it if and how they themselves choose.

Emperor Yuan scoffs at the flying machine’s inventor, “It is only necessary [to him] that he create, without knowing why he has done so, or what this thing will do.” But there are plenty more who do think about just that. They’re developing business models with the same technologies we currently fear, but with the greater good in mind.

The question any such idealistic solution must answer is, simply, Will it fly?


Footnotes:

[1] https://www.theatlantic.com/magazine/archive/2018/10/yuval-noah-harari-technology-tyranny/568330/.

[2] https://www.washingtonpost.com/opinions/technology-can-be-put-to-good-use–or-hasten-the-demise-of-the-human-race/2019/04/09/c7af4b2e-56e1-11e9-8ef3-fbd41a2ce4d5_story.html?utm_term=.e6c41835a368.

[3] https://www.aspeniaonline.it/thus-donald-trump-joined-the-global-conflict-on-technology/.

[4] For a video summary of this year’s conference, see https://www.youtube.com/watch?v=S64XdntszjQ&t=81s.