(Older post - 2019 I think?)
Earlier this year I had the privilege of being included on a discussion panel about the dangers of artificial intelligence. This is an area I’ve long watched and the panel went off well; I’ve been meaning, since then, to piece together my notes into a coherent blogpost. Nearly six months after the fact oughta be enough, right?
Put short and simple, the primary dangers from artificial intelligence are still us. Human intention, human development, and human deployment represent the largest AI threat profiles for now, and will for some time to come.
1. Humans are still bad at computers
Or as ComradeEvee once put it,
“Roses are red
software is no bueno
when I am god
computers go in the volcano.”
Humans continue to be bad at computering and we’ve only got ourselves to blame. In “Throwing Rocks at the Google Bus” Douglas Rushkoff likens the digital economy to the “horseless carriage” stage of automobile development, and the “moving pictures” stage of cinema. Rushkoff prefaces the statement by commenting about how the first computer workspaces imitated the physical desktops they sought to replace. I’d suggest Rushkoff got ahead of himself a little: we’re barely out of the horseless carriage stage for computers, much less whole digital economies. In very few cases have digital workspaces changed from a two-dimensional platter dominating your vision. Monitors have gotten larger, operating systems have developed multiple-workspace setups to allow for easier multitasking and task-transitions, but how far has any of it actually advanced from a desk platter and notebook?
The exceptions here tend to be virtual workspaces – of which I’m a fan, as well as a desperate desirer. There are few things I want more than a VR workspace in which I’m surrounded by cubic whiteboards that can be repositioned, spun, stickied, and hyperlinked to each other.
In addition to everyday computing not advancing as far as we’d like to pretend, we’re still laughably bad at everything that lies beneath. Not just because of how we envision the technology, but because the technology regularly takes second chair to architecture and operations budgets.
To flesh this idea out I’m going to reference one of the most exciting primary documents you can imagine: an SEC filing. For a longer version, Doug Seven tells this story magnificently.
In 2012 the financial services firm Knight Capital boasted $365 million in cash and assets. In preparation for a New York Stock Exchange initiative, Knight deployed an update to its SMARS high-speed algorithmic trading router. SMARS received large orders from other parts of Knight’s trading platform and broke them up into smaller “child” orders for execution. The update made its way to seven of eight servers successfully. Number eight, not so much.
When trading opened, old code that hadn’t been used in nearly eight years reactivated. However, five years earlier Knight had moved the count tracker to a different section of code. Server 8 had no idea when its child orders fulfilled the parent order and immediately created an endless loop of more and more orders being executed on Knight’s behalf. Everyone knew a problem had occurred within the first minute of trading. During the first forty-five minutes of trading, SMARS accounted for 50% of trading volume, about eight million shares per minute. Since no killswitch existed, Knight tried to debug its trading system while it was still running and removed the new, working code first, which left just the broken code from Server 8 still flailing. In the 45 minutes it took to kill, SMARS accounted for $460 million in losses. I refer you to the above paragraph to remind you that’s higher than the cash and assets they had available. Knight was subsequently acquired for a steal.
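The SEC filing describes the failure in prose, not code, but the class of bug is easy to sketch: a legacy loop whose only stop condition reads a counter that a later refactor stopped updating. Here is a minimal, entirely hypothetical Python sketch of that failure mode; none of these names come from Knight’s actual system.

```python
# Hypothetical sketch of the failure mode: a legacy loop whose only stop
# condition reads a counter that a later refactor stopped updating.
# All names are invented; this is not Knight's actual code.

class ParentOrder:
    def __init__(self, total_shares: int):
        self.total_shares = total_shares
        self.legacy_filled = 0   # counter the old code still reads
        self.filled = 0          # counter the refactored code now updates

def send_child_order(parent: ParentOrder, qty: int) -> None:
    # The refactor moved fill tracking here; the legacy counter is never touched.
    parent.filled += qty

def legacy_child_order_loop(parent: ParentOrder, child_qty: int = 100,
                            safety_cap: int = 1_000) -> int:
    """Old code path: keep sending child orders until the parent is filled.
    Because it watches the stale counter, the stop condition never triggers."""
    orders_sent = 0
    while parent.legacy_filled < parent.total_shares:   # always true: stays at 0
        send_child_order(parent, child_qty)
        orders_sent += 1
        if orders_sent >= safety_cap:   # stand-in for someone finally pulling the plug
            break
    return orders_sent

order = ParentOrder(total_shares=1_000)
print(legacy_child_order_loop(order))   # prints 1000, not the 10 children actually needed
```

The toy version at least has a safety cap; as the filing makes clear, the production system had nothing of the sort.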
The Knight Capital failed deploy highlights a few dangers of AI: the first is that we still suck at deploying code, and the second is that these systems operate with such speed and complexity that we’re unable to follow them in real time, and inherently incapable of understanding them deeply. The stakes are amplified exponentially when the system is physically manifested in, say, a robot combat chassis.
Imagine debugging in real, human time a failed firmware update to a robot with the capability to do physical damage.
2. We don’t, and can’t, understand well enough how they operate
Piggybacking on Knight, let’s talk a little more about human conceptions of what an AI is doing, in terms of both time and understanding.
Google’s AlphaGo AI, since succeeded by AlphaZero, was trained on the game Go and went on to dominate other board games, chess and shogi among them. AlphaGo was trained on a dataset of a hundred thousand Go games, and its seminal moment is known as Move 37: the 37th move in a game against a human opponent, played so counterintuitively that no person involved could understand what the hell it was doing. Examining AlphaGo’s decision process later, researchers found that AlphaGo knew this: its process for each move included calculating the probability that a human player would play that move, and in the case of Move 37 that probability was less than 1 in 10,000.
It’s not hard to see that not only did this neural network know its move wasn’t particularly human, it could use that probability to its advantage.
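DeepMind’s published descriptions of AlphaGo have a policy network assign every legal move a probability that a strong human would play it, which is where that 1-in-10,000 figure comes from. Here is a toy sketch of the idea only; the moves, numbers, and structure are invented for illustration, not taken from DeepMind’s code.

```python
# Toy illustration of the idea behind "Move 37": a human-imitation policy gives
# each move a probability that a strong human would play it, while search gives
# each move an expected value. A move can score high on value while being nearly
# invisible to the human prior. All numbers here are made up.

from dataclasses import dataclass

@dataclass
class CandidateMove:
    name: str
    human_prior: float      # P(human expert plays this move), from the policy net
    expected_value: float   # estimated win probability, from search

candidates = [
    CandidateMove("conventional shoulder hit", human_prior=0.21, expected_value=0.52),
    CandidateMove("standard extension",        human_prior=0.34, expected_value=0.51),
    CandidateMove("fifth-line shoulder hit",   human_prior=0.00008, expected_value=0.57),
]

best = max(candidates, key=lambda m: m.expected_value)
print(f"chosen: {best.name}")
if best.human_prior < 1 / 10_000:
    print("note: a human would almost never play this move")
```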
AlphaGo started from that dataset of a hundred thousand human games, and its training ultimately spanned about thirty million games. That AlphaGo could play so many games is another tell: it has a whole lot of time we don’t. It took AlphaGo several years to reach mastery level. Its next evolution took three days.
AlphaGo’s successor AlphaZero built upon the lessons learned, as well as other advancements, and uses significantly less computing power during play (4 processors versus 48). And it didn’t get the benefit of AlphaGo’s dataset: researchers provided AlphaZero with only the basic rules and told it to compete against itself. Instead of learning from human games, AlphaZero would only learn adversarially, through self-play. And learn it did.
AlphaZero reached mastery level in 4.9 million games, roughly a sixth of what AlphaGo needed. When pitted against AlphaGo, AlphaZero learned how to beat it within three days.
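The self-play setup is simple to outline even though the real training pipeline isn’t: the only input is the rules, and every training game is the current network playing itself. A schematic sketch, with every function a stand-in rather than anyone’s real API:

```python
# Schematic of self-play training as described above: no human games, just the
# rules and the agent playing itself. Every function here is a placeholder.

def initial_network():
    return {"strength": 0.0}          # stand-in for network weights

def play_self_play_game(net):
    # The agent plays both sides; returns positions and outcomes to learn from.
    return [("some position", +1), ("another position", -1)]

def update_network(net, game_record):
    net["strength"] += 0.001          # stand-in for a gradient step
    return net

net = initial_network()
for game in range(10_000):            # the real figure cited above is ~4.9 million games
    record = play_self_play_game(net)
    net = update_network(net, record)

print(net["strength"])
```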
Artificial intelligences are unknowable in the sense that we will never have enough time to analyze their decision-making processes and understand them well enough. OpenAI’s DOTA2 record, last I checked, was 1923-9 against humans, thanks to its ability to amass 45,000 years of experience in ten months’ time.
3. AIs are deployed according to human values
The speed of AI could benefit quality of life in any number of areas. But as Rushkoff noted in his book, “When an economy has been based in exploiting real and artificial scarcity, the notion of a surplus of almost anything is a mortal threat.” And in typical form, AI deployments center on extractive abilities more than anything else: their function is to separate people from their money. Machine learning has allowed marketers to massively scale A/B testing, not to find out what the customer most wants but to find what most effectively provides the deployer with income. This is apparent in metrics around interface design, tiny fluctuations in pricing, and associational product suggestions.
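At scale, this kind of testing is usually framed as a bandit problem, and the objective function is the tell: the quantity being maximized is revenue per impression, not anything the user would describe as satisfaction. A minimal epsilon-greedy sketch, with invented variants and numbers:

```python
# Minimal epsilon-greedy bandit over page variants. The point is the objective:
# the reward being maximized is revenue per impression, not anything the user
# would recognize as "what I wanted". All variants and numbers are invented.

import random

VARIANTS = ["layout_a", "layout_b", "higher_price_anchor"]

def observed_revenue(variant: str) -> float:
    # Stand-in for logging real purchase data per impression.
    base = {"layout_a": 0.05, "layout_b": 0.06, "higher_price_anchor": 0.09}
    return random.gauss(base[variant], 0.02)

totals = {v: 0.0 for v in VARIANTS}
counts = {v: 1 for v in VARIANTS}   # start at 1 to avoid division by zero

for impression in range(10_000):
    if random.random() < 0.1:       # explore 10% of the time
        chosen = random.choice(VARIANTS)
    else:                           # otherwise exploit the best earner so far
        chosen = max(VARIANTS, key=lambda v: totals[v] / counts[v])
    totals[chosen] += observed_revenue(chosen)
    counts[chosen] += 1

print(max(VARIANTS, key=lambda v: totals[v] / counts[v]))  # winner judged by revenue only
```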
There’s a massive problem when the only people with money to create and deploy artificial intelligences are the ones looking to extract and hoard capital. And there’s another problem: all these systems pitched in terms like “convenience” have no place for people they can’t classify into a consumer group.
Several times a year Twitter serves up a heartbreaking example of this: a brave woman details how extractive AI picked up on her browsing and purchase trends, sometimes before even she knew, and concluded that she was pregnant. And so linked ad-serving platforms began serving up pregnancy tests, then mid-pregnancy products. But there always comes a point when the user’s profile diverges from the consumer classification: in this case, when she began googling about intermittent spotting, abdominal pain, or other symptoms of a miscarriage.
There’s no room for the bereaved in this economy. Most people, even marketers, understand that trying to sell products to a grieving person is abhorrent. But what’s coded into these classification systems is extraction, not compassion. We’ve written them to hoard and sell and do little else.
And so when the user stops using pregnancy search terms, the ad platforms jump to the conclusion that lets them reclassify her: she gave birth, and should now be deluged with maternity ads. And so women grieving miscarriages suddenly see, in every ad-served place possible, maternity products for a child they’ve just lost. They get automated emails from credit agencies urging them to enroll their child in credit protection. Further abhorrent things.
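No ad platform publishes its segmentation rules, but the behavior described above implies a hard-coded assumption along these lines; the labels and logic here are invented to illustrate the point, not taken from any real system.

```python
# Hypothetical sketch of the reclassification logic implied above. The labels
# and rules are invented; the point is that the only exit the model allows
# from "expecting" is "new parent": grief isn't a state it can represent.

def reclassify(current_segment: str, recent_signals: set[str]) -> str:
    if current_segment == "expecting":
        if "pregnancy_searches" in recent_signals:
            return "expecting"      # keep serving pregnancy products
        # Signals stopped, so the model assumes the outcome with revenue attached:
        return "new_parent"         # start serving maternity and newborn ads
    return current_segment

# A user who stopped searching because of a miscarriage is misread entirely.
print(reclassify("expecting", recent_signals={"symptom_searches"}))  # -> "new_parent"
```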
I’ll say it again: there’s no room for the bereaved in an extractive economy, and we’ve obviously encoded only the extraction into our machine learning services so far.
4. Lack of diversity dooms any attempt at decent AI
One of the biggest self-reinforcing failings in machine learning and artificial intelligence so far is our failure to provide diverse representation in technology and academia, which feeds directly into the point above.
According to a recent report from AINow:
Only 18% of authors at leading AI conferences are women.
More than 80% of AI professors are men.
2.5% of Google’s workforce is black; Facebook and Microsoft are at around 4%.
Systems built by this kind of workforce do not and cannot serve people of diverse backgrounds and ethnicities effectively. Consider the automated soap dispensers that can’t detect dark skin, or the smartwatch sensors that can’t properly detect dark or tattooed skin. Consider voice assistants that can’t effectively understand heavy accents or vernacular dialects.
Put simply, AI becomes a direct threat when only one group builds it. Such systems, as AINow warns, “replicate patterns of racial and gender bias,” and the report adds that “systems that use physical appearance as a proxy for character or interior states are deeply suspect.” We spent the twentieth century trying to pretend this problem doesn’t exist in assessment testing; now we’re encoding it into the machines we’ll talk to, and that will talk to us. And into machines that will look at us and render judgments, whether it’s what department in the store we’re likely to visit next, whether we belong in a given situation, or whether we’re a threat to property or person. Whether we deserve a loan, or a harsh criminal sentence.
Until we deal with these, and especially this last, we as people will continue to drive the major threats that artificial intelligences represent.