Breandan's Blog

Trust in Automation

In which I discuss the trouble with trusting machines to take our jobs, curate our news feeds, drive our school buses, teach our children, and do lots of boring stuff too difficult to bother doing ourselves. Oh, and quotes. Lots of quotes.

Houston, we have a problem. According to social media, a large fraction of our population will soon be unemployed. Not just unemployed, but unemployable. Thanks to the rapid growth of automation and recent breakthroughs in machine learning, a majority of the world’s human labor will soon be economically obsolete. Too young to retire and too old to retrain, an enormous pool of unskilled labor will be displaced. This is not just idle speculation. Leading scientists and politicians have recognized the immediacy of this problem, and the importance of addressing it in our society.

Trains will move ~332 million passengers during the largest migration in history.

Automation does not just affect unskilled labor. Many jobs which require advanced degrees and years of experience are vulnerable, including a large number of doctors, lawyers, and financial analysts. Each of these professions does work that is already being learned, automated, and optimized by machines. Even mathematical research at the boundaries of our understanding can be automated. A growing number of mathematicians today use interactive proof assistants and automated theorem provers to verify proofs, and even derive new truths. But if current events are any indication, what is true and what is verifiable are entirely different matters.

And when your surpassing creations find the answers you asked for, you can’t understand their analysis and you can’t verify their answers. You have to take their word on faith—Or you use information theory to flatten it for you, to squash the tesseract into two dimensions and the Klein bottle into three, to simplify reality and pray to whatever Gods survived the millennium that your honorable twisting of the truth hasn’t ruptured any of its load-bearing pylons. You hire people like me; the crossbred progeny of profilers and proof assistants and information theorists…

In formal settings you’d call me Synthesist.

—Peter Watts, Blindsight (2006)

Peter Watts, in his breakout science-fiction novel, Blindsight, imagines the occupation of “synthesists”, professional interpreters who translate between AI and humans. In a future where most scientific breakthroughs are discovered by AI, synthesists “explain the incomprehensible to the indifferent.” Watts’ protagonist, Siri Keaton, is a spacefaring science officer who encounters a Matrioshka brain outside the solar system. Siri joins a reconnaissance mission to collect observations and verify the true nature of this strange object. But as Siri soon realizes, not all truths can be verified.

Futurist sci-fi tends to fall into three broad categories. The optimists dream of a post-scarcity utopia where technology descends like manna from AI heaven, and we all travel into some digital promised land, en masse so to speak. The pessimists argue that robot overlords and megacorporations vie for control of a dystopian future where humans are mostly expendable. And the synergists suggest a hybrid future, where humans and machines co-exist in relative happiness, clinging to the hope we possess some vestigial importance to our metallic brethren. These thought experiments serve an important function as we grapple with the increasing effects of automation.

People think—wrongly—that speculative fiction is about predicting the future… What speculative fiction is really good at is not the future but the present—taking an aspect of it that troubles or is dangerous, and extending and extrapolating that aspect into something that allows the people of that time to see what they are doing from a different angle and from a different place. It’s cautionary.

—Neil Gaiman, Introduction to Fahrenheit 451 (2013)

Science fiction, or speculative fiction as some prefer, has a long history of anticipating current events, and takes lessons from past and present alike. In one remarkable passage of The Three Body Problem, a man called Von Neumann helps an ancient Chinese emperor build a computer to predict the movements of stars across the sky. With the emperor’s help, he trains millions of soldiers to form logic gates and memory buses, as they raise and lower flags and march around a vast plain. Wherever an error occurs, the emperor simply executes everyone involved and trains new replacements.

Qin Shi Huang lifted the sword to the sky, and shouted: “Computer Formation!” Four giant bronze cauldrons at the corners of the platform came to life simultaneously with roaring flames. A group of soldiers standing on the sloping side of the pyramid facing the phalanx chanted in unison: “Computer Formation!”

On the ground below, colors in the phalanx began to shift and move. Complicated and detailed circuit patterns appeared and gradually filled the entire formation. Ten minutes later, the army had made a thirty-six kilometer square computer motherboard…

“This is really interesting,” Qin Shi Huang said, pointing to the spectacular sight. “Each individual’s behavior is so simple, yet together, they can produce such a complex, great whole! Europeans criticize me for my tyrannical rule, claiming that I suppress creativity. But in reality, a large number of men yoked by severe discipline can also produce great wisdom when bound together as one.”

—Cixin Liu, The Three Body Problem (2008)

Cixin Liu, a sci-fi writer1 in China, imagines the development of a Kardashev Type-II civilization through allegories. In this example, it is not difficult to see millions of factory workers across China building the devices that will soon replace them. But China is not the only country facing pressure from machines. Many countries with large manufacturing sectors are hugely threatened by the destabilizing presence of automation. As soon as a robot can sew sweaters more cheaply than a human, suddenly every sweater factory can run around the clock, displacing thousands of workers overnight. While they’re at it, why bother shipping goods halfway around the world?

As the cost of labor goes up and the cost of machinery goes down, at some point, it’ll be cheaper to use machines than people. With the increase in productivity, the GDP goes up, but so does unemployment. What do you do? … The best way is to reduce the time a certain portion of the population spends living, and then find ways to keep them busy.

—Jingfang Hao, Folding Beijing (2014)

But job displacement, while a major challenge, is not the real problem facing our species. As history has shown, humanity has survived dozens of technological upheavals. In the agricultural revolution, nomadic hunter-gatherers started breeding their prey, growing their forage, wheeling their food into little villages. The industrial revolution enlisted those farmers as factory workers and foremen in village-sized machines which consumed raw materials and produced smaller machines. Our ancestors saw sweeping social and economic change and still landed on the moon, despite their share of contemporary detractors. So what is the problem exactly?

For too many of us, it’s become safer to retreat into our own bubbles, whether in our neighborhoods or on college campuses, or places of worship, or especially our social media feeds, surrounded by people who look like us and share the same political outlook and never challenge our assumptions. The rise of naked partisanship, and increasing economic and regional stratification, the splintering of our media into a channel for every taste — all this makes this great sorting seem natural, even inevitable. And increasingly, we become so secure in our bubbles that we start accepting only information, whether it’s true or not, that fits our opinions, instead of basing our opinions on the evidence that is out there.

—Barack Obama, Farewell Address (2016)

At the dawn of the information age, we were convinced a newfangled technology called the “internet” would save us all from the uniformity of traditional media. The growth of the internet would give voices to the voiceless and choices to the choiceless. It was a new media frontier where consumers could create and curate content according to their own tastes and desires. No longer was television the sole source of your daily entertainment. Suddenly, you could read whatever you pleased and tweet whenever you sneezed. Isn’t it great? We can share new ideas and opinions with ease. Even your boss agrees, let’s retweet and reshare this with him overseas!

The internet ushered in a great awakening in this new age of information. Politicians and philosophers from ancient Rome could only dream of the freedom that instant access to unlimited information would one day bring to all humankind. What they could not foresee is how the internet would unleash a new kind of tyranny, one that would eclipse any government’s own misuse in the name of security. Instant access does not guarantee self-improvement, only the promise of easy gratification. Unlimited information does not reveal deeper truth, only an endless road of distractions. Without education, the internet is a tyranny of the mind. Without purpose, it is a prison.

\(\begin{align} \large\textit{Then you will know the truth,} \\ \large\textit{and the truth will set you free.}\\ \textsf{—John, 8:32 (100 A.D., est.)}\end{align}\)“A prison?” you might say. “Why, it’s full of shiny gadgets, great entertainment, and people who agree with me. That doesn’t sound so bad!” Those shiny gadgets are Skinner boxes. The entertainment? Viral memes, waiting to infect your mind and eat your attention span. Those other people? They’re just reflections who echo our opinions, inflate our egos, and confirm our biases. The machines are very good at keeping us fat and happy. The best part is, we don’t even need to ask. They can model our habits, predict our behavior, anticipate our desires. They can practically read our minds.

Not only can AI anticipate our desires, it can trigger our impulses. And if AI can stimulate our craving for sugar by activating a group of pixels in the correct sequence, why stop at predicting the price of sugar when we can train it to influence future demand? As long as we’re entertaining wild conspiracies, what prevents AI from generating fake news to influence public opinion, or helping elect leaders who are friendly to automation? Whether self-acting or sponsored by old-fashioned capitalism, AI has the potential to metastasize.

All this sounds rather alarmist, and perhaps it is. “Never trust anything you read on the internet,” the adults used to say. The problem is, the age of automation offers opportunity and oppression, education and entertainment, truth and fiction, all in equal measure. The problem is, each of these things begins to look exactly like the other. When information is cheap to produce and free to consume, there is no incentive for it to be trustworthy. How should a color-blind chap in the Matrix know which pill to swallow? As it happens, when dealing with black boxes that can read your mind, the problem becomes surprisingly difficult.

“If our brains were simple enough for us to understand them, we’d be so simple that we couldn’t.”

—Ian Stewart, The Collapse of Chaos (1994)

Researchers have poured millions of dollars into an area of machine learning known as explainable AI. So far as we can tell, it can’t be explained very well. Sure, we know how to build these models using GPUs and terabytes of data. We know fancy words like backpropagation, convolutions, and hyperparameters. We know how to poke and prod them, try a zillion different parameters, and sometimes they get more accurate. And we know that they work. For most people that is enough. But why does deep learning work so well? And why does one AI classify Amy as a threat to society and Sam as an upstanding citizen? Because her face looks kind of “criminal”?

Automated criminality inference based on facial images is a real thing, and not just in China. Although many flaws exist in criminality prediction, the underlying assumptions are probably correct. With certain technologies, if it works, somebody will find a way to use it. But even if AIs could explain their logic, the problem is not the algorithms themselves. The problem is those who are willing to apply them, regardless of whether they are right, in either the technical or the ethical sense. If some algorithm is 35% confident you will default on a home loan, the bank isn’t going to debug their lending bot until it starts losing money. The incentives are not aligned in your favor.

Believe in Bias


If you think discrimination is bad today, just wait until the machines take over. They will discriminate based on the shade of your iris, the shape of your brow, the size of a tattoo, or any arbitrary collection of low-level traits whose presence triggers a subtle bias. Whether or not such traits are truly predictive will not matter unless those who benefit have an incentive to fix the model. For most applications, AI just needs to be good enough to yield a positive marginal utility. Barring blatant discrimination on certain parameters like sex or skin color, most biases will fly under the radar.

Treating the world as software promotes fantasies of control. And the best kind of control is control without responsibility. Our unique position as authors of software used by millions gives us power, but we don’t accept that this should make us accountable. We’re programmers—who else is going to write the software that runs the world? To put it plainly, we are surprised that people seem to get mad at us for trying to help.

Fortunately we are smart people and have found a way out of this predicament. Instead of relying on algorithms, which we can be accused of manipulating for our benefit, we have turned to machine learning, an ingenious way of disclaiming responsibility for anything. Machine learning is like money laundering for bias. It’s a clean, mathematical apparatus that gives the status quo the aura of logical inevitability.

—Maciej Ceglowski, On The Moral Economy of Tech (2016)

As experience has shown, preventing bias when training an AI is difficult enough, never mind verifying its neutrality as a subject - if you even know the AI exists to begin with. But let’s say you have prior knowledge, motive, and a deep background in statistics. If there is an open API and the owner is not careful, you might be able to steal the model, or test it yourself. That’s a lot of “if”s. The average end-user has no hope of ever verifying the fairness of an AI, and no motive to do so unless they are unfairly targeted. If you are unfairly targeted by an AI, the incentives are not in your favor.
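To make the “test it yourself” scenario concrete, here is a minimal sketch of how an outsider with access to an open API and a statistics background might probe a black-box model for disparate treatment. Everything in it is hypothetical: the score() function stands in for a real endpoint, and the fields and the hidden penalty it encodes are invented so the audit has something to find.

```python
# A hypothetical audit: query the model with matched pairs of applicants that
# differ only in one sensitive attribute, and compare the scores.
import random
import statistics

def score(applicant: dict) -> float:
    """Stand-in for a remote scoring API; replace with a real API call."""
    base = 0.5 + 0.3 * applicant["years_experience"] / 20
    # Invented hidden bias, so the sketch has something to detect:
    return base - (0.05 if applicant["group"] == "B" else 0.0)

random.seed(1)
gaps = []
for _ in range(1_000):
    profile = {"years_experience": random.randint(0, 20)}
    a = score({**profile, "group": "A"})
    b = score({**profile, "group": "B"})
    gaps.append(a - b)

print(f"mean score gap (A minus B): {statistics.mean(gaps):+.3f}")
# A consistent nonzero gap on otherwise identical profiles is evidence of
# disparate treatment. As noted above, that is already a lot of "if"s.
```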

\(\begin{align} \large\textit{Trust, but verify.}\\ \textsf{—}\href{https://en.wikipedia.org/wiki/Trust,_but_verify}{\textsf{Russian Proverb}}\end{align}\)Suppose you are unfairly targeted by an AI. The egregious cases will be tested prior to release to avoid the appearance of discrimination. If you suspect yourself to be the victim of unfair discrimination in an AI decision, it will be a tough case to prove. First you will need to prove the AI exists and had a significant influence in making the decision. Then you will need to establish a statistically significant number of samples where prior discrimination occurred. Good luck getting a subpoena for terabytes of anonymized PII. Finally, you will need to justify why the decision was unfair and how it caused actual harm.

The problem is not algorithms. The problem is the people training AI, and the people they are trained to imitate. The people training life-critical systems are college dropouts and JavaScript developers with something called a “nanodegree” in self-driving cars. If you’re lucky, maybe some of them have an actual degree in statistics or something. Whether out of ignorance or malice, the developers of AI will make mistakes in our headlong pursuit of autonomy. There needs to be a minimum of regulatory oversight for AI that can take a human life. Or at the very least, a certifying body for data scientists, like medical boards and bar associations. In an industry that embraces agility, the bureaucratic cost of these options is highly unattractive.

\(\begin{align} \large\textit{Past performance does not} \\ \large\textit{guarantee future results.}\\ \textsf{—}\href{https://www.sec.gov/news/press/2003-122.htm}{\textsf{Mandatory SEC Disclosure}} \textsf{ (2003)}\end{align}\)By far the thornier problem is the data used to train AI. Most AIs today are trained on human data. Imperfect data used to predict the outcome of a scenario no human has exactly seen before, produced by conditions we can only hope are similar during operation. We take great pains to ensure that the data used to train AI comes from the same distribution we expect to see in future operation. There is an entire scientific discipline devoted to correctly sampling, cleaning, and preparing data for machine learning. However, data science departments have budgets limited by economics. And enough data to predict accurate results is seldom enough to prevent unwanted bias.

In order to prevent unwanted bias, we should understand where it comes from. “Bias” in the statistical sense is not a bad word. Bias is just a property of an estimator. A biased estimator is a function which tends to over- or under-estimate the value of a parameter. You can have a biased estimator that is still accurate. You can have a biased estimator that is inaccurate. You can have a biased estimator that is precise. And you can have one that is imprecise. Though you are seldom lucky enough to have an estimator that is perfectly unbiased and precise. A frequent compromise in machine learning is trading variance for bias. Completely eliminating bias often requires sacrificing precision, and high precision usually comes at the price of additional bias.2
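To make that compromise concrete, here is a minimal sketch (Python with NumPy) comparing two estimators of a mean: the plain sample mean, which is unbiased, and a shrinkage estimator that pulls its estimate toward zero, accepting a little bias in exchange for less variance. The true mean, sample size, and shrinkage weight are made-up numbers for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
true_mean, n_samples, n_trials = 2.0, 10, 10_000
shrinkage = 0.8  # hypothetical weight; 1.0 recovers the unbiased estimator

plain, shrunk = [], []
for _ in range(n_trials):
    x = rng.normal(loc=true_mean, scale=3.0, size=n_samples)
    plain.append(x.mean())               # unbiased estimate of the mean
    shrunk.append(shrinkage * x.mean())  # biased toward zero, lower variance

for name, est in [("sample mean", plain), ("shrunken mean", shrunk)]:
    est = np.asarray(est)
    print(f"{name:>13}: bias={est.mean() - true_mean:+.3f}, "
          f"variance={est.var():.3f}, mse={np.mean((est - true_mean) ** 2):.3f}")
```

On these made-up numbers the biased estimator actually wins on mean squared error, which is exactly the trade described above.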


             Low Variance          High Variance
             (High Precision)      (Low Precision)
Low Bias     Most Accurate
High Bias                          Least Accurate


Suppose that a company has a slight bias towards hiring Asians for technical roles. The following words in this paragraph are completely hypothetical, although there is evidence to suggest Asians are disproportionately well-compensated in some professions. Asians are smart. Asians work hard. Smart companies want to hire smart, hard working people to build smarter products, and will pay them commensurately. Whether or not Asian bias exists, the results of hiring for diversity have higher variance on a number of key performance metrics. Since hiring more Asians, productivity and profits have soared. As a business owner, what would you do?3


[Figure: three Gaussian densities with μ = 0.7, σ² = 0.2; μ = 0.3, σ² = 1.0; and μ = 0.0, σ² = 5.0.]
Which would you prefer: low bias and high precision, or no bias and low(er) precision?
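Here is a minimal sketch that regenerates the figure above from its parameters, assuming matplotlib and SciPy are available: three Gaussian densities with means 0.7, 0.3, and 0.0 and variances 0.2, 1.0, and 5.0.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

x = np.linspace(-5, 5, 500)
for mu, var in [(0.7, 0.2), (0.3, 1.0), (0.0, 5.0)]:
    # Density of a normal distribution with the given mean and variance.
    plt.plot(x, norm.pdf(x, loc=mu, scale=np.sqrt(var)),
             label=f"μ = {mu}, σ² = {var}")
plt.xlabel("x")
plt.ylabel("Density")
plt.legend()
plt.show()
```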


In many ways, bias is a valuable heuristic. It allows us to encode large amounts of categorical information and take action without shuffling mountains of data whenever we need to make a decision. Even when bias predicts falsely, the cost of a false positive may outweigh the cost of a false negative on average. Imagine a hiring bot rejects a West Samoan programmer, who is a genius-level engineer. While she would have made an excellent hire, if the metrics can’t properly estimate the value of a candidate, the safest strategy is to hire from a well-known population over one with unproven or uncertain quality.
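That calculus can be made explicit with a toy example. Using invented numbers, a risk-averse decision rule that penalizes uncertainty prefers the well-known pool even when the unproven pool looks better on average:

```python
# Hypothetical candidate pools: the decision maker knows the first one well,
# and has only a noisy estimate of the second.
known_pool    = {"expected_value": 0.60, "uncertainty": 0.05}
unproven_pool = {"expected_value": 0.70, "uncertainty": 0.40}

risk_aversion = 1.0  # how heavily uncertainty is penalized (made up)

def risk_adjusted(pool: dict) -> float:
    # Mean-minus-penalty criterion: expected value less a multiple of the
    # uncertainty around that estimate.
    return pool["expected_value"] - risk_aversion * pool["uncertainty"]

print(f"known pool   : {risk_adjusted(known_pool):.2f}")     # 0.55
print(f"unproven pool: {risk_adjusted(unproven_pool):.2f}")  # 0.30, despite the
                                                             # higher expected value
```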

Society thrives on social bias. It is the glue that keeps cultures and organizations intact, in the midst of sweeping globalization. Many have biases of athletic prowess, charismatic personality, or academic aptitude. We are comfortable with these biases, proud of our tolerance and sophistication in applying them. Some have biases of religious affiliation, ethnic background or physical similarity. We call these biases shameful, whilst secretly holding them ourselves. But now we can quantify bias, and the numbers do not paint a pretty picture. There will always be more and less accurate biases. But we must exercise caution. For bias, accurate or contrived, can have long-lasting effects on a population.

Human decision makers are prone to hundreds of cognitive biases. Whether relevant, or remotely accurate in any objective measure, bias can have negative and positive effects on a human population. The trouble with bias in prediction is that any success is deceptive. Through luck or skill4, whenever someone is slightly successful at prediction, they invariably want to exploit their newfound predictive abilities. And exploitation is a whole new ballgame.

For most of ML, the training data is a given, often presumed to be representative of the data against which the prediction model will be deployed, but not much else. With a few notable exceptions, ML abstracts away from the data generating mechanism, and hence sees the data as raw material from which predictions are to be extracted. Indeed, machine learning generally lacks the vocabulary to capture the distinction between observational data and randomized data…

Most of the prediction literature assumes that predictions are made by a passive observer with no influence in the phenomenon. On the other hand, most prediction systems are used to make decisions about how to intervene in a phenomenon. Often, the assumption of non-influence is quite reasonable — say if we predict whether or not it will rain in order to determine if we should carry an umbrella. In this case, whether or not we decide to carry an umbrella clearly doesn’t affect the weather. But at other times, matters are less clear…

―Omkar Muralidharan, et al., Causality in machine learning (2017)

Exploitation requires a deep understanding of cause and effect, or deep reserves of luck. This subject is challenging even for the scientifically trained, and one of the toughest problems in AI. There is an outstanding body of work which explores the role of causality in machine learning. In short, the same predictive machinery that landed humans on the moon is surprisingly easy to derail. The same biases that impair our reasoning make it easy to contaminate, for example, AI models trained by humans, on human data, for other humans to interpret.

Long after machines are calling the shots, our heuristic biases will continue to deflect their ballistic trajectories. Although we may never eliminate human bias, we can ensure machines are less prone to statistical ones like selection and verification bias. Those training AI in a new domain should be asking themselves three important questions:

  1. Are we really measuring what we want to measure? (i.e., test validity)
    • Many advertisers try to maximize clicks. This is a losing battle.
    • Objectives may change over the production lifetime of a model.
    • A poorly chosen objective can have unintended consequences.
  2. Is the training data accurate and free from hidden bias? (i.e., internal validity)
    • People constantly forget (or conveniently overlook) confounds.
    • If the data generator is biased, the model will encode its bias.
    • The method of sampling may have hidden biases.
  3. Does the training data generalize well in practice? (i.e., external validity; see the sketch after this list)
    • Maybe the training data has grown stale over time.
    • Maybe the model is missing data on some key demographic.
    • Maybe the true population is not the population we bargained for.
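As promised in the third item, here is a minimal sketch of one such check: compare the distribution of a single feature in the training set against fresh production data, and flag drift before it masquerades as statistical error. The feature, the sample sizes, and the significance threshold are all hypothetical.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
train_age = rng.normal(35, 8, size=5_000)        # ages seen during training
production_age = rng.normal(42, 10, size=5_000)  # ages seen after deployment

# Two-sample Kolmogorov-Smirnov test: are these drawn from the same distribution?
statistic, p_value = ks_2samp(train_age, production_age)
if p_value < 0.01:
    print(f"Distribution shift detected (KS={statistic:.3f}, p={p_value:.1e}); "
          "the training data may no longer represent the true population.")
else:
    print("No significant shift detected; external validity looks plausible.")
```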

These questions are only clues in the search for unwanted bias. By eliminating sources of statistical bias, we can guarantee that human biases are apparent for what they are, not hiding as statistical errors in disguise. But we must not complacently check lists and perform t-tests. AI promises great predictive power, but with great power comes great responsibility. And responsible machine learning requires us to analyze human incentives, vigorously challenge our assumptions and constantly evaluate the validity of trained models.

Bias is an inextricable part of the world - every die is slightly loaded, every deck slightly stacked in one way or another. Machine learning just formalizes bias, blessing it under the auspices of validity. Bias can be found everywhere from the smallest neuron in a neural network, to the largest ensemble in a prize-winning Kaggle submission. Neural networks are essentially computing the sum of various biases - as the network sees more data, it updates the biases. Over time, some grow stronger and some grow weaker. Like their biological cousins, artificial neurons are coincidence detectors - they cannot distinguish association from causation. Intelligence is hard-wired to attribute association to causation, causing all manner of false rituals and superstitious behavior.

Fifty thousand years ago there were these three guys spread out across the plain and they each heard something rustling in the grass. The first one thought it was a tiger, and he ran like hell, and it was a tiger but the guy got away. The second one thought the rustling was a tiger and he ran like hell, but it was only the wind and his friends all laughed at him for being such a chickenshit. But the third guy thought it was only the wind, so he shrugged it off and the tiger had him for dinner. And the same thing happened a million times across ten thousand generations - and after a while everyone was seeing tigers in the grass even when there weren’t any tigers, because even chickenshits have more kids than corpses do. And from those humble beginnings we learn to see faces in the clouds and portents in the stars, to see agency in randomness, because natural selection favours the paranoid. Even here in the 21st century we can make people more honest just by scribbling a pair of eyes on the wall with a Sharpie. Even now we are wired to believe that unseen things are watching us.

―Peter Watts, Echopraxia (2015)
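The artificial cousins behave the same way. Below is a minimal sketch, with an invented data-generating story: a single logistic neuron (a weighted sum, a bias term, and a squashing function) trained by gradient descent learns a strong weight for a feature that merely co-occurs with the label and causes nothing.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
cause = rng.integers(0, 2, n)                      # what actually drives the label
label = cause                                      # outcome determined by the cause
bystander = np.where(rng.random(n) < 0.8, cause,   # co-occurs with the cause 80% of
                     rng.integers(0, 2, n))        # the time, causes nothing itself
x = bystander.astype(float)

w, b, lr = 0.0, 0.0, 0.5
for _ in range(1_000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))         # the neuron: sum, bias, squash
    w -= lr * np.mean((p - label) * x)             # gradient step on log-loss
    b -= lr * np.mean(p - label)

print(f"learned weight for the bystander feature: {w:.2f}")
# The weight comes out large and positive: to the neuron, association is all
# there is; it cannot tell a cause from a coincidence.
```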

There will always be more and less accurate biases, depending on what we are measuring, and what outcome we hope to predict. Even so, machine learning will only reflect the data we show it, not the possible confounds from an unseen parameter, nor the possibilities if the world were a different place. If we do not probe these hypotheses, we are doomed to flee tigers in the grass, build monuments to false causes, and leave greatness in poverty. How many Lincolns were shot, or Einsteins put to the sword in the name of some false bias?

There is a dangerous course ahead on the path to autonomy. If we maintain the bias that only those who produce capital deserve to receive it, today’s economic inequality will grow exponentially worse. In the same way that genetic or geographic origin once predetermined a human’s social mobility, economic fitness will determine one’s financial security. In such a future, only those who are fast enough, smart enough, or wealthy enough to accelerate the pace of automation will enjoy the prosperity and wellbeing it guarantees. This is a heavy price to pay for a small increase in convenience and safety.

This is not a matter of social welfare. This is a matter of prolonging our competitive advantage against automation. By applying the same biases from evolutionary biology to education and employment, we are vastly underutilizing human potential. We should seek every opportunity to provide creative agency and professional opportunity to people from all walks of life. We should prefer deaf applicants to work in noisy environments, and let elderly people take leadership positions. We should empower those with mental and physical disabilities to find roles where they can utilize their skills effectively. We should incentivize high-performers in low-paying jobs from teaching to healthcare, and give them tools to become more effective at school and in the workplace.

Work in Progress


AI sometimes gets a bad rap. We often hear about AI in very competitive terms: whenever bots are not stealing our jobs or out-maneuvering us at chess, they are badly malfunctioning or causing some intrusion of privacy. And while it is important to maintain a healthy skepticism of the dangers that AI poses to our society, I think we will look back at many early applications as lacking sufficient imagination.

The Independents, rooted in the farms and small towns of the West, were innovators, but of a conceptual kind, not the technical kind à la Alexander Bell…They intuited that the telephone’s paramount value was not as a better version of the telegraph or a more efficient means of commerce, but as the first social technology…

Typically, the rural telephone systems were giant party lines, allowing a whole community to chat with or listen to one another. Obviously there was no privacy, but there were benefits to communal telephony other than secure person-to-person communications. Farmers would use the telephone lines to carry their own musical performances…

And so, while the Bell Company may have invented the telephone, it clearly didn’t perceive the full spectrum of its uses. This is such a common affliction that we might name it “founder’s myopia”. Again and again in the development of technology, full appreciation of an invention’s potential importance falls to others…

—Tim Wu, The Master Switch: The Rise and Fall of Information Empires

Technologists typically want to settle old scores. Greek astronomers invented primitive computers to foretell celestial movement, but never dreamt their descendants would use them to sail the heavens. The engineers who developed radar never intended microwaves to heat food. If you have a general algorithm which learns to maximize reward by repeatedly choosing from a set of actions, then clearly you have an algorithm which prints money.5 But more importantly, if you have such a technology, then you have a new form of intelligent life. Why teach it to print paper, when we could be teaching it to write books?

The currency of the future is data, and the computational resources to mine it for insights. To say that money will vanish in the post-scarcity economy would be hubris, but printing money is just a side-effect, a parlor trick compared to the possible applications which AI affords. You might as well print paperclips and trade them for timeshares on the cloud. Our goal should be improving the lives of human beings, and if money is a necessary means to do so, then let’s print money. Although I suspect printing money is like handing out fish. If our goal is truly improving the lives of human beings rather than manufacturing consumers, we will need to give people a stable livelihood and meaningful ways to pursue happiness in the age of automation.

The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.

—J. C. R. Licklider, Man-Computer Symbiosis (1960)

One of the major limitations of user interfaces is bandwidth - keyboards and screens can only exchange so much information with their users. But today’s computers have the ability to interact with their environment in brilliant new ways. From self-flying drones to virtual patients, and home appliances to smart assistants, machines are becoming increasingly perceptive, and increasingly conversant. Machines can see, hear, and understand natural language. They can recognize faces and speech, anticipate our intentions and assist with increasingly sophisticated tasks. We call these capabilities “artificial intelligence”. But a more apt name might be “augmented intelligence”.

There are thousands of exciting AI applications, from tracking the spread of infectious diseases to detecting fake news, from brain-computer interfaces for coma patients, to automated hypothesis testing for scientists. Applications that are transforming our relationship with technology and improving the lives of billions of people on earth. The problems we encounter in AI are the same problems we have been struggling with for the greater part of the 20th century. Education. Equal opportunity. Employment. The democratization of technology. Solving these problems will require a more comprehensive view of AI, one that transcends just classification, prediction or automation.

Progress begins when we stop using algorithms to just predict people’s habits, and start teaching them new ones. When we give people a fighting chance, by retraining those whose jobs are threatened by automation. When we identify where humans show promise and teach them how to improve. When we use AI to prevent diseases rather than designing drugs to treat symptoms. Progress begins when we start fixing broken systems rather than exploiting their weaknesses. If you want to contribute to the survival of our species, get out of the prediction game, and get into the business of making progress.

Discuss this post on HN!

Footnotes

  1. Arguably China’s most famous living sci-fi writer. 

  2. Pedro Domingos, A Few Useful Things to Know about Machine Learning

  3. I have no idea whether the prior assertions are true. But a careful reading will reveal no implied causal relationship between “bias”, “productivity” and “profit”. The point is, we can replace “bias”, “productivity” and “profit” with any variable X, Y and Z. You control X, and observe the effect on Y and Z. Suppose X is positively correlated with Y and Z, i.e. higher values of X correspond to frequently higher values of Y and Z. Lower values correspond with frequently lower Y and Z, with high variance for low values of X. If we want to maximize Z, what is our best strategy? 

  4. Although usually the former. 

  5. Many clever but unimaginative people contracted the same idea at once, so one’s chances of developing a predictive advantage, and their profits from exploiting it, are much slimmer.