Which jobs are the AIs coming for?

I want to walk through a framework I use to understand how the AI work I've done and supported over the years impacts my coworkers. I build these systems because the people I work with benefit from them, and in the main that's why other people build them too. There are nefarious reasons to build AIs as well, but most people won't ever see a nefarious AI. They'll see something built by someone like me, in theory to make their job easier.

This isn't a technical discussion, and I'm definitely not proposing an ethical framework. My framework is about the kinds of decisions AIs are designed to automate. What we expect out of intelligence is good judgment, which means good reasons and a valid conclusion. An artificial intelligence is subject to the same standards. We want a useful prediction, at least, and ideally good reasons too. This post is about how different jobs use different kinds of reasoning, and which kinds of reasoning are easier to automate.

We automate judgment when we think a machine can do it more reliably than a human. Judgments can be hard to make for a lot of reasons. Making the same three judgments about the same list of cases day after day is tedious and repetitive, and a bored human will eventually make the wrong choice. That's why we trust machines with these judgments: they won't get bored, or creative, at the critical moment. They aren't alive, so they simply don't care how boring or tedious the judgment is.

Humans are amazing generators of predictions and theories. We're also often unreliable and unpredictable, and in my own experience we tire pretty easily. Machines work at finer precision, analyze data faster, and can use signals humans simply can't read without a machine's help. When we automate video recommendations, for example, we can feed far more records of preferences into an AI than we could into a human. This doesn't make the AI better than a human at every kind of judgment needed to make a good choice. But the AI is far better at matching the preferences of people who meet certain criteria and surfacing suggestions you haven't seen. Even the best neighborhood video store employee could only hold the preferences of their 1,000-or-so most-regular patrons in their head. I find that skill admirable; you may think it's a head full of useless trivia. Either way, it's a fact that a machine can now usually do that kind of recommendation better.
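
To make this concrete, here's a toy sketch in Python (with invented rental data; this isn't any particular recommender's real algorithm) of the kind of preference matching described above: find the patrons whose history most overlaps with yours, and suggest what they liked that you haven't seen.

```python
# Toy preference-matching recommender over invented rental histories.
from collections import Counter

rentals = {
    "alice": {"Alien", "Blade Runner", "Heat"},
    "bob":   {"Alien", "Blade Runner", "The Thing"},
    "carol": {"Heat", "Casino", "The Thing"},
}

def recommend(patron, k=2):
    history = rentals[patron]
    # Rank the other patrons by how many titles they share with us.
    neighbors = sorted(
        (p for p in rentals if p != patron),
        key=lambda p: len(rentals[p] & history),
        reverse=True,
    )
    # Tally the titles our closest neighbors rented that we haven't seen.
    votes = Counter()
    for p in neighbors[:k]:
        votes.update(rentals[p] - history)
    return [title for title, _ in votes.most_common()]

print(recommend("alice"))  # ['The Thing', 'Casino']
```

The point isn't the code; it's that once preferences are data, this judgment scales to millions of patrons instead of the thousand a good clerk could hold in their head.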

There are lots of species of judgment: Predictions, recommendations, explanations, forecasts, hypotheses and deductions are just a handful of examples. The study of good and bad reasoning is historically part of Logic, which identifies three kinds of reasoning: Inductive, Deductive and Abductive, each of which I'll describe below. Complex judgments like scientific theories, great works of art, or big software systems will use all three kinds of reasoning. Humans are good at fluidly moving back and forth between each kind of logic, plus probably a couple other kinds of inference we haven't spotted yet. AIs are built to simulate these kinds of reasoning using software. Every problem we use an AI for requires a mix of logics to create an output a human would agree is a good judgment, too.

For example, sometimes a problem may require a lot of inductive reasoning and a little deduction to reach a conclusion a human would agree is a good one. Other times the problem may be deeply abductive, and the simulation requires a whole lot of as-yet-undiscovered software voodoo to get a good judgment. Or the problem may be just flat-out deductive from start to finish.

I want to make a final point about judging hard and easy intellectual work using your own models of intelligence. For most of European history the supreme examples of "intelligence" were (a) axiomatic geometry and (b) chess, followed closely by (c) conjugating Latin verbs. All three are excellent examples of deductive inference. More recently, standardized tests from the SAT to IQ tests became "measures of intelligence." Those tests also prioritize deductive reasoning skills. It's fair to say we hold deductive inference in high esteem. And we should; it's hard. But it's not always as hard as the other two known kinds of inference.

Any given inference problem may require more than one kind of reasoning. Deduction may be useless or useful in any given case, whether or not it's considered the paradigm of good judgment by tenured professors of Logic. Inductive skills were once rejected by those professors as just being "good at guessing," and abduction was witchcraft or sorcery or madness or architecture. Consider how random that is as we review my work history.

Inductive Dave

Telemarketer

Someone makes an inductive argument when they claim a general statement is probably true because a lot of similar specific statements are also true. The classic example of an inductive argument is "The likelihood the sun will rise in the East tomorrow is 100% because the sun has risen in the East every day of my life so far." Inductive arguments are good when you know how many relevant samples you need to win the argument, and you have that many or more. They're bad when you don't have enough.
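
To put rough numbers on that, here's a back-of-the-envelope Python sketch using Laplace's rule of succession (chosen purely for illustration): with lots of uniform observations the estimate approaches certainty, and with only a handful it stays appropriately shaky.

```python
# Inductive inference as counting: estimate how likely the next case is
# to go the same way as the cases observed so far.
def inductive_estimate(successes, trials):
    # Laplace's rule of succession: hedge the raw frequency so that an
    # unbroken streak never quite reaches 100%.
    return (successes + 1) / (trials + 2)

sunrises = 20_000  # roughly 55 years of mornings, all with a sunrise
print(inductive_estimate(sunrises, sunrises))  # ~0.99995: a strong argument
print(inductive_estimate(2, 3))                # 0.6: not enough samples yet
```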

One of the first jobs I had as an undergraduate student was working for a professional fundraiser. Charities and NGOs hire professional fundraisers to solicit the public for donations. In 1989, when I had this job, we called people on the phone.

Inductive Dave's job was to carefully rip a sheet of paper out of the phone book and call each person on the sheet until (1) they donated money or (2) they said no or (3) I called three times and got no answer. I dialed a number, read an approved script, and more-or-less answered any questions the person on the other end of the line might have. If they agreed to donate, I'd suggest some numbers, and then note their name and eventual agreed donation down and move on. If they said no, I'd make a note and move on. If they didn't answer, I'd make a note and move on. In the worst-case scenario the person on the other end got abusive, and I'd hand the call to my supervisor and move on.

In 1989 this work was all manual. Much of it has now been automated away completely. Computers dial the number, read the pitch, and respond according to the answer. The computer can also securely process the donation: my coworkers often wrote the donation amount down wrong, somehow pocketed the money themselves, or just plain messed up the final piece of the transaction. We've also automated the lists, so people can opt out of, or be subscribed to, lists for certain charities. This ensures the people who get calls are more likely to want them, and more likely to donate.

Inductive Dave really didn't like this job. For one thing, there's not a lot of creativity allowed. I would try reading the pitch in different accents or with different emphases, but as the evening wore on the enjoyment was limited. Our assumption that three tries was enough to remove someone from the list was basically just a default: there were more than enough people to call, and my coworkers and I often crossed people off after just one or two missed calls. And finally, it was emotionally hard. Some nights you'd have a run of people happy to donate, and other nights everyone you called was outraged.

Turnover was high. People lasted a few months and then tired of the tedium and emotional abuse. I learned to hate phone calls.

Jobs like Inductive Dave's are obvious candidates for an AI, and have been for decades now. They involve very simple inferences. A machine doesn't particularly care that some of the premises might have been presented in an abusive way, either.

Deductive Dave

Research Assistant

Someone makes a deductive argument when they claim a statement is true because that statement is a specific consequence of more general relevant statements already held to be true. The paradigm example is the syllogism, "All humans are mortal, and Socrates is a human, therefore Socrates is mortal." Generally deductions proceed from general statements to specific ones.
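
As a toy illustration of why deduction is the most mechanical of the three kinds of inference, here's a few lines of Python (a tiny forward-chaining sketch, not any real theorem prover) that grind out the Socrates syllogism by simply applying the general rule to the specific fact:

```python
# Deduction as rule-following: a general rule plus a specific fact
# forces a specific conclusion, with no judgment left to make.
facts = {("human", "Socrates")}
rules = [("human", "mortal")]  # "All humans are mortal"

def deduce(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, consequence in rules:
            for kind, individual in list(derived):
                if kind == premise and (consequence, individual) not in derived:
                    derived.add((consequence, individual))
                    changed = True
    return derived

print(("mortal", "Socrates") in deduce(facts, rules))  # True
```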

When I was an undergraduate a professor in the Philosophy department was writing a book on how scientific theories evolve, and he asked me to be his research assistant. My job was to read a large collection of books and articles and provide a weekly summary of the results.

Every week he'd give me a list of stuff to read that was relevant to a specific chapter. Some weeks he was worried about how biogeography had evolved to merge elements of biology and geology; other weeks he was more concerned with all the anxiety that Boyle's law and the steam pump caused early modern physicists. I'd go into the library, read the texts, and on Friday write him out a summary so we could discuss what I'd found. This job was a lot of fun, and I learned a lot.

Each science has its own set of terms, assumptions and expectations. I had to learn how the sentences fit together. My boss wanted to know that any conclusions I'd drawn could be traced back to some precursor premises, that I wasn't just making things up, so I had to document what I found and where. He also wanted to know when terms, conclusions or assumptions crossed disciplines, and whether they kept the same meaning or changed it. There was a schedule too: a bibliography of a specific size, a set of chapters on specific sciences, and a deadline to submit the manuscript before the money for research assistants ran out.

The work was purely deductive. My contribution was to summarize the main points, identify key edge cases, and collect similarities. I was good at deductive inference. I also got tired easily, which is to say distracted by my girlfriend, my schoolwork, the state of the world and weekend ski trips into the Rockies. Sometimes I didn't get my summaries in until Monday, and sometimes they were sloppy and only rarely were they consistent from one week to the next.

This is precisely the problem a modern LLM is designed to fix. Professor Brown might have been much better off hiring an LLM instead of me to do the work he needed, but since it was 1988 nobody had enough compute.

Deductive Dave continued on after this job to get even better at deductive inference, studying logic in graduate school. He scored high on various standardized tests, as would be expected from someone with that much training in deductive inference. But by the time Deductive Dave gave up logic school in 1996, there were already AIs that could do a better job at several kinds of deductive inference.

Abductive Dave

Senior Consultant

Someone makes an abductive argument when they claim a statement is likely true because adding it to an existing set of truths makes them imply even more than they currently do. Abductive arguments are inferences to the best explanation. The classic example is Newton's inference of universal gravitation: a single hypothesis that explains a wide variety of effects, from planetary orbits to apples falling off trees.
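
A crude way to picture inference to the best explanation (a toy scoring sketch in Python with invented observations and hypotheses, not a claim about how Newton actually reasoned): lay out the candidate hypotheses and prefer the one that would account for the most of what you already take to be true.

```python
# Abduction as hypothesis scoring: prefer the explanation that covers
# the most of the existing evidence.
observations = {"apples fall", "the moon orbits the earth", "tides follow the moon"}

hypotheses = {
    "objects seek their natural place": {"apples fall"},
    "a universal attractive force": {"apples fall", "the moon orbits the earth",
                                     "tides follow the moon"},
}

def best_explanation(observations, hypotheses):
    # Score each hypothesis by how much of the evidence it explains.
    return max(hypotheses, key=lambda h: len(hypotheses[h] & observations))

print(best_explanation(observations, hypotheses))  # a universal attractive force
```

The hard part, of course, is generating the candidate hypotheses and knowing which ones even deserve a score; that's the part that resists automation.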

Abductive Dave's job is to help ensure that the models junior consultants develop for his company's clients are big enough to last a couple of years.

Some of these models are actual machine-learning models designed to automate a set of inductive judgments, usually for statistical predictions. Some are data models, which reduce business entities to axioms that can be used to prove things about the company's business. And some are expectations about how processes work at the client. In each case, Abductive Dave provides junior consultants with additional hypotheses that make their models more effective. My specific domain of expertise is models, but my coworkers who design more traditional software systems do the same kind of abductive work: they make an inference to the best explanation even better by offering their engineers hypotheses about the software solution they're building that improve the eventual system.

Abductive inference can get repetitive. A friend of mine once suggested that you could replace all the architects with a list of heuristics everyone knows already and some suggestions for priority. At a certain level of abstraction, what Abductive Dave has to contribute is really mostly conventional wisdom.

Effective abductive inference also requires a good knowledge of the relevant filter criteria, to help choose the right explanations. You can find yourself on the left side of the Dunning-Kruger curve, at the top of Mt. Stupid, if you think your explanation is the best when it's really kind of dumb. Abduction requires you to know which hypotheses to reject and which are at least plausible. So far there's not much automated help for that.

So whose job is in danger?

The short answer is that everyone's job might be in danger. But it's complicated. We can't assume that because a kind of judgment is hard for a human it will be hard for a machine, and not just because we get tired and machines don't. Some kinds of logic are just easier to automate.

Inductive inferences are usually the easiest to automate. We use automated statistical inference everywhere to make reliable judgments, particularly predictions, from seatbelt quality testing to email marketing. Digital transformation projects always eventually replace some manual inductive inferences with automated ones. But even after Inductive Dave watches his job get automated, there's always another set of inductive inferences waiting for his manual effort.

Excellence in deductive inference was thought until recently to be the hallmark of intelligence. The three main achievements of modern AI - beating chess and Go grandmasters, automated mathematical theorem proving, and passing the SATs - mean that what we thought was hard turns out to be something a robot can do. What room does that leave for human deductive excellence? If all the deductive work can now be done by robots, what can someone who scores high on IQ tests or plays great chess contribute? A lot of the angst caused by modern AI comes from people with great deductive skills discovering they're not so special. But while Deductive Dave is threatened by the new LLMs, he's still persuaded he's the smartest one in the room because of his deductive skills.

Good abduction requires a mix of inductive, deductive and abductive inferences, iterated over time. LLMs like ChatGPT can simulate abduction, but only so far: the inferences an LLM draws from its document corpus can look like the AI is adding new truths to the stock of existing ones and inferring better explanations. But the AI is still only deriving specific conclusions from the general premises contained or implied in its document source. This can simulate abduction closely enough for some applications. LLMs might prove useful for Abductive Dave, if they can be tuned to point out gaps in junior consultants' models, for example. So far, however, the iterative process of making good models better still runs on manual judgments.

Whichever Dave's judgment is being simulated, not everything in that judgment can be automated. We'd do better to remember that, and be more precise about which work we think a machine should do and which a human should.