Job security

There’s an old joke about how you can distinguish between theoretical and practical philosophy: if your degree is in practical philosophy, there are practically no jobs available for you, whereas if it’s in theoretical philosophy, it’s not even theoretically possible for you to find a job. I was reminded of this the other day when I was having a lunchtime chat with a colleague who had recently learned of the existence of a vending machine that bakes and dispenses pizzas on request. From this the conversation moved to the broader theme of machines, and particularly artificial intelligence, taking over jobs that previously only humans could perform, such as those that involve designing artefacts.

A specific job that my colleague brought up was architect: how far away are we from the situation where you can just tell an AI to design a building for a given purpose within given parameters and a complete set of plans will come out? This example is interesting, because in architecture – in some architecture at any rate – engineering meets art: the outcome of the process represents a synthesis of practical problem-solving and creative expression, functionality and beauty. Algorithms are good at exploring solution spaces for quantifiable problems, but quantifying the qualities that a work of art is traditionally expected to exhibit is challenging to say the least. Granted, it’s a bit of a cliché, but how exactly does one measure something as abstract as beauty or elegance?
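To make that contrast a bit more concrete, here is a minimal sketch (in Python, with every parameter and scoring term invented purely for illustration) of what exploring a solution space for a quantifiable problem can look like: generate candidate designs at random and keep whichever one scores best against measurable criteria. The interesting part is what's missing from the scoring function: there is no obvious term for beauty or elegance to add to it.

    import random

    # A toy "design" is just a handful of quantifiable parameters
    # (all names, values and ranges here are made up for illustration).
    def random_design():
        return {
            "floor_area_m2": random.uniform(200, 2000),
            "window_ratio": random.uniform(0.1, 0.6),  # glazed fraction of the facade
            "floors": random.randint(1, 10),
        }

    # Measurable objectives are straightforward to score...
    def score(design):
        usable_space = design["floor_area_m2"] * design["floors"]
        daylight = design["window_ratio"] * 100
        heat_loss_penalty = design["window_ratio"] * design["floors"] * 20
        # ...but there is no obvious beauty() or elegance() term to add here.
        return usable_space + daylight - heat_loss_penalty

    # Plain random search over the solution space: try many candidates, keep the best.
    best = max((random_design() for _ in range(10_000)), key=score)
    print(best, score(best))

Real generative-design tools are of course far more sophisticated than this toy search, but they share the same basic requirement: the objective has to be expressible as something you can compute.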

If we follow this train of thought to its logical conclusion, it would seem that the last jobs to go would be the ones driven entirely by self-expression: painter, sculptor, writer, composer, actor, singer, comedian… Athlete, too – we still want to see humans perform feats of strength, speed and skill even though a robot could easily outdo the best of us at many of them. In a sense, these might be the only jobs that can never be completely taken over by machines, because potentially every human individual has something totally unique to express (unless we eventually give up our individuality altogether and meld into some kind of collective superconsciousness). However, it's debatable whether the concept of a job would still have any recognisable meaning in the kind of post-scarcity utopia this scenario seems to imply.

Coming back closer to the present day and my own research on collaborative knowledge discovery, I have actually given some (semi-)serious thought to the idea that one day, perhaps in the not-too-distant future, some of the partners in your collaboration may be AI agents instead of human experts. As AIs become capable of handling more and more complex tasks independently, the role of humans in the process shifts toward determining what tasks need doing in the first place. Applying AI in the future may therefore be less like engineering and more like management, requiring a skill set rather different from today's.

So what do managers do? For one thing, they take responsibility for decisions. Why is this relevant? The case of self-driving cars comes to mind. From a purely utilitarian perspective, autopilots should replace human drivers as soon as it can be shown beyond reasonable doubt that they would make roads safer, but while the possibility remains that an autopilot will make a bad call leading to damage or injury, there are other points of view to consider. Being on the road is always a risk, and it seems to me that our acceptance of that risk is at least partially based on an understanding of the behaviour of the other people we share the road with – a kind of informed consent, so to speak. If an increasing proportion of those other people are replaced by AIs whose decision-making processes may differ radically from those of human drivers, does there come a point where we no longer understand the nature of the risk well enough for our consent to be genuinely informed? Would people prefer a risk that's statistically higher if they felt more confident about their ability to manage it?

On the other side of the responsibility equation there is the question of who is in fact liable when something bad happens. When it's all humans making the decisions, we have established processes for finding this out, but things get more complicated when algorithmic decision-making is involved. I would assume that the more severe the damage, the less happy people are going to be to accept a conclusion that nobody is liable because it was the algorithm's fault and you can't prosecute an algorithm. In response to these concerns, the concepts of algorithmic transparency and accountability have been introduced, elements of which can already be seen in enacted or proposed legislation such as the GDPR and the U.S. Algorithmic Accountability Act.

This might seem to be pointing toward a rather bleak future where the only “serious” professional role left for humans is taking the blame when something goes wrong, but I’m more hopeful than that. What else do managers do? They set goals, and I would argue that in a human society this is something that only humans can do, no matter how advanced the technology we have at our disposal for pursuing those goals, because it’s a matter of values, not means. Similarly, it’s ultimately determined by human values whether a given course of action, no matter how effective it would be in achieving a goal, is ethically permissible. In science, for example, we may eventually reach a point where an AI, given a research question, is capable of designing experiments, carrying them out and evaluating the results all by itself, but this still leaves vacancies for people whose job it is to decide what questions are worth asking and how far we are willing to go to get the answers.

Perhaps it’s the philosophers who will have the last laugh after all?