After a three-week summer holiday, I returned to work last Monday. I say “returned to work”, but what I actually did was hop on a train and travel to Turku to attend the Ethicomp 2022 conference at the School of Economics. After two and a half days of hard conferencing, I departed for Oulu on Thursday afternoon, leaving only Friday as a “normal” workday before the weekend. I can imagine, and have in fact experienced, much worse ways to come back after a vacation!
I felt more anxious than usual about my own presentation, scheduled for late afternoon on the first day. This was partially because I like to prepare and rehearse my presentations well in advance, but this time I'd had neither the time to finish my slides before my vacation nor the inclination to work on them during it, so I more or less put my deck together on the train and then rehearsed the talk in my hotel room. On Tuesday I skipped the session immediately before mine to flick through my slides a few more times and make some last-minute tweaks, and I eventually emerged from my mental cocoon reasonably confident that I would get through the whole thing without stumbling.
I still wasn’t that confident about how the presentation would be received, because the paper I was presenting is probably the strangest one I’ve written to date. Long story short, one day I was preparing materials for the introductory lecture of the AI ethics course and explaining the concepts of moral agency (the status of having moral obligations) and patiency (the status of being the subject of moral concerns). Artificial things are traditionally excluded from both categories, but there is an ongoing debate in philosophy of AI about whether a sufficiently advanced AI system could qualify as a moral agent and/or patient.
The idea that struck me was that if we let go of (organic) life as an analogy and view AI systems as cultural artifacts instead, we can sidestep the whole debate on whether AI can become sentient/conscious/whatever and make the moral patiency question a good deal more relevant to practical AI ethics in the here and now. After all, many people feel sad when an artifact of great cultural significance is destroyed (think Notre-Dame de Paris), and downright outraged if the destruction is wilful (think the Buddhas of Bamiyan), so it doesn’t seem too much of a stretch to argue that such artifacts have at least something closely related to moral patiency. Could an AI system also qualify as such an artifact? I filed the question in my brain under “ideas to come back to at an opportune moment”.
The moment came in January: I wasn't terribly busy with anything else right after the holidays, Ethicomp had a call for papers open, and I only needed to write a 1500-word extended abstract to pitch my idea. I did wonder if it might be a bit too outlandish, which in retrospect was silly of me, I suppose – philosophers love outlandish ideas! The reviews were in fact fairly enthusiastic, and in the end my presentation at the conference was also well received. I was even able to have some fun with it, which is not something I often manage with my conference talks, and I soon got over my nagging feeling of being an impostor, a lowly computer scientist who arrogantly thinks he's qualified to talk philosophy.
In retrospect, I also have to say I did manage to turn that extended abstract into a pretty well-written full paper! It's not officially published yet, but it argues that 1) yes, AI systems can be artifacts of considerable cultural significance and therefore intrinsically worthy of preservation, 2) they constitute a category of artifact that cannot be subsumed under a broader category without losing essential information about their special nature, and 3) this special nature should be taken into account when deciding how to preserve them. The argumentation is fairly informal, relying largely on intuition and analogy, but I'm quite proud of the way it's built and presented nonetheless. Sure, the paper is only tangentially related to my daily work and is likely to be a total one-off, but even the one-offs can sometimes have a bigger impact than you'd expect – there's another one of mine, also an ethics paper, that was published 15 years ago but is still getting citations.
Apart from surviving my own presentation, for me the highlight of the first day, and indeed the whole conference, was the keynote "Scaling Responsible Innovation" by Johnny Søraker. I'd met Johnny before on a couple of occasions, originally at the ECAP 2006 conference in Trondheim where he was one of the organisers, but hadn't seen him for ages. Turns out he's now working as an AI ethicist for Google, which the more cynically minded among us might remark sounds like a contradiction in terms, but be that as it may, he gave an insightful and entertaining talk on the challenges SMEs face when trying to do responsible innovation and how they can address them. I particularly liked the idea of having an "interrupt": someone who is kept informed of everything going on in the company and has been trained to spot potential ethics issues. The obvious advantage is that it doesn't matter how convoluted or ad hoc the innovation process is – as long as there is this one node through which everything passes at some point, risks can be identified there and brought to the attention of someone qualified to decide how to mitigate them.
Among the regular presentations there were several AI-related ones that I found very interesting. The one that resonated with me the most was Sara Blanco's talk, in which she criticised what might be called a naive, "one-size-fits-all" conception of AI explainability and argued for a more nuanced one that accounts for differences in people's background knowledge and prior beliefs when explanations are formulated. In light of my recent exposure to constructivist theories of learning, which likewise emphasise the effect of the learner's existing knowledge structures on the process of integrating new knowledge into those structures, this made a great deal of sense to me. Outside the realm of AI, I very much enjoyed Reuben Kirkham's talk on how the unusual relationship between academia and industry in computer science affects academic freedom, as well as Michael Kirkpatrick's on the problematic nature of direct-to-consumer genomic testing services such as 23andMe, something I've brought up myself in my data ethics lectures.
The social programme was top-notch too. On Wednesday evening we were first treated to a glass of sparkling wine and some live classical music at the Sibelius Museum, where we had about an hour to roam and explore the collections, which even included some instruments for visitors to try out – I couldn't resist having a go on the Hammond organ, of course. After this we enjoyed a very tasty three-course dinner, with more live music, at the restaurant Grädda next door. From the restaurant we proceeded to a pub for more drinks and chats, and when the pub closed, some of my fellow delegates went to find another one to have a nightcap in, but by that point I was quite ready for bed myself, so I headed straight to my hotel.
This was my first Ethicomp conference, but I certainly hope it wasn't my last. I've always found philosophy conferences highly stimulating, as well as welcoming to people of diverse academic backgrounds, so despite my anxieties, my not being a "proper" philosopher has never been a real issue. After CEPE 2009 I more or less lost touch with the tech ethics community for a whole decade, but recently I've been sort of working my way back in: first there was the special session at IEEE CEC 2019, then Tethics 2021, and now this. Ethicomp in particular is apparently the one conference that everyone in the ethics of computing community wants to go to, and having now been there myself, I can see why. The next one will be in 2024, so I guess I have about a year and a half to come up with another weird-but-compelling idea?