The world is a stage

Another November, another Tethics! This year, the conference was hosted by the University of Vaasa and co-chaired by Ville Vakkuri, who has appeared several times on my AI ethics course as a guest lecturer. As usual, there were a bunch of other familiar faces as well, so in terms of social interaction, the conference was a nice mix of catching up with old acquaintances and getting to know new ones. Vaasa itself was a new acquaintance for me and quite a lovely one at that, insofar as any Finnish city in November can be described as “lovely”. On and near the university campus there were some cool old red-brick industrial buildings, reminiscent of the Finlayson Area in Tampere.

The conference program had a couple of new elements this year. On the first morning, there was a workshop with three papers that the participants were invited to help improve, but I decided to skip it, because I had my own talk in the first regular session that afternoon and wanted to do some rehearsing. The other new thing was a poster session, which followed immediately after I’d given my presentation, and I ended up chatting for a good while with a doctoral candidate from the University of Turku who’s researching the ethics of autonomous weapons, a topic I’ve had some involvement with since my talk in the Seminar on the Art of Cyber Warfare a year ago.

I also touched upon the subject in my own paper, which had the somewhat provocative title “Death by AI: A Survey of the Literature and Known Incidents”. I’ll write about it in more detail once it’s been officially published – which may well be next year, the CEUR-WS process tends to take its time apparently – but in a nutshell, I searched for academic literature associating AI with death, did the same for fatal AI incidents recorded in public databases, analysed the search results to see what sort of themes emerge from them and put the analyses next to each other to see if there are any interesting observations to be made. Not the most rigorous piece of research out there, but it seemed to engage the audience, and it certainly gave me a lot of ideas for future work.

My session was preceded by a keynote speech by Rachael Garrett, which I have to admit went a bit over my head at times, but I did find her experiments with dancers improvising with robots rather cool. The last session of the day had two papers about AI in education, so that of course was right up my alley. On the second day there was some more interesting AI stuff, a particular highlight being a paper on the perpetuation of gender stereotypes by AI image generators; as a bonus, I got to witness the first-ever use (in my career at least, if not the entire history of academia) of the phrase “slay queen” to comment on a conference presentation. In the afternoon there was a town hall meeting, where it was decided that the next Tethics will be organised by LUT University in Lahti (yay!), and then it was time for Kai Kimppa to conclude the programme with his keynote on the past, present and future of IT ethics research in Finland. Kai’s speech made for a very enjoyable end to the conference, and not just because I got name-checked as one of the “new generation” of Finnish IT ethicists!

The week before the conference I got some exciting news: my docentship application received the rector’s seal of approval, so as of the first of December I’m officially a docent of AI ethics and data ethics in the Faculty of Information Technology and Electrical Engineering, University of Oulu. Feels pretty good! This came just in time for me to put the title in my CV for the winter call of the Research Council of Finland, which closed on the 12th. For a variety of reasons, I didn’t have a whole lot of time and energy to spend on my proposal, so I ended up submitting essentially the same one as last year, with some minor revisions to the research plan and a slightly fuller CV. I suppose I can view this as an experiment of sorts – it will be interesting to see how the evaluator statements compare to the ones I received this year.

At work, things are now starting to calm down a bit towards the end-of-year holidays, but meanwhile, in the world of performing arts it’s getting busy. Last week we had the first proper rehearsals for the Ovllá opera: not just the chorus but director, soloists, conductor, rehearsal pianist, the works. This week there have been no rehearsals, but next week we’re bringing A Christmas Carol back to the stage, and the week after that the opera rehearsals will resume. I also recently got the notification that I’ve been selected into the choir for Beyond the Sky, so I’m three for three for the big 2026 productions I auditioned for back in May.

Working on the opera is an interesting experience, different from The Magic Flute in a couple of major respects. A rather obvious one is that instead of staging one of the most popular operas ever for the nth time, we are now creating something totally new, to be presented to the world for the very first time right here in Oulu. It’s an exciting thought, but at the same time, I’m very much aware that it’s hardly a safe bet. Will it bring in the crowds, not just the hardcore opera lovers? Not that I’ll have to answer to anyone if it doesn’t, but I do feel like I have my own tiny share of artistic ownership of the production and naturally I’m hoping that it will be a success.

The other big difference is in the level of cultural sensitivity constantly present at the rehearsals. The score brings together two very different musical traditions, and the libretto deals with some rather delicate themes; The Magic Flute has its own issues, sure, but at its heart it’s just a silly fairytale set in a fantasy world. With Ovllá, distancing ourselves from the story and dialogue is not an option, and it’s been clear from the get-go that the portrayal of the Sámi people and Sámi culture must be accurate and respectful. To that end, almost everyone in the design team is Sámi, as are the soloists playing Sámi characters.

That said, the rehearsals have been great fun as well as educational. I was already loving the music, and now that we’re starting to get an idea of what the opera is going to look like on stage, I’m getting properly stoked about it. I know already that there will be days when I’ll come home from work and dearly wish I could spend the evening on the couch instead of going to the theatre, but all things considered, a job in academia where the hours are very flexible is probably one of the easier ones to combine with a hobby like this. Besides, performing to an audience and working in a multicultural environment are surely skills that transfer both ways. Highly recommended!

What is an AI vulnerability anyway?

The proceedings of the Tethics 2024 conference have now been published in the CEUR-WS series, and with that, the paper “What Is an AI Vulnerability, and Why Should We Care? Unpacking the Relationship Between AI Security and AI Ethics” by myself and Kimmo Halunen. There was a bit of an emergency regarding the publication during the Christmas break, as CEUR-WS now requires all published papers to include a declaration of whether and how generative AI tools were used in the preparation of the manuscript, and one of the editors was in quite a hurry to collect this information from the authors. Luckily, I happened to see the editor’s email just a few hours after it came, and since we hadn’t used any AI tools to write the paper, the declaration was easy enough to complete. 

As the title implies, the paper examines the concepts of AI vulnerability and security, looking at how they are understood in the context of AI ethics. As it turns out, they are rather vaguely defined, with no clear consensus within the AI ethics community on what counts as an AI vulnerability and what it means for an AI system to be secure against malicious activity. Collections of AI ethics principles generally recognise the importance of security, but do not agree on whether it should be considered a principle in its own right or rather a component of a more generic principle such as non-maleficence. 

One thing that is quite clear is that the way security is viewed by the AI ethics community differs considerably from the view of the traditional cybersecurity community. For one thing, in the latter there is much less ambiguity on the definition of concepts such as vulnerability, but more fundamentally, the two communities have somewhat different ideas of what the role of security is in the first place. One could say that traditionally, security is about protecting the assets of the deployer of a given system, whereas for ethicists, it’s about protecting the rights of individuals affected by the system; an oversimplification, but one that sheds some light on why the concept of AI vulnerability seems so elusive. 

One consequence of this elusive nature is that it’s difficult to accurately gauge the actual real-world impact of AI vulnerabilities as opposed to hypothetical worst-case scenarios. Much of the paper deals with this issue, discussing the results of a study where I looked for reports of AI vulnerabilities that satisfy four inclusion criteria: there must be a documented incident, it must involve deliberate exploitation of a weakness in an AI system, it must have resulted in demonstrable real-world harm, and the exploited vulnerability must be specifically in an AI component of the system. When I searched six different public databases for such reports, I found a grand total of about 40 entries that could be considered at least partially relevant and only six that were fully relevant. 

This is hardly likely to be the whole picture, and the paper discusses a number of factors that may account for the poor yield to a varying degree. On the other hand, incomplete and biased as the results probably are, they may at least be taken to give a rough but realistic idea of the magnitude of the problem. Silver lining? Perhaps, but it’s only a matter of time before the problem grows from a curiosity into something more serious, and it doesn’t exactly help if we don’t have a decent database for collecting information about AI vulnerabilities, or even a clear enough definition of the concept to enable the development of such a database. 

To be fair, the relationship between security and ethics is not as straightforward as it might seem, at least not when it comes to AI. Security is an important ethics requirement for sure, but it may also be at odds with AI ethics principles such as explainability. Another possible complication is conflicting stakeholder interests; an interesting example of this is the case of Nightshade, a method that artists can use to counter the unauthorised use of their works for the training of text-to-image generative AI models. Technically, this is a data poisoning attack exploiting a vulnerability in the training algorithm, but it’s hard to argue that the artist is doing anything legally or morally wrong here. This serves nicely as a demonstration of why we can’t talk about the security of AI systems without considering the sociotechnical context in which those systems exist in the real world. 

In the category of things that gave me stress during the holidays, submitting the generative AI declaration for the paper was a trivial annoyance in comparison with the winter call of the Research Council of Finland, the submission deadline of which was set for the 8th of January. My application was already looking pretty good when I signed off for Christmas, and for the parts I hadn’t yet completed I was able to reuse quite a lot of material from my previous application, but even so, I was so anxious about the deadline that I went back to work for a few hours already on New Year’s Day. In the end, I made the submission with a good 24 hours to spare, but I have a feeling that the Council will be getting a substantial amount of feedback on the call timetable this year.

On the performing arts front, I did two shows of A Christmas Carol last week, with two more to go in February. Several people who have seen me perform have remarked on how much I seem to be enjoying myself on stage – I really am, and I’m glad it shows! Meanwhile, Cassiopeia is busy rehearsing for a series of three concerts with the Kipinät choir from Jyväskylä in mid-March, and later in the spring we’ll be traveling to Linköping, Sweden for the Nordic Student Singers’ Summit. 2026 is also looking potentially very interesting already: Oulu will be one of the European Capitals of Culture, and one of the highlights of the year will be a brand-new opera composed and produced for the occasion. So far, there’s very little information available on who will be performing, but if there’s a call for chorus singers, I’ll definitely be putting my hand up. 

Mission accomplished

The mission being my university pedagogy studies. Yep, I’m now officially done – the final grade for the final part, the teaching practice, was awarded today. I know it’s just the basic studies, but it almost feels like I’ve completed a whole degree. In the concluding seminar four weeks ago, the first in-class assignment was to choose one from a set of cards with pictures of works of art on them and tell everyone else why that one; I went straight for The Garden of Death by Hugo Simberg because frankly, I was feeling pretty dead from basically being in high gear all spring, but there was also some more positive symbolism of planting and growth there. In any case, I’m not going to even consider the possibility of intermediate studies until I’ve taken a gap year.

The ethics course is more or less a wrap, although there are still a few students with some assignments missing. It’s another record year for the course, with 50 registrations and almost 30 completions, around ten more than last year. Partly because of the record numbers, I wasn’t able to keep to the formative assessment schedule I was aiming for, where each learning assignment would have been assessed before the next one is due. There were other issues with the assignments as well – the new format I tried this year was a step forward, but it’s clear that there’s still plenty of room for improvement in terms of reducing the potential gains from using generative AI as a substitute for thinking and learning.

Overall, however, I would say that the teaching practice was a success. The experiments I carried out produced useful data and experience on how to integrate AI tools in various ways into the teaching of AI ethics, and my debating chatbot experiment in particular yielded some very interesting research material. There’s a blog post coming out at some point where I discuss the teaching practice in more detail, and later hopefully also a peer-reviewed publication or two, once I’ve had the time to properly analyse the data and write up the results.

The spring in general has been a mixed bag, with some efforts successful, some not so much. I applied for two big things – a university lecturer position and a Research Council of Finland grant – neither of which I got. On the other hand, I’ve had a series of speaking engagements at various events that all went perfectly well as far as I can tell. I particularly enjoyed the most recent one, an online seminar titled Ethics of AI Hype, where I did my best to put the current generative AI boom into perspective. Truth be told, I’ll jump at any chance to talk gratuitously about the history of computing, but I do also believe that it doesn’t hurt to be reminded of the decades of AI research that took place before anyone had ever heard of such a thing as a large language model.

One event that I can describe with total confidence as a resounding success was the 45th anniversary concert of Cassiopeia. What a privilege it is to be in a choir that’s so skilled and versatile, and such a wonderful community to boot! In a single concert you may hear anything from pop hits to a Cree musical prayer to Mother Earth and from video game themes to a ten-minute-long modern composition commemorating the victims of the MS Estonia disaster. The cherry on top was that the anniversary celebrations coincided almost to the day with my own 45th birthday, so alongside the choir’s milestone, I got to celebrate a personal one in style.

The latest bit of good news (apart from the official conclusion of the pedagogy studies) came just a few days ago: a paper I submitted to this year’s Tethics conference got accepted! Should be a great experience once again; although the location has changed from Turku to Tampere, many of the same people are still involved in one way or another, so I’m looking forward to seeing plenty of familiar faces and catching up with their owners. Also accepted was a proposal for a workshop on tech ethics education, with Ville Vakkuri, Kai-Kristian Kemell, Tero Vartiainen and myself running the show, so I’ll be doing double duty this year, which I don’t mind at all. The reviewers’ suggestions for improving the paper were nothing major and the original camera-ready deadline of June 30 has been pushed back to August 11, so I think I’ll just let it be until after my vacation. The beginning of which, by the way, is barely more than a week away now!

All of the music, all of the magic

The conference proceedings of Tethics 2023 are out now, including the paper I co-authored – always a pleasant feeling to see your work in its final published form. Interestingly, this year the number of papers submitted for review was given in the preface, which I believe hadn’t been the case previously. Turns out the number was 26, so with 13 papers accepted for publication, the acceptance rate was exactly 50%. Nice to know that despite the small scale of the conference, getting accepted wasn’t a foregone conclusion!

The other papers I’ve had in the works recently have not, alas, been so well received. A journal manuscript to which I made a small contribution came back with a “major revision” verdict – with one of the reviewers being, frankly, rather vague and unhelpful – and another in which I’m the sole author got flat out rejected based on input from just one reviewer, which I wasn’t aware could even happen. Granted, the journal I submitted to is outside my usual field, so perhaps the culture is different there, but I would have thought that it would be standard practice in any field to get two reviews minimum. Maybe a second opinion wouldn’t have swayed the editor’s decision – the single reviewer’s criticisms were mostly fair, I suppose, although there were some misunderstandings – but at least I would have felt better about the process.

Oh well, no point in complaining, better divert that energy to figuring out what to do next with the manuscript. I’m leaning toward submitting it to another journal more or less as is, although maybe I’ll need to change the angle a bit, depending on the journal. I haven’t decided on a target yet or even made a shortlist of potential ones, but probably I’ll go with something closer to home this time. I suppose it’s always an issue when you do cross-disciplinary work that it may not be easy to find a publication channel where it fits in naturally.

Another question lacking a definitive answer is exactly when I’m going to be able to do whatever it is that I’ll end up doing with that manuscript. I’d love to have it revised and submitted before the holidays, but with the start of the Christmas break barely over a week away, I very much doubt the realism of that wish. In theory, it would be doable, given that the usual end-of-term flood of exam papers to be marked has dwindled to a trickle, but in practice, I’m too stressed about a couple of other things, namely my university pedagogy studies and my (so far notional) application to the Research Council of Finland.

That’s right, the Academy of Finland has made some changes recently: its official English name is now the Research Council of Finland, and the former September call for applications has been moved to January. Presumably the net impact of both of these on my life is approximately neutral, but I felt like I should mention them all the same. Anyway, I have my Academy Research Fellow application from last year that I should be able to repackage as an Academy Project application without revising the topic or approach in a fundamental way, so hopefully this round will be somewhat easier than some others I can think of.

Meanwhile, the choir had its traditional Christmas concert last Saturday – I got to sing my very first solo with Cassiopeia! – but that was by no means our final performance of the year. Tonight and tomorrow we are doing something rather special with Oulu Sinfonia: two screenings of Chris Columbus’s festive classic Home Alone with the musical score played live. There isn’t a whole lot of singing to do – all of it is in the second half of the film, and much of it is just for the sopranos and altos, who get to play the role of the children’s choir in the church scene – and initially I wasn’t terribly excited about the whole thing, but that changed on Tuesday when we had our first rehearsal with the conductor. Mr Gabriele turned out to be so full of enthusiasm and so good at working with singers that it was an absolute delight to rehearse with him and I’m now actually pretty hyped about the performances. Bring on the Wet Bandits!

I guess that wraps it up for the blog this year. Usually I have at least the days between Christmas and New Year as time off, but this year I’ll be back at “work” already on the 28th, when the process of getting ready for the new run of The Magic Flute kicks off for real. There are only seven rehearsals scheduled for the chorus before opening night, and that includes the dress rehearsal when we are already going to have an audience in the house, but after the two preliminary ones we had in November, I’m already quite confident. It’s frankly amazing how not just the music but also all the stage action had stuck with me through all the idle time since the last performance in February, but I guess that’s what repetition after repetition after repetition will eventually do to you. The one thing I’m not so sure about (for a number of reasons) is the opening scene choreography, but at least there’s something to keep me from getting cocky!

Text, drugs and rock ‘n’ roll: Tethics 2023 and beyond

Well, that’s it for Tethics 2023! I find myself struggling to accept that this was only the second “proper” one I’ve attended: my first one, in 2020, was an all-online event (for obvious reasons), and in 2022 there was no Tethics because Turku was hosting Ethicomp instead. Despite all that, I want to say that I’ve been going to the conference for years, because it just feels right somehow. I suppose you could take it as a testament to the cosy and welcoming atmosphere of the conference that I feel so at home there.

Certainly there’s something to be said for a conference where you can realistically exchange at least a few words with every fellow delegate over the course of a couple of days. (Not that I ever actually do, mingling not being my strongest suit, but in principle I could have.) I’m pretty sure I’ve commented before on the cultural differences I’ve observed between technical and philosophical conferences, but it’s worth reiterating how much more rewarding it is to attend a conference when there’s a genuine and lively discussion about every presentation. Out of all the conferences I’ve ever been to, Tethics is actually a strong candidate for being closest to ideal in that besides having that culture of debate, it’s small enough that you can fit everyone in a regular-sized classroom, and there are people there representing different disciplines and sectors so you get a nice range of diverse viewpoints in the discussion.

The keynote address of the conference was delivered by Olivia Gambelin, founder and CEO of an AI ethics consulting company called Ethical Intelligence. I very much enjoyed her talk, which dealt with the differences between risk-oriented and innovation-oriented approaches to AI ethics and how it’s not about choosing one or the other but about finding the right balance between the two. I particularly liked her characterisation of the traits of ethical AI systems – fairness, transparency etc. – as AI virtues, and the idea that good AI (or indeed any good technology) should, above all, boost human virtues as opposed to capitalising on our vices. My inner cynic can’t help but wonder if there’s enough money in that for virtuous AI to become mainstream, but I’m not ready to give up on humanity just yet.

Among the regular presentations, there were also several that were somehow related to AI ethics, which I of course appreciated, since I’m always on the lookout for new ideas and perspectives in that area. However, the two that most caught my attention were actually both in the category of “now for something completely different”. On the first day, Ville Malinen spoke on the sustainability and public image of sim racing, which occupies its own little niche in the world of sports, related to but distinct from both real-world motor racing and other esports. On the second day, in the last session I was able to attend before I had to go catch my train home, J. Tuomas Harviainen presented a fascinating – as well as rather surprising – case where he and his colleagues had received a dataset of some three million posts from a dark web drug marketplace and faced the problem of how to anonymise it so that it could be safely archived in a research data repository.

Another highlight was my own paper – and I can say this with at least some degree of objectivity, since my own involvement in both the writing and the presentation was relatively small. Taylor Richmond, who was my master’s student and also worked as my assistant for a while, wrote the manuscript at my suggestion, based on the research she did for her M.Sc. thesis. She then got and accepted a job offer from industry, and I figured that it would be up to me to present the paper at the conference, but to my delight and surprise, she insisted on going there to present it herself, even at her own expense. I offered some advice on how to prepare the presentation and some feedback on her slides, but all of the real work was done by her, leaving me free to enjoy the most low-stress conference I’ve ever attended.

The paper itself explores content feed swapping as a potential way of mitigating the harmful effects of filter bubbles on social media platforms. Taylor proposed a concept where a user can click a button to temporarily switch to seeing the feed of the user with the least similar preferences to theirs, exposing them to a radically different view of the world. To test the concept, she carried out an experiment where ten volunteers spent some time browsing a simulated social media platform and answered a survey. The results showed that the feed swap increased the users’ awareness of bias without having a negative impact on their engagement, the latter being a rather crucial consideration if real-world social media companies are to even consider adding such a functionality to their applications. Despite some obvious limitations, it was a seriously impressive effort, as noted by several conference delegates besides me: she designed the experiment, created the social media simulation and analysed the data all by herself, and she did a fine job with the presentation as well. My own contribution, apart from my supervisory role, was basically that I wrote some framing text to help sell the subject matter of the paper to the tech ethics crowd.

Also on the agenda this year was a special session on the future of the Tethics conference. The Future Ethics research group at the Turku School of Economics, which has organised every event so far, is apparently not in a position to commit to doing it again next year, so there was a discussion on finding an alternative host, with Tampere University emerging as the most likely candidate. As much as I’ve enjoyed all of my visits to Turku, I’d certainly appreciate the two hours that this would slice off my one-way travel time! There was also some talk about possibly going more international – attracting more participants from outside the Nordic countries, perhaps hosting the conference outside Finland at some point in the future – but there was a general consensus that in any case the event should remain relatively small and affordable to retain its essence. Personally, I quite like the idea that Oulu could be the host some year, although I don’t know how many others there are here who’d be on board with that.

In the meantime, my top two professional priorities right now are getting more focused on research (with a whole bunch of distractions now happily out of the way) and finishing my university pedagogy studies. It might seem like these are more or less diametrically opposed to one another, but thankfully that’s not the case: I can see potential in both of the remaining courses – teaching practice and research-based teacherhood – for advancing my research interests as well as my pedagogical knowledge. I have a couple of journal manuscripts in the works, one recently submitted and the other undergoing revisions, and I’m involved in a cybersecurity-themed research project where I’ve been looking into AI vulnerabilities from an AI ethics perspective. I’m sure the next distraction is waiting to pounce on me just around the corner, but until it does, I’m going to indulge myself and pretend that I have no work duties other than thinking deep thoughts and making sense of the world.

As usual, there are things happening on the music front as well. The choir currently has its sights set firmly on two big Christmastime projects, but there’s been time for a variety of smaller performances too; a particularly memorable occasion was singing Sogno di Volare, the theme song of the video game Civilization VI, as the recessional music at the wedding ceremony of two choir members. Next year we’ll have the choir’s own 45th anniversary celebrations – and, of course, the new run of The Magic Flute! The first music rehearsal for the latter is scheduled to take place just a couple of weeks from now. Will be interesting to see how much of the music we can still remember, although the real challenge will come in December when we start relearning the choreographies… 

Still alive

I am indeed! Barely, but still. Once again blogging has been forced to take a back seat, but I thought I should do one more post before my vacation – which, happily, is right around the corner. No big deadlines before that, just some exam marking plus a bunch of writing that I can pick up from where I left off when I come back to work in August. Next week will be more like a half week because of the faculty’s staff summer party and the Midsummer weekend, and after that there’s just one week of work left before I’m free. Seems too good to be true! 

The AI ethics course is happily finished by now: lectures given, assignments evaluated, grades entered into Peppi. Again, it was a lot of work, but also rewarding and enjoyable. There are always at least a couple of students who really shine, turning in one excellent assignment submission after another, and those alone are enough to make it all worthwhile. However, a big part of the enjoyment is also that I can use the course as a test lab of sorts, changing things a bit and trying something new each time, seeing what works and what doesn’t. This time I made some changes to the assessment criteria and practices, which seemed to work, so I think I’ll continue in the same direction next year with the teaching development project that I need to do as part of my university pedagogy studies.

Of course, there are always new things happening in the world of AI, so the course contents also need some updating each year. This spring, for obvious reasons, the ethical implications of generative AI tools kept popping up under various course themes, and I also encouraged the students to try ChatGPT or some other such tool at least once to generate text for their assignment submissions. There were certain rules, of course: I told the students that they must document their use of AI, critically examine the AI outputs and take responsibility for everything they submit, including any factual errors or other flaws in AI-generated text. The results of the experiment were a bit of a mixed bag, but at any rate there were some lessons learned, for myself and hopefully for the students as well. If you can’t trust students to use AI ethically on an AI ethics course, then where can you?

The most recent big news related to AI ethics is that the European Parliament voted this week to adopt its position on the upcoming AI Act, so the regulation is moving forward and it may well be that on next year’s course we will be able to tell the students what it looks like in its final form. The parliament appears to have made some substantial changes to the bill, expanding the lists of prohibited and high-risk applications and specifying obligations for general-purpose AI systems while making exemptions for R&D so as not to stifle innovation. It will be extremely interesting to see what the impact of the act will be – on AI development and use, of course, but also on AI regulation elsewhere in the world, since this is very much a pioneering effort globally. 

After my summer holiday I’ll need to hit the ground running, because I’m once again giving some AI ethics lectures as part of a learning analytics summer school. A new thing this year is that I’m also preparing an ethics module for a new Master’s programme in sustainable autonomous systems, a collaboration between my university and the University of Vaasa. I don’t mind the new challenge at all – I took it upon myself more or less voluntarily, after all – but it does mean that my job title is increasingly at odds with what I actually do. Still, I’ve managed to fit in some research as well, and starting in the autumn I’ll even be participating in a proper research project for a change.

One of the highlights of the spring is that I got a paper accepted to Tethics 2023 – or rather, I supervised a student who got a paper accepted, which feels at least as rewarding as if I’d done the research myself, if not more so. In any case, it looks like I’ll be visiting Turku for an ethics conference for the third year running, and I really wouldn’t mind if this became a tradition! I’m even looking forward to the networking aspect, which I’m usually pretty bad at. Somehow ethics conferences are different, and Tethics especially – partially because it’s so small, I suppose, but perhaps also because these people are my tribe?

Musically, the spring term was very successful. After The Magic Flute we appeared in two concerts with Oulu Sinfonia – one of them sold out – performing music by the late great Ennio Morricone. Sadly, we then parted ways with our musical director of many years, which forced some planned events to be cancelled / postponed / scaled down, but everyone seems determined to keep the motor running and overall I feel pretty good about the future of the choir. There will be some big things happening late this year and early the next, including (but not limited to) another run of the opera in January and February. Three out of eleven shows are sold out already, so if you missed it this year, get your ticket now! 

“It belongs in a museum!”

After a three-week summer holiday, I returned to work last Monday. I say “returned to work”, but what I actually did was hop on a train and travel to Turku to attend the Ethicomp 2022 conference at the School of Economics. After two and a half days of hard conferencing, I departed for Oulu on Thursday afternoon, leaving only Friday as a “normal” workday before the weekend. I can imagine, and have in fact experienced, much worse ways to come back after a vacation! 

I felt more anxious than usual about my own presentation, scheduled for late afternoon on the first day. This was partially because I like to prepare and rehearse my presentations well in advance, but this time I hadn’t had time to finish my slides before my vacation nor an inclination to work on them during it, so I more or less put my deck together on the train and then rehearsed the talk in my hotel room. On Tuesday I skipped the session immediately before mine to flick through my slides a few more times and make some last-minute tweaks, and I eventually emerged from my mental cocoon reasonably confident that I would get through the whole thing without stumbling. 

I still wasn’t that confident about how the presentation would be received, because the paper I was presenting is probably the strangest one I’ve written to date. Long story short, one day I was preparing materials for the introductory lecture of the AI ethics course and explaining the concepts of moral agency (the status of having moral obligations) and patiency (the status of being the subject of moral concerns). Artificial things are traditionally excluded from both categories, but there is an ongoing debate in philosophy of AI about whether a sufficiently advanced AI system could qualify as a moral agent and/or patient. 

The idea that struck me was that if we let go of (organic) life as an analogy and view AI systems as cultural artifacts instead, we can sidestep the whole debate on whether AI can become sentient/conscious/whatever and make the moral patiency question a good deal more relevant to practical AI ethics in the here and now. After all, many people feel sad when an artifact of great cultural significance is destroyed (think Notre-Dame de Paris), and downright outraged if the destruction is wilful (think the Buddhas of Bamiyan), so it doesn’t seem too much of a stretch to argue that such artifacts have at least something closely related to moral patiency. Could an AI system also qualify as such an artifact? I filed the question in my brain under “ideas to come back to at an opportune moment”. 

The moment came in January: I wasn’t terribly busy with anything else right after the holidays, Ethicomp had a call for papers open and I only needed to write a 1500-word extended abstract to pitch my idea. I did wonder if it might be a bit too outlandish, which in retrospect was silly of me, I suppose – philosophers love outlandish ideas! The reviews were in fact fairly enthusiastic, and in the end my presentation at the conference was also well received. I was able to have some fun with it even, which is not something I often manage with my conference talks, and I soon got over my nagging feeling of being an impostor, a lowly computer scientist who arrogantly thinks he’s qualified to talk philosophy. 

In retrospect, I also have to say I did manage to turn that extended abstract into a pretty well-written full paper! It’s not officially published yet, but it argues that 1) yes, AI systems can be artifacts of considerable cultural significance and therefore intrinsically worthy of preservation, 2) they constitute a category of artifact that cannot be subsumed under a broader category without losing essential information about their special nature, and 3) this special nature should be taken into account when deciding how to preserve them. The argumentation is fairly informal, relying largely on intuition and analogy, but I’m quite proud of the way it’s built and presented nonetheless. Sure, the paper is only tangentially related to my daily work and is likely to be a total one-off, but even the one-offs can sometimes have a bigger impact than you’d expect – there’s another one of mine, also an ethics paper, that was published 15 years ago but is still getting citations.

Apart from surviving my own presentation, for me the highlight of the first day, and indeed the whole conference, was the keynote Scaling Responsible Innovation by Johnny Søraker. I’d met Johnny before on a couple of occasions, originally at the ECAP 2006 conference in Trondheim where he was one of the organisers, but hadn’t seen him for ages. Turns out he’s now working as an AI ethicist for Google, which the more cynically minded among us might remark sounds like a contradiction in terms, but be that as it may, he gave an insightful and entertaining talk on the challenges faced by SMEs wanting to do responsible innovation and how they can address those challenges. I particularly liked the idea of having an “interrupt”: someone who is kept informed of everything going on in the company and has been trained to spot potential ethics issues. The obvious advantage is that it doesn’t matter how convoluted or ad-hoc the innovation process is – as long as there is this one node through which everything passes at some point, risks can be identified at that point and brought to the attention of someone qualified to make decisions on how to mitigate them. 

Among the regular presentations there were several AI-related ones that I found very interesting. The one that resonated with me the most was Sara Blanco’s talk, in which she criticised what might be called a naive, “one-size-fits-all” conception of AI explainability and argued for a more nuanced one that acknowledges the need to account for differences in background knowledge and prior beliefs in the formulation of explanations. In light of my recent exposure to constructivist theories of learning, which likewise emphasise the effect of the learner’s existing knowledge structures on the process of integrating new knowledge into those structures, this made a great deal of sense to me. Outside the realm of AI, I very much enjoyed Reuben Kirkham’s talk on the impact on academic freedom of the unusual relationship between academia and industry in computer science, as well as Michael Kirkpatrick’s on the problematic nature of direct-to-consumer genomic testing services such as 23andMe, something I’ve brought up myself in my data ethics lectures. 

The social programme was top notch too. On Wednesday evening we were first treated to a glass of sparkling and some live classical music at the Sibelius Museum, where we had about an hour to roam and explore the collections, which even included some instruments for visitors to try out – I couldn’t resist having a go on the Hammond organ, of course. After this we enjoyed a very tasty three-course dinner, with more live music, at restaurant Grädda next door. From the restaurant we proceeded to a pub for more drinks and chats, and when the pub closed, some of my fellow delegates went to find another one to have a nightcap in, but by that point I was quite ready for bed myself so I headed straight to my hotel. 

This was my first Ethicomp conference, but I certainly hope it wasn’t my last. I’ve always found philosophy conferences highly stimulating, as well as welcoming to people of diverse academic backgrounds, so despite my anxieties, me not being a “proper” philosopher has never been a real issue. After CEPE 2009 I more or less lost touch with the tech ethics community for a whole decade, but recently I’ve been sort of working my way back in: first there was the special session at IEEE CEC 2019, then Tethics 2021, and now this. Ethicomp in particular is apparently the one that everyone in the ethics of computing community wants to go to, and having now been there myself, I can see why. The next one will be in 2024, so I guess I have about a year and a half to come up with another weird-but-compelling idea? 

That’s a wrap, folks

A paper I wrote with Alan Smeaton, titled “Privacy-aware sharing and collaborative analysis of personal wellness data: Process model, domain ontology, software system and user trial”, is now published in PLOS ONE. In all likelihood, this will be the last scientific publication to come out of the results of my MSCA fellowship in Dublin, so I’m going to take the risk of sounding overly dramatic and say it kind of feels like the end of an era. It took a while to get the thing published, but with all the more reason it feels good to be finally able to put a bow on that project and move on to other things.

So what’s next? More papers, of course – always more papers. As a matter of fact, the same week that I got the notification of acceptance for the PLOS ONE paper, I also got one for my submission to Ethicomp 2022. As seems to be the procedure in many ethics conferences, the paper was accepted based on an extended abstract and the full paper won’t be peer-reviewed, so as a research merit, this isn’t exactly in the same league as a refereed journal paper. However, since the conference is in Finland, I figured that the expenditure would be justifiable and decided to take this opportunity to pitch an idea I’d been toying with in my head for some time. 

To be quite honest, this was probably the only way I was ever going to write a paper on that idea, since what I have right now is just that: an idea, not the outcome of a serious research effort but simply something I thought might spark an interesting discussion. Since I only needed to write an extended abstract for review purposes, I could propose the idea without a big initial investment of time and effort, so it wouldn’t have been a huge loss if the reviewers had rejected it as altogether too silly, which I was half expecting to happen. However, the reviewers turned out to agree that the idea would be worth discussing, so Turku, here I come again! That’s the beauty of philosophy conferences in my experience – they’re genuinely a forum for discussion, and I’ve never felt excluded despite being more of a computer scientist/engineer myself, which I presume has a lot to do with the fact that philosophers love to get fresh perspectives on things.

The idea itself is basically an out-of-the-box take on the notion of moral patiency of AI systems, and I will talk about it in more detail in another post, probably after the conference. Meanwhile, a follow-up to our Tethics 2021 paper on teaching AI ethics is at the planning stage, and I have the idea for yet another AI ethics paper brewing in my head. Since I returned to Finland and especially since I started working on the AI ethics course, I’ve been trying to raise my profile in this area, and I have to say I’m fairly pleased at how this is turning out. Recently I had a preliminary discussion with my supervisor about applying for a Title of Docent with AI and data ethics as my field of specialisation, although I haven’t actually started preparing my application yet. 

The AI ethics course is now past the halfway point in terms of lecturing, and my own lectures are all done. I started this year’s course with my head full of new ideas from the university pedagogy course I recently completed, and some of them I’ve been able to put to good use, while others have not been so successful. I’ve been trying to encourage the students to participate more during lectures instead of just passively listening, and low-threshold activities such as quick polls seem to work pretty well, but my grand idea of devoting an entire teaching session to a formal debate met with a disappointing response. I don’t very much like the idea of forcing the students to do things they’re not motivated to do or don’t feel comfortable with, but I also don’t have a magic trick for enticing the students out of their comfort zone, so I’m not sure what to do here. I suppose I could settle for the small victories I did manage to win, but I still think that the students would really benefit from an exercise where they have to interact with one another and possibly adopt a position they don’t agree with. Oh well, I have another year now to come up with new ideas for them to shoot down. 

Meanwhile, in the choir things are getting fairly intense, with three rehearsal weekends over the past four weeks, two for the whole choir and one for just the tenor section – although to be quite honest, during the latter we sang a grand total of one of the songs included in the set of the spring concert. We also have performances coming up on May Day and in the university’s Doctoral Conferment Ceremonies on the 28th of May, so there’s a lot of material to go through over the next month and a half. Immediately after the March rehearsal weekend I tested positive in a COVID home test, so the dreaded bug finally caught up with me, something I’d been expecting for a while actually. It was a mild case, but still unpleasant enough that I wouldn’t fancy finding out what sort of experience it would be without the vaccine.

While on the subject of music, I can’t resist mentioning that I signed up to sing in the chorus in a production of The Magic Flute in January-February next year! That’s a first for me – I’ve been in the audience for plenty of operas, but never on the stage. I’m slightly dreading the amount of time and effort this will require, but in the end I just couldn’t pass up the opportunity. There is still the caveat that if there are more people eager to sing than there are open positions, we may have to audition, but an oversupply of tenors is not a problem that frequently occurs in the choral world. The rehearsal period won’t start until much later in the year, but I’m already a little bit excited at the prospect! 

Words and music

The proceedings of Tethics 2021 are now available for your viewing pleasure at ceur-ws.org. This means that both of the papers I presented during my two-conference streak in October are now (finally!) officially published! Although I’ve mentioned the papers in my blog posts a few times, I don’t think I’ve really talked about what’s in them in any detail. Since they were published at more or less the same time, I thought I’d be efficient/lazy and deal with both of them in a single post. 

At Tethics I presented a paper titled “Teaching AI Ethics to Engineering Students: Reflections on Syllabus Design and Teaching Methods”, written by myself and Anna Rohunen, who teaches the AI ethics course with me. As the title suggests, we reflect in the paper on what we took away from the course, addressing the two big questions of what to teach when teaching AI ethics and how to teach it. In the literature you can find plenty of ideas on both but no consensus, and in a sense we’re not really helping matters since our main contribution is that we’re throwing a few more ideas into the mix. 

Perhaps the most important idea that we put forward in the paper is that the syllabus of a standalone AI ethics course should be balanced on two axes: the philosophy-technology axis and the practice-theory axis. The former means that it’s necessary to strike a balance between topics that furnish the students with ethical analysis and argumentation skills (the philosophy) and those that help them understand how ethics and values are relevant to the capabilities and applications of AI (the technology). The latter means that there should also be a balance between topics that are immediately applicable in the real world (the practice) and those that are harder to apply but more likely to remain relevant even as the world changes (the theory). 

The paper goes on to define four categories of course topics based on the four quadrants of a coordinate system formed by combining the two axes. In the philosophy/theory quadrant we have a category called Timeless Foundations, comprising ethics topics that remain relatively stable over time, such as metaethics and the theories of normative ethics. In the philosophy/practice quadrant, the Practical Guidance category consists of applied ethics topics that AI researchers and practitioners can use, such as computer ethics, data ethics and AI ethics principles. In the technology/practice quadrant, the Here and Now category covers topics related to AI today, such as the history and nature of AI and the ethical issues that the AI community is currently dealing with. Finally, the technology/theory quadrant forms the category Beyond the Horizon, comprising more futuristic AI topics such as artificial general intelligence and superintelligence. 

A way to apply this categorisation in practice is to collect possible course topics in each category, visualise them by drawing a figure with the two orthogonal axes and placing the topics in it, and drawing a bubble to represent the intended scope of the course. A reasonable way to start is a rough circle centered somewhere in the Here and Now quadrant, resulting in a practically oriented syllabus that you can stretch towards the corners of the figure if time allows and you want to include, say, a more comprehensive overview of general ethics. The paper discusses how you can use the overall shape of the bubble and the visualisation of affinities between topics to assess things such as whether the proposed syllabus is appropriately balanced and what additional topics you might consider including. 

On teaching practices the paper offers some observations on what worked well for us and what didn’t. Solidly in the former category is using applications that are controversial and/or close to the students’ everyday lives as case studies; this we found to be a good way to engage the students’ interest and to introduce them to philosophical concepts by showing how they manifest themselves in real-world uses of AI. The discussion on Zoom chat during a lecture dedicated to controversial AI applications was particularly lively, but alas, our other attempts at inspiring debates among the students were not so successful. Online teaching in general we found to be a bit of a double-edged sword: a classroom environment probably would have been better for the student interaction aspect, but on the other hand, with online lectures it was no hassle at all to include presentations, demos and tutorials by guest experts in the course programme. 

The other paper, titled “Ontology-based Framework for Integration of Time Series Data: Application in Predictive Analytics on Data Center Monitoring Metrics”, was written by myself and Jaakko Suutala and presented at KEOD 2021. The work was done in the ArctiqDC research project and came about as a spin-off of sorts, a sidetrack of an effort to develop machine learning models for forecasting and optimisation of data centre resource usage. I wasn’t the one working on the models, but I took care of the data engineering side of things, which wasn’t entirely trivial: the required data was kept in two different time series databases, and for a limited time only, so the ML person needed an API they could use to retrieve data from both databases in batches and store it locally, accumulating a dataset large enough to enable training of sufficiently accurate models.

Initially, I wrote separate APIs for each database, with some shortcut functions for queries that were the most likely to be needed a lot, but after that I started thinking that a more generic solution might be a reasonably interesting research question in itself. What inspired this thought was the observation that while there’s no universal query language like SQL for time series databases, semantically speaking there isn’t much of a difference in how the query APIs of different databases work, so I saw here an opportunity to dust off the old ontology editor and use it to capture the essential semantics. Basically I ended up creating a query language where each query is represented by an individual of an ontology class and the data to be retrieved is specified by setting the properties of this individual. 

To implement the language, I wrote yet another Python API using a rather clever package called Owlready2. What I particularly like about it is that it treats ontology classes as Python classes and allows you to add methods to them, and this is used in the API to implement the logic of translating a semantic, system-independent representation of a query into the appropriate system-specific representation. The user of the API doesn’t need to be aware of the details: they just specify what data they want, and the API then determines which query processor should handle the query. The query processor outputs an object that can be sent to the REST API of the remote database as the payload of an HTTP request, and when the database server returns a response, the query processor again takes over, extracting the query result from the HTTP response and packaging it as an individual of another ontology class. 

Another thing I love besides ontologies is software frameworks with abstract classes that you can write your own implementations of, and sure enough, there’s an element of that here as well, as the API is designed so that it’s possible to add support for another database system without touching any of the existing code, by implementing an interface provided by the API. It’s hardly a universal solution – it’s still pretty closely bound to a specific application domain – but that’s something I can hopefully work on in the future. The ArctiqDC project was wrapped up in November, but the framework feels like it could be something to build on, not just a one-off thing. 
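To give a rough feel for the design, here’s a minimal plain-Python sketch of the abstract-class idea described above – not the actual API, which builds on Owlready2 ontology individuals rather than dicts, and all the names and the toy translation logic here are hypothetical:

```python
from abc import ABC, abstractmethod

class QueryProcessor(ABC):
    """Translates a system-independent query into a system-specific payload."""

    @abstractmethod
    def accepts(self, query: dict) -> bool:
        """Report whether this processor handles the query's target system."""

    @abstractmethod
    def to_payload(self, query: dict) -> dict:
        """Build the request payload for the remote database's REST API."""

class InfluxLikeProcessor(QueryProcessor):
    # Placeholder translation logic for one (hypothetical) backend.
    def accepts(self, query):
        return query.get("system") == "influx-like"

    def to_payload(self, query):
        return {"q": f"SELECT {query['field']} FROM {query['series']}"}

class PrometheusLikeProcessor(QueryProcessor):
    # A second backend with a different query dialect.
    def accepts(self, query):
        return query.get("system") == "prometheus-like"

    def to_payload(self, query):
        return {"query": f"{query['series']}{{field='{query['field']}'}}"}

# The framework keeps a registry of processors; supporting a new database
# system means registering one more implementation, with no changes to
# any existing code.
PROCESSORS = [InfluxLikeProcessor(), PrometheusLikeProcessor()]

def build_payload(query: dict) -> dict:
    """Dispatch a semantic query to whichever processor accepts it."""
    for proc in PROCESSORS:
        if proc.accepts(query):
            return proc.to_payload(query)
    raise ValueError(f"no processor for system {query.get('system')!r}")

payload = build_payload({"system": "influx-like",
                         "series": "cpu", "field": "load"})
print(payload)  # → {'q': 'SELECT load FROM cpu'}
```

In the real framework the same dispatch happens on the semantic level, with the query represented as an ontology individual and the translation methods attached to the ontology classes themselves, but the extension mechanism is essentially this.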

In other news, the choir I’m in is rehearsing Rachmaninoff’s All-Night Vigil together with two other local choirs for a concert in April. It’s an interesting new experience for me, in more than one way – not only was I previously unfamiliar with the piece, I had also never sung in Church Slavonic before! It turns out that the hours and hours I spent learning Russian in my school years are finally paying off, albeit in a fairly small way: the text has quite a few familiar words in it, I can read it more or less fluently without relying on the transliteration, and the pronunciation comes to me pretty naturally even though my ability to form coherent Russian sentences is almost completely gone by now. It’s still a challenge, of course, but also a beautiful piece of music, and I’m already looking forward to performing it in concert – assuming, of course, that we do get to go ahead with the performance. Because of tightened COVID restrictions, we won’t be able to start our regular spring term until February at the earliest, so I’m not taking anything for granted at this point… 

A welcome breather

Another month is coming to an end, and quite a month it has been. Yesterday I finished a streak of two conferences virtually back to back, with only a weekend in between. It’s not an experience I would particularly care to repeat anytime soon – too much stress compressed into such a tight space. At least I was able to attend the second one from the comfort of my home.

Not that I minded travelling to last week’s conference – on the contrary, I thoroughly enjoyed it, apart from the bit where I had to sit on a train for 6+ hours on Wednesday and again on Friday. Tethics 2021 was held at Turku School of Economics with an online participation option; apparently about half of a total of 70 registered participants had signed up as in-person attendees. All of us who were physically there were Finnish, with the exception of one Greek professor working in Sweden. I was slightly disappointed that I didn’t get to meet Charles Ess, who was one of the keynote speakers and whom I’d previously met 15 years ago in Trondheim, but then, I very much doubt that he would have remembered me anyway.

The conference was small: four regular sessions with eleven papers altogether, a special session with a presentation by Don Gotterbarn, and two keynotes, Charles Ess on Thursday and Leena Romppainen, president of Electronic Frontier Finland, on Friday. Leena’s talk was a particular highlight for me, an entertaining journey through Effi’s 20-year history of defending digital rights and the “moments of despair and triumph” along the way, as promised by the subtitle of the presentation. Incidentally, yesterday and today the District Court of Helsinki has been hearing a case where Effi and some of its board members are accused of illegal fundraising, the latest episode in a saga almost as old as Effi itself. The contested issue seems to be whether the association was within its legal rights to publish a bank account number for donations on its website – hardly a heinous crime, but unfortunately a golden opportunity for less civil rights-minded actors to brand the defendants as scammers if they are convicted.

My own presentation went pretty well; I had a slight issue with presentation time, only it wasn’t the one I was expecting beforehand. Like all the other speakers in the regular sessions, I had a 30-minute slot, and when I was preparing my slides I genuinely wondered how I was going to fill it. I consoled myself with the thought that at an ethics conference there’s likely to be some real discussion at the end of the presentation, so perhaps even just 20 minutes of talking will do fine, but it turns out I had no problem at all using up my half hour and there was time for no more than one quick question at the end! Apparently there is a talker in me after all, when the topic’s right.

The social programme was great too, definitely reason enough to attend the conference in person. Besides coffee and lunch breaks, on Wednesday evening there was a welcoming event, basically ten-ish people sitting around a table in a meeting room sipping sparkling wine and chatting about random stuff, with dinner afterwards for those of us who were hungry. On Thursday there was another dinner, with drinks in pubs before and after. As I was sipping my last pint before bed, I listened to the Conference Chair and the aforementioned Greek professor having a passionate discussion on Heidegger – not something that tends to happen at more technical conferences, even after hours!

Indeed, the experience was very different this week when I participated in IC3K 2021. I chaired one session, presented my own paper in another and attended a third as a listener, and I think I heard a grand total of one audience question. There were the semi-obligatory courtesy questions by the session chairs, of course, but those don’t really count. I suppose these online conferences are not the most conducive to interaction, but even so, it’s certainly my experience that at philosophical conferences there’s a lot more actual discussion of the presented papers than at technical ones. Still, I have to hand it to the conference organisers, there was no shortage of available interaction channels: in addition to the conference sessions on Zoom, there was a Slack workspace, a discussion forum for each individual paper on PRIMORIS, plus whatever contact details (email addresses, Twitter handles, Skype names etc.) the delegates themselves had chosen to share.

Now, with the conferences done and the videos for my own Towards Data Mining lecture scripted, recorded and released, I suddenly find myself in a situation where there’s nothing to be particularly stressed about looming in the immediate future. I’m sure there’s something new around the corner, but perhaps I’ll have at least a week or so to savour the feeling. Also, it’s less than two months till Christmas – less than two months of work left in 2021, would you believe it. I’m really looking forward to the holiday season actually, because this year it means choir concerts again! Keep watching cassiopeia.fi for announcements.