The world is a stage

Another November, another Tethics! This year, the conference was hosted by the University of Vaasa and co-chaired by Ville Vakkuri, who has appeared several times on my AI ethics course as a guest lecturer. As usual, there were a bunch of other familiar faces as well, so in terms of social interaction, the conference was a nice mix of catching up with old acquaintances and getting to know new ones. Vaasa itself was a new acquaintance for me and quite a lovely one at that, insofar as any Finnish city in November can be described as “lovely”. On and near the university campus there were some cool old red-brick industrial buildings, reminiscent of the Finlayson Area in Tampere.

The conference program had a couple of new elements this year. On the first morning, there was a workshop with three papers that the participants were invited to help improve, but I decided to skip it, because I had my own talk in the first regular session that afternoon and wanted to do some rehearsing. The other new thing was a poster session, which followed immediately after I’d given my presentation, and I ended up chatting for a good while with a doctoral candidate from the University of Turku who’s researching the ethics of autonomous weapons, a topic I’ve had some involvement with since my talk in the Seminar on the Art of Cyber Warfare a year ago.

I also touched upon the subject in my own paper, which had the somewhat provocative title “Death by AI: A Survey of the Literature and Known Incidents”. I’ll write about it in more detail once it’s been officially published – which may well be next year, as the CEUR-WS process apparently tends to take its time – but in a nutshell, I searched for academic literature associating AI with death, did the same for fatal AI incidents recorded in public databases, analysed both sets of search results to see what themes emerge from them, and put the analyses side by side to see if there were any interesting observations to be made. Not the most rigorous piece of research out there, but it seemed to engage the audience, and it certainly gave me a lot of ideas for future work.

My session was preceded by a keynote speech by Rachael Garrett, which I have to admit went a bit over my head at times, but I did find her experiments with dancers improvising with robots rather cool. The last session of the day had two papers about AI in education, so that of course was right up my alley. On the second day there was some more interesting AI stuff, a particular highlight being a paper on the perpetuation of gender stereotypes by AI image generators; as a bonus, I got to witness the first-ever use (in my career at least, if not the entire history of academia) of the phrase “slay queen” to comment on a conference presentation. In the afternoon there was a town hall meeting, where it was decided that the next Tethics will be organised by LUT University in Lahti (yay!), and then it was time for Kai Kimppa to conclude the programme with his keynote on the past, present and future of IT ethics research in Finland. Kai’s speech made for a very enjoyable end to the conference, and not just because I got name-checked as one of the “new generation” of Finnish IT ethicists!

The week before the conference I got some exciting news: my docentship application received the rector’s seal of approval, so as of the first of December I’m officially a docent of AI ethics and data ethics in the Faculty of Information Technology and Electrical Engineering, University of Oulu. Feels pretty good! This came just in time for me to put the title in my CV for the winter call of the Research Council of Finland, which closed on the 12th. For a variety of reasons, I didn’t have a whole lot of time and energy to spend on my proposal, so I ended up submitting essentially the same one as last year, with some minor revisions to the research plan and a slightly fuller CV. I suppose I can view this as an experiment of sorts – will be interesting to see how the evaluator statements compare to the ones I received this year.

At work, things are now starting to calm down a bit towards the end-of-year holidays, but meanwhile, in the world of performing arts it’s getting busy. Last week we had the first proper rehearsals for the Ovllá opera: not just the chorus but director, soloists, conductor, rehearsal pianist, the works. This week there have been no rehearsals, but next week we’re bringing A Christmas Carol back to the stage, and the week after that the opera rehearsals will resume. I also recently got the notification that I’ve been selected into the choir for Beyond the Sky, so I’m three for three for the big 2026 productions I auditioned for back in May.

Working on the opera is an interesting experience, different from The Magic Flute in a couple of major respects. A rather obvious one is that instead of staging one of the most popular operas ever for the nth time, we are now creating something totally new, to be presented to the world for the very first time right here in Oulu. It’s an exciting thought, but at the same time, I’m very much aware that it’s hardly a safe bet. Will it bring in the crowds, not just the hardcore opera lovers? Not that I’ll have to answer to anyone if it doesn’t, but I do feel like I have my own tiny share of artistic ownership of the production and naturally I’m hoping that it will be a success.

The other big difference is in the level of cultural sensitivity constantly present at the rehearsals. The score brings together two very different musical traditions, and the libretto deals with some rather delicate themes; The Magic Flute has its own issues, sure, but at its heart it’s just a silly fairytale set in a fantasy world. With Ovllá, distancing ourselves from the story and dialogue is not an option, and it’s been clear from the get-go that the portrayal of the Sámi people and Sámi culture must be accurate and respectful. To that end, almost everyone in the design team is Sámi, as are the soloists playing Sámi characters.

That said, the rehearsals have been great fun as well as educational. I was already loving the music, and now that we’re starting to get an idea of what the opera is going to look like on stage, I’m getting properly stoked about it. I know already that there will be days when I’ll come home from work and dearly wish I could spend the evening on the couch instead of going to the theatre, but all things considered, a job in academia where the hours are very flexible is probably one of the easier ones to combine with a hobby like this. Besides, performing to an audience and working in a multicultural environment are surely skills that transfer both ways. Highly recommended!

Teachers still matter, right?

Starting with music news this time, because the new choir term is really off to a flying start: we’ve had seven performances already, with another one coming up tomorrow. A particularly memorable occasion was the grand opening of Nokia’s new “Home of Radio” campus in Oulu, where we had the honour of both opening and concluding the proceedings as well as providing music for the ribbon-cutting ceremony with Christopher Tin’s beautiful and appropriately jubilant Sogno di Volare from the video game Civilization VI. Various luminaries were in attendance, including the President of the Republic of Finland himself, Alexander Stubb. 

It’s also been confirmed now that I will be appearing in the chorus of Ovllá, the new opera composed by Cecilia Damström for Oulu’s year as a European Capital of Culture. One of the performance languages is Northern Sámi, which I have next to no familiarity with despite it being an indigenous language of Finland, but I’ll take it as a challenge and an opportunity to learn. Another challenge is the sheer amount of work I’ve committed myself to: from late November to late April I’m going to have one of the busiest six-month periods of my life, as rehearsals for the opera will run in parallel with rehearsals and performances of A Christmas Carol, and around the time of the last opera performances, rehearsals for MASS will begin. How much Cassiopeia stuff I’ll be able to fit in among all this remains to be seen, but the usual Christmas concerts at least should be perfectly doable.

On the academic front, I spent the first few work weeks after my summer vacation mostly preparing materials for a training module that I’m developing for a new thing the university’s graduate school is launching this year called the PhD Supervisors’ Academy. Among the things offered by the Academy is a set of short online courses on a range of topics that a PhD supervisor should know about, and I was invited to create one on AI. This being me, there will be a notable emphasis on critical thinking and responsible use, but I’m doing my best to avoid coming across as overly negative and highlight the opportunities as well. 

At the end of August I had to take some time to work on a conference manuscript that had been conditionally accepted for publication, meaning I needed to submit a major revision before the final decision. The paper definitely isn’t my best work and I felt there was a genuine possibility it might still be rejected after the revision, but thankfully it wasn’t. The conference in question is Tethics 2025, so my streak of having a paper there continues and I’ll get to do my usual trip to go see some old familiar faces. Should be a more relaxed trip than last year, too, since this time the conference doesn’t clash with any of my artistic engagements, and as a bonus I’ll get to visit a new city.

My Title of Docent application is moving forward as well, with one (positive) reviewer statement in and another one hopefully coming soon. After that, I’ll need to arrange a date for my demonstration lecture, which does frankly feel a little bit pointless – I’m not all that convinced that one twenty-minute lecture can say anything decisive about me as an educator that my CV and teaching portfolio don’t – but then, it’s not like I’m suffering from any dearth of material from which to put together such a lecture. Besides, I’m due to give a guest presentation on the intersection of AI and ethics next week at Oulun Suomalainen Klubi and my plan is to make the demonstration lecture a compressed version of the presentation with some pedagogical interactions thrown in at strategic points, so I’m kind of killing two birds with one stone here. 

Before my vacation, I was interviewed by a journalist working on an article on the use of AI for content moderation on online platforms, more specifically for the detection of hate speech. He contacted me at the suggestion of a colleague of mine and we had a Zoom meeting where I gave him my views on the subject as an AI ethicist. This was my first time appearing as an expert in the media, so quite an exciting experience for me, and I think I managed not to make a complete fool of myself in the process. The article was published in August and is available online, though only in Finnish. 

Another (sort of) new thing in my professional life this academic year is that I’m serving as a teacher tutor for the new batch of students who’ve now begun their studies in the master’s programme in computer science and engineering. It’s only sort of new in the sense that I’ve already been tutoring some students since the beginning of the calendar year, but the new students are the first ones I’m shepherding right from day one. Technically, the most important part of the job is guiding the students in making their personal study plans, but if my experiences from the spring term are any indication, simply listening to the students’ worries and offering encouragement is also a big part of it. 

This got me thinking about how the need for formal tertiary education in subjects such as computing is sometimes questioned on the grounds that there are loads of online resources available that you can use to learn just about any technical skill on your own. It’s even been suggested that AI tutors will make human teachers obsolete by being available 24/7 and adapting perfectly to the student’s learning style and goals. I can’t dismiss such arguments entirely, but I think they’re assuming some kind of “ideal” student who’s crystal clear on what they need to learn and perfectly self-directed in finding and using the required resources. For all those “non-ideal” students, a university provides a structure for your studies and a social environment designed to carry you through them and beyond. 

As it happens, both these aspects – providing structure and presenting a human face – are part of the role of the teacher tutor, and before you ask: yes, I’m fully aware of how convenient this conclusion is for me personally. I don’t suppose anyone likes to think of themselves as easily replaceable, so maybe I’m just trying to rationalise the belief that the university and I matter and will continue to matter even as technology marches on. Or maybe I do have a genuine point here that isn’t just about me refusing to go gently into the good night of AI-induced obsolescence. Take your pick! 

MASSive news

Once again, it’s been several months since my last post, for the usual reason – there’s been way too much other, more urgent business to take care of for me to even think about what I might write about in the blog. Conveniently, though, I can now continue directly from where I ended the previous post, starting with some recent research news: the Research Council of Finland has decided not to award funding to my proposed project. I know, it’s a shocker, right? I sometimes wonder if this is even a serious attempt to secure funding anymore, or just a ritual that you participate in out of respect for tradition, but either way, more likely than not I’ll find myself trying again next winter.

I also mentioned a bunch of choir stuff last time. The concerts with Kipinät went well – I got to sing the very first scat solo of my life! – and the trip to NSSS 2025 in Linköping was even more fun than I expected, culminating in a gala concert with 1300 singers and a dinner party with good food, great company, top-notch entertainment and lots of singing and dancing. Just before the trip I had an audition, and a few days after returning I received a notification that I’d been selected into the choir for Leonard Bernstein’s MASS, which is to have its first-ever live performance in Finland as part of Oulu’s European Capital of Culture celebrations in April 2026. I wasn’t previously familiar with the work, but it seems to be totally unlike anything I’ve had a chance to do in my singing career so far, so I’m pretty stoked. In the same audition, they were also looking for singers for the new opera Ovllá as well as Beyond the Sky – combining a new composition by Lauri Porra with astrophotography by Oulu’s own J-P Metsävainio – but I’ve yet to hear back about those.

Right after the Linköping odyssey I kicked off the 2025 conference circuit with the 47th Association for Interdisciplinary Studies Conference, with the lofty title “Shaping the future in the era of polycrisis”, which was organised here in Oulu from the 4th to the 6th of June. I presented my abstract “Keep your enemies close: Embracing AI tools in AI ethics education” in the session “Assessing Interdisciplinary Learning in the Age of AI”, convened by Beverley McGuire, Erica Noles and Carol McNulty from the University of North Carolina Wilmington. I take a considerable amount of professional pride in the fact that I prepared my slides the previous day, had minimal time to rehearse and still gave a really good talk that was very warmly received. The session as a whole was excellent too, with a long and lively discussion at the end on how we as educators should deal with AI when assessing students. I would have loved to attend some of the other sessions, but I was in a rush to finish the grading of the AI ethics course by the end of the week, and also I had a sore throat, so I thought it best to work from home as much as possible in case I had picked up something contagious while traveling.

As it turned out, trying to finish the grading on time was a lost cause, because the sore throat soon developed into a fever that lasted three days and had me unable to do anything resembling work for most of that time. Not exactly surprising that germs have a field day when you have hundreds of people gathering in enclosed spaces and singing at each other for extended periods of time. I’m not sure what it was – our old friend COVID perhaps – but hardly the common cold anyway. After the fever passed, it took a good while to get my strength back and even longer for the sniffling and coughing to stop, but now it feels like things are finally back to normal. Overall, not an experience I’d care to repeat anytime soon, but then, reportedly there’s also been a stomach bug going around, so I guess I should count my blessings and be glad I haven’t caught that.

The spring term was, as it has been for a few years now, dominated initially by evaluations of the applicants to the international master’s programme, then by the AI ethics course. The IMP evaluations were less of an ordeal this year, a welcome change to me personally but perhaps not unequivocally so to the faculty, since this was largely because of a big drop in the number of applications received. An application fee was (re-)introduced, which presumably contributed to the drop, but the real kicker is coming next, as universities will be required by law to collect tuition fees that fully cover the costs of providing education for those students who are not eligible to study for free. In practice this means that offering scholarships by default is no longer an option and the price of studying here will go up for students coming from outside the EEA. The vast majority of the applicants have been non-EEA, so I expect there’ll be a significant change in how many applications we get next year and where from; I guess we’ll wait and see.

The AI ethics course was, for me, the most ambitious effort to date: not only did I deliver all the main lectures myself – this was a first – but I also had no teaching assistant this year to help with the learning assignments, so I assessed all of those myself as well. This turned out to be a strain, but still doable, and I was actually quite proud of my planning and execution of the final week’s teaching, which had previously been delivered by a guest. The number of students again increased from last year, and I was even looking at the possibility of forty or more students completing the course this time. The final number won’t be quite that high, but it’s still going to be close to forty and a new record.

Somewhere amid all this I’ve managed to find the time to write a couple of manuscripts and submit them for review, supervise a couple of M.Sc. theses to completion and review a couple of others, contribute to a few funding proposals and participate in the work of the university’s working group preparing guidelines for the use of generative AI in research. I also finally prepared and submitted my Title of Docent application; this was frankly way overdue, but I’d been struggling to find the motivation to get it done, and in the end it took a push from my line manager and an external incentive to give me the boost I needed. Now it’s just a matter of waiting until the referee statements come in.

There are a few more loose ends left to tie up before I start my vacation at the end of this week, the main one being a couple of conference papers waiting for peer review. Beyond those, anything else I get done during the rest of the week is a bonus, basically getting a head start on tasks that will be demanding my attention in August. It’s already clear enough that the autumn term is going to be a busy one right from the start, but before that, I should be able to take my usual four weeks off without unduly worrying about what’s ahead. Here’s to surviving yet another academic year!

What is an AI vulnerability anyway?

The proceedings of the Tethics 2024 conference have now been published in the CEUR-WS series, and with that, the paper “What Is an AI Vulnerability, and Why Should We Care? Unpacking the Relationship Between AI Security and AI Ethics” by myself and Kimmo Halunen. There was a bit of an emergency regarding the publication during the Christmas break, as CEUR-WS now requires all published papers to include a declaration of whether and how generative AI tools were used in the preparation of the manuscript, and one of the editors was in quite a hurry to collect this information from the authors. Luckily, I happened to see the editor’s email just a few hours after it came, and since we hadn’t used any AI tools to write the paper, the declaration was easy enough to complete. 

As the title implies, the paper examines the concepts of AI vulnerability and security, looking at how they are understood in the context of AI ethics. As it turns out, they are rather vaguely defined, with no clear consensus within the AI ethics community on what counts as an AI vulnerability and what it means for an AI system to be secure against malicious activity. Collections of AI ethics principles generally recognise the importance of security, but do not agree on whether it should be considered a principle in its own right or rather a component of a more generic principle such as non-maleficence. 

One thing that is quite clear is that the way security is viewed by the AI ethics community differs considerably from the view of the traditional cybersecurity community. For one thing, in the latter there is much less ambiguity on the definition of concepts such as vulnerability, but more fundamentally, the two communities have somewhat different ideas of what the role of security is in the first place. One could say that traditionally, security is about protecting the assets of the deployer of a given system, whereas for ethicists, it’s about protecting the rights of individuals affected by the system; an oversimplification, but one that sheds some light on why the concept of AI vulnerability seems so elusive. 

One consequence of this elusive nature is that it’s difficult to accurately gauge the actual real-world impact of AI vulnerabilities as opposed to hypothetical worst-case scenarios. Much of the paper deals with this issue, discussing the results of a study where I looked for reports of AI vulnerabilities that satisfy four inclusion criteria: there must be a documented incident, it must involve deliberate exploitation of a weakness in an AI system, it must have resulted in demonstrable real-world harm, and the exploited vulnerability must be specifically in an AI component of the system. When I searched six different public databases for such reports, I found a grand total of about 40 entries that could be considered at least partially relevant and only six that were fully relevant. 
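
For the curious, here is a rough sketch in Python of how those four inclusion criteria could be applied to entries pulled from the databases. To be clear, this is purely illustrative: the data structure, the field names and the rule for what counts as “partially relevant” are my own assumptions for the sake of the example, not the actual tooling or definitions used in the study.

from dataclasses import dataclass

@dataclass
class IncidentReport:
    documented: bool          # a documented incident exists
    deliberate_exploit: bool  # a weakness in an AI system was deliberately exploited
    real_world_harm: bool     # the exploitation resulted in demonstrable real-world harm
    ai_component: bool        # the exploited weakness lies specifically in an AI component

def relevance(report: IncidentReport) -> str:
    # Classify a database entry against the four inclusion criteria.
    criteria = [report.documented, report.deliberate_exploit,
                report.real_world_harm, report.ai_component]
    if all(criteria):
        return "fully relevant"
    if any(criteria):
        return "partially relevant"  # ad hoc rule for this example only
    return "not relevant"

# Example: a documented, deliberate exploit of an AI component with no demonstrated harm.
entry = IncidentReport(documented=True, deliberate_exploit=True,
                       real_world_harm=False, ai_component=True)
print(relevance(entry))  # -> partially relevant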

This is hardly likely to be the whole picture, and the paper discusses a number of factors that may, to varying degrees, account for the poor yield. On the other hand, incomplete and biased as the results probably are, they may at least be taken to give a rough but realistic idea of the magnitude of the problem. Silver lining? Perhaps, but it’s only a matter of time before the problem grows from a curiosity into something more serious, and it doesn’t exactly help if we don’t have a decent database for collecting information about AI vulnerabilities, or even a clear enough definition of the concept to enable the development of such a database.

To be fair, the relationship between security and ethics is not as straightforward as it might seem, at least not when it comes to AI. Security is an important ethics requirement for sure, but it may also be at odds with AI ethics principles such as explainability. Another possible complication is conflicting stakeholder interests; an interesting example of this is the case of Nightshade, a method that artists can use to counter the unauthorised use of their works for the training of text-to-image generative AI models. Technically, this is a data poisoning attack exploiting a vulnerability in the training algorithm, but it’s hard to argue that the artist is doing anything legally or morally wrong here. This serves nicely as a demonstration of why we can’t talk about the security of AI systems without considering the sociotechnical context in which those systems exist in the real world. 

In the category of things that gave me stress during the holidays, submitting the generative AI declaration for the paper was a trivial annoyance in comparison with the winter call of the Research Council of Finland, whose submission deadline was set for the 8th of January. My application was already looking pretty good when I signed off for Christmas, and for the parts I hadn’t yet completed I was able to reuse quite a lot of material from my previous application, but even so, I was so anxious about the deadline that I went back to work for a few hours as early as New Year’s Day. In the end, I made the submission with a good 24 hours to spare, but I have a feeling that the Council will be getting a substantial amount of feedback on the call timetable this year.

On the performing arts front, I did two shows of A Christmas Carol last week, with two more to go in February. Several people who have seen me perform have remarked on how much I seem to be enjoying myself on stage – I really am, and I’m glad it shows! Meanwhile, Cassiopeia is busy rehearsing for a series of three concerts with the Kipinät choir from Jyväskylä in mid-March, and later in the spring we’ll be traveling to Linköping, Sweden for the Nordic Student Singers’ Summit. 2026 is also looking potentially very interesting already: Oulu will be one of the European Capitals of Culture, and one of the highlights of the year will be a brand-new opera composed and produced for the occasion. So far, there’s very little information available on who will be performing, but if there’s a call for chorus singers, I’ll definitely be putting my hand up. 

Bah, humbug!

November is done, and with that, the last of my speaking engagements for this year. The Tethics conference was once again highly enjoyable, although I have to say I would have preferred not to take the night train to get there; even the absolute best-case scenario was getting six hours of sleep that night, and the reality was probably closer to half of that amount. I could have taken a morning train instead and skipped the beginning of the conference, but there were several AI-related papers scheduled to be presented in the morning sessions and I didn’t want to miss those, so I decided to just bite the bullet and suffer a night of inadequate sleep to catch them.

As it turned out, one of those morning presentations got cancelled, and in its place, the organisers had decided to have an impromptu roundtable on the immediate and not-so-immediate future of the conference. Regarding the former, it was decided that next year’s conference will be hosted by the University of Vaasa – a city I’ve never visited as far as I can recall, so it should be a nice change of scenery. The more general conclusions were largely the same as those of a similar discussion last year: the conference growing bigger and more international is a good thing, as long as it remains true to its original ideals. There was also a consensus that different universities taking turns organising the conference is a good idea, and that a steering committee of Tethics veterans should be formed to provide guidance and support.

After the lunch break it was time for John Danaher’s keynote titled “Do technologies disrupt moral paradigms?”, in which he looked at societal transformations induced / catalysed by technological breakthroughs such as the invention of the cannon. I found the talk highly enjoyable, although the effects of sleep deprivation were starting to get to me, so I wasn’t able to concentrate as fully as I would have liked. My own talk was in the session immediately after the keynote and went smoothly, with the lively follow-up discussion that I’ve come to expect from ethics conferences. I’ll post a summary of the paper later, once the proceedings have been published, but in a nutshell, it looks at how the concept of security is viewed by the AI ethics community (as opposed to the traditional cybersecurity community) and carries out a survey of AI incidents to get an idea of the real-world impact of security vulnerabilities in AI systems.

On the second day of the conference, I decided to sleep in and skip the first session; one badly slept night I can take, but not two in a row if I can help it, and after the conference dinner followed by drinks in a pub the night was pretty much ruined to begin with, even though I didn’t stay out very late and kept my alcohol consumption very moderate. Therefore I took my time to get up and have breakfast at the hotel before hauling myself to the conference in time for Anna Metsäranta’s keynote on “Sustainable AI – from principles to practice”. It was good to have someone from industry to shed light on how things are being done out there in the real world, so this was another highlight for me.

In the last session before the closing of the conference, I participated in the running of a workshop with the lofty title “The current state and future of technology ethics education in Finland”. To be quite honest, most of the work was done by Ville Vakkuri and Kai-Kristian Kemell and my own contribution was rather modest, but nevertheless, it was interesting to have this opportunity to share thoughts on this topic and to get ideas for enhancing the computer science and engineering curriculum in Oulu from the perspective of ethics. The question of timing is a particularly interesting one: when should ethics education be offered? At the very beginning of their studies, the students are perhaps not yet ready to absorb that kind of knowledge, but if we wait until after they’ve finished their bachelor’s studies, it may be too late already. Not everyone needs to be an ethics expert, of course, but I do believe that everyone should be exposed to enough ethics content during their studies to normalise the idea that awareness of ethics is part of what makes a good engineer.

Fast-forward about three weeks and I’m in Helsinki, on the island of Santahamina, in the auditorium building of the Finnish National Defence University for the annual seminar on the art of cyber warfare. Instead of an auditorium, the seminar took place inside a small studio set up with a green screen and a webcasting rig; initially, it felt somewhat silly to have travelled all the way there just to stream my presentation, but in the interest of making sure everything runs smoothly, it made perfect sense. Besides, it made the whole thing look a great deal more professional than having each speaker join from their home / office / wherever. My colleague Kimmo Halunen served as moderator, introducing the speakers and relaying audience questions submitted via chat.

The theme of this year’s seminar was AI on the battlefield, and I had been invited to speak on this theme with my AI ethicist hat on. Since I spend a fair amount of time discussing the ethics of autonomous weapons in one of the lectures of my AI ethics course, I decided to build on that and it worked out quite nicely. Somebody told me that there were close to 500 people online for the stream during my talk, and the feedback I’ve heard seems to indicate that it was well received. I’ve already been invited to contribute in some capacity to a couple of dissertations on autonomous weapons, which I’m taking as a sign that I made a positive impression and managed to get some actual successful networking done. The entire seminar (in Finnish) is available to view on YouTube, with my talk starting about 44 minutes in.

Now that I’m apparently finished with the speaking circuit for 2024, it’s a good time to reflect a bit. Based on my experience, I would say that I’m actually quite adaptable and versatile, capable of dealing effectively with a variety of audiences, but where I’m at my best – and what I also enjoy the most – is academic seminars. It’s like taking the best of both worlds from lectures and conference presentations: instead of being limited to the scope of a single paper, I get to draw broadly on my expertise and interests to prepare my talk, but I still get to speak primarily as a researcher rather than a teacher, so I can be more relaxed when it comes to the pedagogical aspect. I feel like I can really express myself within those parameters, and it’s always a delight to discover new avenues for that.

Speaking of self-expression, A Christmas Carol has been running for about a month now and is off to a very strong start: the reviews I’ve seen have been highly positive, and all 2024 performances have been sold out for a good while now. 2025 is very much a different matter, and I suppose it’s not surprising that people are much keener to see the play before Christmas than after, but hopefully they won’t lose interest altogether if they didn’t manage to get tickets for before. It’s been great so far, but I suspect that we’re all going to be sick of carols by February, and it certainly won’t help if we’re singing them to an empty house. The demands of the play have been such that I’ve had to prioritise theatre over choir rather heavily, but I’ve managed to squeeze in just enough rehearsal time with Cassiopeia to sing in our Christmas concerts without embarrassing myself, so art-wise, it’s been quite a productive end of the year!

Christmas itself is just a couple of weeks away, so this is in all likelihood my last post of the year. As I’m writing this, I don’t yet have an employment contract for the coming year, but that’s hardly anything out of the ordinary and I expect it will be sorted out soon. If it’s not – well then, get in touch if you need someone to play some music or to give a talk on AI and I’ll get back to you with a quote, I guess?

Deck the halls

It’s been a weird couple of months since I came back from my summer vacation. I haven’t kept track of how my working time has been split among the tools I’ve used, but if I had, I’m pretty sure that number one on the list would be PowerPoint. So many lectures and presentations! I guess it’s good that I get to work on my communication skills, and I even quite enjoy it when I get to give a well-prepared presentation on a topic I’m genuinely interested in and have something original to say about, but still, enough is enough. I’m hoping this is just a temporary state of affairs, but if not, I may need to work on my saying-no-to-speaker-invitations skills.

Indeed, 2024 is already a record year for me in terms of the number of various speaking engagements I’ve had. There are two major reasons for this, the first one being the Reboot Skills project, in which I designed and implemented a course titled Data Governance and Privacy. In addition to the course sessions – three main ones in Finnish, plus an additional one in English in collaboration with the University of Limerick – I’ve attended at least three industry events where I spoke on the subject and pitched the course before it began. Despite these efforts, the course attracted a disappointingly small number of participants, but even so, I’m quite happy to lay it to rest for now and focus on other things.

The other reason is my work on AI ethics, which has gotten me invited to a bunch of seminars recently. This semester I’ve already participated in two: in August, there was a university pedagogy seminar where I again presented the results of my pilot study on integrating AI tools into AI ethics teaching, and a week ago I spoke on responsible AI in research in a seminar organised by the university’s Ethics Working Group. Coming up next is the Tethics conference, where I will both present a paper and co-host a workshop on technology ethics education, and at the end of November comes a seminar at the Finnish National Defence University in Helsinki, where I’m slated to give my perspective as an AI ethicist on the topic of AI on the battlefield. Nothing yet scheduled for December, but there’s still time…

Tethics, for me, is going to be a somewhat more hurried affair this year. I will be there for the whole duration of the conference, but instead of traveling the day before as I normally would, I’m going to take a night train that arrives in Tampere on the morning of the first day. The reason for this is that I have commitments in Oulu that prevent me from leaving much earlier than midnight on the night between the 5th and 6th of November. More specifically, the first of three dress rehearsals for a stage adaptation of Dickens’s A Christmas Carol at Oulu City Theatre takes place on the evening of the 5th, and I won’t be in town for the other two, so I can’t afford to skip it.

That’s right, I’m going to be back on stage, less than a year after the end of The Magic Flute! The director is the same, and when I heard she was looking for singers for this production, it didn’t take me too long to decide that I want in. The only reason why I needed any time at all to think about it was that the rehearsals clash with those of Cassiopeia, so I’ve been mostly absent from the choir since the beginning of September. However, it’s not that often that an opportunity like this turns up, and the only big choir thing remaining this year is the traditional Christmas concert, so I figured now’s not the worst time to take a little break.

Compared to the opera, working on the play is notably different in a few respects. For one thing, instead of a whole chorus of forty singers there’s only a quartet, and we also have significantly more time on stage, so I have a bigger role now, even though I’m not playing an actual named character. I even have a couple of spoken lines! I’m also officially employed by the theatre this time – the pay is hardly worth mentioning, but just the fact that I’m getting money for something I’m basically doing as a hobby is pretty cool.

Artistically speaking, the biggest difference is that we’re not on stage as singers, but rather as actors playing singers. This may seem like semantic quibbling but is actually a significant distinction, as everything we do on stage must be in service of the story. To some extent this was the case with The Magic Flute as well, but it would surely have been too sacrilegious to touch Mozart’s music, no matter what the director’s vision was calling for. Here, on the other hand, it’s often the case that we don’t get to sing a song all the way through because the rhythm of the scene doesn’t allow it, and on a couple of occasions we get interrupted mid-verse by stage events. Apart from that, everything feels quite natural and I’m really happy and excited to be doing theatre again.

Another thing I’m very happy about is that with the Data Governance and Privacy course finished, I have some time to work on things that aren’t my next PowerPoint slideshow for a change. Like writing papers! There’s one I’ve been itching to get started on for a good while now, and it looks like the time has finally come. I’m also supposed to be working on a couple of projects besides Reboot Skills, and “no updates from me” is a phrase I’ve had to use a bit too frequently in meetings of late. Who knows – maybe there’ll be more papers to write once I’ve reminded myself what it is that I’m meant to be doing in those projects…

Mission accomplished

The mission being my university pedagogy studies. Yep, I’m now officially done – the final grade for the final part, the teaching practice, was awarded today. I know it’s just the basic studies, but it almost feels like I’ve completed a whole degree. In the concluding seminar four weeks ago, the first in-class assignment was to choose one from a set of cards with pictures of works of art on them and tell everyone else why that one; I went straight for The Garden of Death by Hugo Simberg because frankly, I was feeling pretty dead from basically being in high gear all spring, but there was also some more positive symbolism of planting and growth there. In any case, I’m not going to even consider the possibility of intermediate studies until I’ve taken a gap year.

The ethics course is more or less a wrap, although there are still a few students with some assignments missing. It’s another record year for the course, with 50 registrations and almost 30 completions, around ten more than last year. Partly because of the record numbers, I wasn’t able to keep to the formative assessment schedule I was aiming for, where each learning assignment would have been assessed before the next one is due. There were other issues with the assignments as well – the new format I tried this year was a step forward, but it’s clear that there’s still plenty of room for improvement in terms of reducing the potential gains from using generative AI as a substitute for thinking and learning.

Overall, however, I would say that the teaching practice was a success. The experiments I carried out produced useful data and experience on how to integrate AI tools in various ways into the teaching of AI ethics, and my debating chatbot experiment in particular yielded some very interesting research material. There’s a blog post coming out at some point where I discuss the teaching practice in more detail, and later hopefully also a peer-reviewed publication or two, once I’ve had the time to properly analyse the data and write up the results.

The spring in general has been a mixed bag, with some efforts successful, some not so much. I applied for two big things – a university lecturer position and a Research Council of Finland grant – neither of which I got. On the other hand, I’ve had a series of speaking engagements at various events that all went perfectly well as far as I can tell. I particularly enjoyed the most recent one, an online seminar titled Ethics of AI Hype, where I did my best to put the current generative AI boom into perspective. Truth be told, I’ll jump at any chance to talk gratuitously about the history of computing, but I do also believe that it doesn’t hurt to be reminded of the decades of AI research that took place before anyone had ever heard of such a thing as a large language model.

One event that I can describe with total confidence as a resounding success was the 45th anniversary concert of Cassiopeia. What a privilege it is to be in a choir that’s so skilled and versatile, and such a wonderful community to boot! In a single concert you may hear anything from pop hits to a Cree musical prayer to Mother Earth and from video game themes to a ten-minute-long modern composition commemorating the victims of the MS Estonia disaster. The cherry on top was that the anniversary celebrations coincided almost to the day with my own 45th birthday, so alongside the choir’s milestone, I got to celebrate a personal one in style.

The latest bit of good news (apart from the official conclusion of the pedagogy studies) came just a few days ago: a paper I submitted to this year’s Tethics conference got accepted! Should be a great experience once again; although the location has changed from Turku to Tampere, many of the same people are still involved in one way or another, so I’m looking forward to seeing plenty of familiar faces and catching up with their owners. Also accepted was a proposal for a workshop on tech ethics education, with Ville Vakkuri, Kai-Kristian Kemell, Tero Vartiainen and myself running the show, so I’ll be doing double duty this year, which I don’t mind at all. The reviewers’ suggestions for improving the paper were nothing major and the original camera-ready deadline of June 30 has been pushed back to August 11, so I think I’ll just let it be until after my vacation. The beginning of which, by the way, is barely more than a week away now!

Since when am I sought after?

Since I returned to Oulu from Dublin in 2020, I’ve been more or less systematically shifting my professional focus toward AI ethics and trying to establish a foothold in that community. Two months into 2024, it’s starting to look like those efforts are paying off in a measurable way. The following is a list of ethics-related things I’ve been invited to do since the year began:

  • Contribute to a workshop proposal for a technology ethics conference 
  • Join the programme committee of another conference with a tech ethics track 
  • Serve on the ethics board of a Horizon Europe project 
  • Work as a researcher in another EU project with an ethics aspect 
  • Give a talk on ethics and participate in a panel discussion at an AI-themed business event 

On top of all that, a journal manuscript to which I contributed by writing an ethics section was finally accepted for publication, with very minor revisions. Starting the new year with a splash!

As for what I’ve been doing at work during these past two months, three things very much dominate. First, I finished and submitted my project proposal to the Research Council of Finland, which (as per usual) is unlikely to be granted funding but did at least earn me a glass of sparkling wine and a slice of cake, courtesy of the university. Then there’s my university pedagogy studies, with the preparation of a literature review for the seminar on research-based teacherhood and a plan for my teaching practice taking a fair amount of time. The planning of the OpinTori event, where the results of the teaching practice will be presented, was also recently kicked off.

The third thing was the selection of new students for the international master’s degree programme in computer science and engineering, to which I contributed as an evaluator now for the second time. The number of applicants doubled from last year, and although the evaluation process had been streamlined, it was again, to put it nicely, something of a cathartic experience – presumably even more so for the people in charge of the whole circus. I have to admit, though, that after combing through the slew of application documents assigned to me for evaluation, there was something genuinely rewarding about interviewing the most promising candidates and encountering many who were a real delight to talk to – young, bright, confident, enthusiastic. We’ve also been promised a debriefing party, but sadly, I don’t expect that there will be anything stronger than coffee served at this one.

The next big effort is putting that plan for teaching practice into action, as the AI ethics course kicks off again on Monday the 11th. The plan revolves around the theme of AI ethics education meeting real-world AI applications: I’m going to explore various ways of using generative and conversational AI tools to support the delivery of teaching on the course, while at the same time modifying the learning assignments with the aim of making it more difficult for students to use AI tools in a counterproductive manner. Happily, the university is currently piloting the use of both Copilot for M365 and Azure AI, and I have a bunch of ideas for how they could be of service here. If all goes well, I think there’s even an opportunity to get a scientific publication out of this.

In choir news, The Magic Flute is now well and truly over after a total of 26 performances (plus dress rehearsals), every single one of them sold out. During the week leading up to the final performance I was feeling pretty tired, and I thought it would be primarily a relief to finally say goodbye to the production, but when the curtain was closed on us for the last time, I felt curiously sad after all. The emotion was even more intense the following day, when I went back to the theatre to pick up something I’d left in the dressing room. Since this is so far the only opera, or indeed theatrical production of any kind, I’ve been involved in, I don’t have anything I could meaningfully compare it to, but I got a strong feeling, still lingering, that this was something extra special. You can have too much of a good thing, though, and in retrospect, stretching it out much further would not have been a good idea. Which is not to say that I’m now done with treading the boards, if it’s up to me; apparently the next opera production here will be in 2026, Oulu’s European Capital of Culture year, and if they need tenors for the chorus – well, you just try and stop me.

Text, drugs and rock ‘n’ roll: Tethics 2023 and beyond

Well, that’s it for Tethics 2023! I find myself struggling to accept that this was only the second “proper” one I’ve attended: my first one, in 2020, was an all-online event (for obvious reasons), and in 2022 there was no Tethics because Turku was hosting Ethicomp instead. Despite all that, I want to say that I’ve been going to the conference for years, because it just feels right somehow. I suppose you could take it as a testament to the cosy and welcoming atmosphere of the conference that I feel so at home there.

Certainly there’s something to be said for a conference where you can realistically exchange at least a few words with every fellow delegate over the course of a couple of days. (Not that I ever actually do, mingling not being my strongest suit, but in principle I could have.) I’m pretty sure I’ve commented before on the cultural differences I’ve observed between technical and philosophical conferences, but it’s worth reiterating how much more rewarding it is to attend a conference when there’s a genuine and lively discussion about every presentation. Out of all the conferences I’ve ever been to, Tethics is actually a strong candidate for being closest to ideal in that besides having that culture of debate, it’s small enough that you can fit everyone in a regular-sized classroom, and there are people there representing different disciplines and sectors so you get a nice range of diverse viewpoints in the discussion.

The keynote address of the conference was delivered by Olivia Gambelin, founder and CEO of an AI ethics consulting company called Ethical Intelligence. I very much enjoyed her talk, which dealt with the differences between risk-oriented and innovation-oriented approaches to AI ethics and how it’s not about choosing one or the other but about finding the right balance between the two. I particularly liked her characterisation of the traits of ethical AI systems – fairness, transparency etc. – as AI virtues, and the idea that good AI (or indeed any good technology) should, above all, boost human virtues as opposed to capitalising on our vices. My inner cynic can’t help but wonder if there’s enough money in that for virtuous AI to become mainstream, but I’m not ready to give up on humanity just yet.

Among the regular presentations, there were also several that were somehow related to AI ethics, which I of course appreciated, since I’m always on the lookout for new ideas and perspectives in that area. However, the two that most caught my attention were actually both in the category of “now for something completely different”. On the first day, Ville Malinen spoke on the sustainability and public image of sim racing, which occupies its own little niche in the world of sports, related to but distinct from both real-world motor racing and other esports. On the second day, in the last session I was able to attend before I had to go catch my train home, J. Tuomas Harviainen presented a fascinating – as well as rather surprising – case where he and his colleagues had received a dataset of some three million posts from a dark web drug marketplace and faced the problem of how to anonymise it so that it could be safely archived in a research data repository.

Another highlight was my own paper – and I can say this with at least some degree of objectivity, since my own involvement in both the writing and the presentation was relatively small. Taylor Richmond, who was my master’s student and also worked as my assistant for a while, wrote the manuscript at my suggestion, based on the research she did for her M.Sc. thesis. She then got and accepted a job offer from industry, and I figured that it would be up to me to present the paper at the conference, but to my delight and surprise, she insisted on going there to present it herself, even at her own expense. I offered some advice on how to prepare the presentation and some feedback on her slides, but all of the real work was done by her, leaving me free to enjoy the most low-stress conference I’ve ever attended.

The paper itself explores content feed swapping as a potential way of mitigating the harmful effects of filter bubbles on social media platforms. Taylor proposed a concept where a user can click a button to temporarily switch to seeing the feed of the user with the least similar preferences to theirs, exposing them to a radically different view of the world. To test the concept, she carried out an experiment where ten volunteers spent some time browsing a simulated social media platform and answered a survey. The results showed that the feed swap increased the users’ awareness of bias without having a negative impact on their engagement, the latter being a rather crucial consideration if real-world social media companies are to even consider adding such a functionality to their applications. Despite some obvious limitations, it was a seriously impressive effort, as noted by several conference delegates besides me: she designed the experiment, created the social media simulation and analysed the data all by herself, and she did a fine job with the presentation as well. My own contribution, apart from my supervisory role, was basically that I wrote some framing text to help sell the subject matter of the paper to the tech ethics crowd.
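
Just to make the core mechanic concrete, here is a tiny Python sketch of the idea of picking the least similar user. This is my own toy reconstruction, assuming preference vectors and a cosine similarity measure; it is not necessarily how Taylor implemented the matching in her simulation.

import math

def cosine_similarity(a, b):
    # Similarity between two preference vectors; returns 0.0 if either vector is all zeros.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def least_similar_user(me, preferences):
    # Pick the user whose preferences are least similar to mine.
    others = {uid: prefs for uid, prefs in preferences.items() if uid != me}
    return min(others, key=lambda uid: cosine_similarity(preferences[me], others[uid]))

# Toy example: pressing the swap button would then show carol's feed to alice.
prefs = {"alice": [0.9, 0.1, 0.0], "bob": [0.1, 0.8, 0.1], "carol": [0.0, 0.1, 0.9]}
print(least_similar_user("alice", prefs))  # -> carol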

Also on the agenda this year was a special session on the future of the Tethics conference. The Future Ethics research group at the Turku School of Economics, which has organised every event so far, is apparently not in a position to commit to doing it again next year, so there was a discussion on finding an alternative host, with Tampere University emerging as the most likely candidate. As much as I’ve enjoyed all of my visits to Turku, I’d certainly appreciate the two hours that this would slice off my one-way travel time! There was also some talk about possibly going more international – attracting more participants from outside the Nordic countries, perhaps hosting the conference outside Finland at some point in the future – but there was a general consensus that in any case the event should remain relatively small and affordable to retain its essence. Personally, I quite like the idea that Oulu could be the host some year, although I don’t know how many others there are here who’d be on board with that.

In the meantime, my top two professional priorities right now are getting more focused on research (with a whole bunch of distractions now happily out of the way) and finishing my university pedagogy studies. It might seem like these are more or less diametrically opposed to one another, but thankfully that’s not the case: I can see potential in both of the remaining courses – teaching practice and research-based teacherhood – for advancing my research interests as well as my pedagogical knowledge. I have a couple of journal manuscripts in the works, one recently submitted and the other undergoing revisions, and I’m involved in a cybersecurity-themed research project where I’ve been looking into AI vulnerabilities from an AI ethics perspective. I’m sure the next distraction is waiting to pounce on me just around the corner, but until it does, I’m going to indulge myself and pretend that I have no work duties other than thinking deep thoughts and making sense of the world.

As usual, there are things happening on the music front as well. The choir currently has its sights set firmly on two big Christmastime projects, but there’s been time for a variety of smaller performances too; a particularly memorable occasion was singing Sogno di Volare, the theme song of the video game Civilization VI, as the recessional music at the wedding ceremony of two choir members. Next year we’ll have the choir’s own 45th anniversary celebrations – and, of course, the new run of The Magic Flute! The first music rehearsal for the latter is scheduled to take place just a couple of weeks from now. Will be interesting to see how much of the music we can still remember, although the real challenge will come in December when we start relearning the choreographies… 

Pictures and sounds

A new cinema club kicked off at the university yesterday with a screening of the 2014 film Ex Machina, written and directed by Alex Garland. Domhnall Gleeson stars as Caleb, a programmer working for a company called Blue Book – basically a stand-in for Google – who wins a competition and gets invited to spend a week with Nathan (Oscar Isaac), the company CEO, at his place in the mountains. Soon after Caleb’s arrival, it turns out that the real reason for him being there is Ava (Alicia Vikander), a revolutionary humanoid robot Nathan’s been developing in secrecy. Nathan wants Caleb to subject Ava to the ultimate version of the Turing Test: interact with her to determine if she’s truly intelligent, sentient and self-aware on a human level.

I was initially a little bit annoyed at how the film exaggerates the significance of the Turing Test, as if there is some kind of fundamental qualitative distinction between an entity that beats the test and one that doesn’t, but that soon stopped bothering me after the film moved on to more interesting things. The usual annoyances related to the representation of technology in mainstream cinema are also there – empty technobabble, Hollywood hacking – but these are kept to a minimum and equally easy to forgive. At one point Nathan stops Caleb when the latter is trying to ask technical questions about Ava’s AI, which I felt was the author speaking to the audience as much as Nathan to Caleb: never mind how it’s supposed to work, we’re here to talk philosophy.

Such petty complaints were certainly not enough to prevent me from thoroughly enjoying the movie, and I must say I’m rather surprised I hadn’t seen it before or even been aware it existed. The discussion afterward was highly stimulating as well; because of my interest in AI ethics, I’d been invited to join it in the capacity of moderator, but this was more of a nominal role and what I really did was give my views on a couple of questions from the organisers to get the conversation started. All the big philosophical issues related to AI came up – the nature of consciousness, rights of artificial entities, AI alignment, the singularity, AI as an existential threat. Time well spent! The club nights are always on a Thursday, which I ordinarily keep reserved for band rehearsal, but I like the concept and there are interesting films coming up (including one I haven’t seen before), so I’m tempted to go again.

Meanwhile in the world of non-fictional AI, I’ve managed to keep myself appropriately busy for the past few weeks that I’ve been back at work, largely thanks to my AI ethics course and various things derived from it: analysing the course feedback from last spring, giving some lectures for a summer school in learning analytics, finishing two online courses due to be launched soon. The feedback was particularly nice this time – every student who answered the survey gave the course the highest possible overall grade, and in general there was a clear shift towards more favourable answers from last year. Granted, there were only six responses, but that’s still a third of all the students who completed the course this year, and both the completion rate and the absolute number of students who completed were the highest so far. Combined with my personal experience, it all makes me quite confident that I’m headed in the right direction with the course.

The choir is also back from its summer break, with a new musical director. We had our first rehearsal of the new term last week, and there are several small performances coming up already in the next couple of weeks, although I’m going to miss most of them because I’ll be away on a trip. During the summer some of us (myself included) participated in the creation of Owla, a new installation by sound artist Jaakko Autio, and a most interesting and rewarding experience it was. First we rehearsed and recorded a piece of music composed specifically for this occasion by a friend of the artist, and during the process Jaakko captured not just the music but also all the chatter in between takes. Once we were happy with the song, we sat down, still mic’d up, and Jaakko asked us some interview questions, had us introduce each other and finally just breathe for a few minutes. All of this audio became raw material for the installation, which opened at the Oulu Museum of Art on Wednesday, so go check it out if you’re in town!