November is done, and with that, the last of my speaking engagements for this year. The Tethics conference was once again highly enjoyable, although I have to say I would have preferred not to take the night train to get there; even the absolute best-case scenario was getting six hours of sleep that night, and the reality was probably closer to half of that amount. I could have taken a morning train instead and skipped the beginning of the conference, but there were several AI-related papers scheduled to be presented in the morning sessions and I didn’t want to miss those, so I decided to just bite the bullet and suffer a night of inadequate sleep to catch them.
As it turned out, one of those morning presentations got cancelled, and in its place, the organisers had decided to have an impromptu roundtable on the immediate and not-so-immediate future of the conference. Regarding the former, it was decided that next year’s conference will be hosted by the University of Vaasa – a city I’ve never visited as far as I can recall, so it should be a nice change of scenery. The more general conclusions were largely the same as those of a similar discussion last year: the conference growing bigger and more international is a good thing, as long as it remains true to its original ideals. There was also a consensus that different universities taking turns organising the conference is a good idea, and that a steering committee of Tethics veterans should be formed to provide guidance and support.
After the lunch break it was time for John Danaher’s keynote titled “Do technologies disrupt moral paradigms?”, in which he looked at societal transformations induced / catalysed by technological breakthroughs such as the invention of the cannon. I found the talk highly enjoyable, although the effects of sleep deprivation were starting to get to me, so I wasn’t able to concentrate as fully as I would have liked. My own talk was in the session immediately after the keynote and went smoothly, with the lively follow-up discussion that I’ve come to expect from ethics conferences. I’ll post a summary of the paper later, once the proceedings have been published, but in a nutshell, it looks at how the concept of security is viewed by the AI ethics community (as opposed to the traditional cybersecurity community) and carries out a survey of AI incidents to get an idea of the real-world impact of security vulnerabilities in AI systems.
On the second day of the conference, I decided to sleep in and skip the first session; one badly slept night I can take, but not two in a row if I can help it, and after the conference dinner followed by drinks in a pub, the night was pretty much ruined to begin with, even though I didn’t stay out very late and kept my alcohol consumption very moderate. Therefore I took my time to get up and have breakfast at the hotel before hauling myself to the conference in time for Anna Metsäranta’s keynote on “Sustainable AI – from principles to practice”. It was good to have someone from industry shed light on how things are being done out there in the real world, so this was another highlight for me.
In the last session before the closing of the conference, I participated in the running of a workshop with the lofty title “The current state and future of technology ethics education in Finland”. To be quite honest, most of the work was done by Ville Vakkuri and Kai-Kristian Kemell and my own contribution was rather modest, but nevertheless, it was interesting to have this opportunity to share thoughts on this topic and to get ideas for enhancing the computer science and engineering curriculum in Oulu from the perspective of ethics. The question of timing is a particularly interesting one: when should ethics education be offered? At the very beginning of their studies, the students are perhaps not yet ready to absorb that kind of knowledge, but if we wait until after they’ve finished their bachelor’s studies, it may be too late already. Not everyone needs to be an ethics expert, of course, but I do believe that everyone should be exposed to enough ethics content during their studies to normalise the idea that awareness of ethics is part of what makes a good engineer.
Fast-forward about three weeks and I’m in Helsinki, on the island of Santahamina, in the auditorium building of the Finnish National Defence University for the annual seminar on the art of cyber warfare. Instead of an auditorium, the seminar took place inside a small studio set up with a green screen and a webcasting rig; initially, it felt somewhat silly to have travelled all the way there just to stream my presentation, but in the interest of making sure everything ran smoothly, it made perfect sense. Besides, it made the whole thing look a great deal more professional than having each speaker join from their home / office / wherever. My colleague Kimmo Halunen served as moderator, introducing the speakers and relaying audience questions submitted via chat.
The theme of this year’s seminar was AI on the battlefield, and I had been invited to speak on this theme with my AI ethicist hat on. Since I spend a fair amount of time discussing the ethics of autonomous weapons in one of the lectures of my AI ethics course, I decided to build on that and it worked out quite nicely. Somebody told me that there were close to 500 people online for the stream during my talk, and the feedback I’ve heard seems to indicate that it was well received. I’ve already been invited to contribute in some capacity to a couple of dissertations on autonomous weapons, which I’m taking as a sign that I made a positive impression and managed to get some actual successful networking done. The entire seminar (in Finnish) is available to view on YouTube, with my talk starting about 44 minutes in.
Now that I’m apparently finished with the speaking circuit for 2024, it’s a good time to reflect a bit. Based on my experience, I would say that I’m actually quite adaptable and versatile, capable of dealing effectively with a variety of audiences, but where I’m at my best – and what I also enjoy the most – is academic seminars. It’s like taking the best of both worlds from lectures and conference presentations: instead of being limited to the scope of a single paper, I get to draw broadly on my expertise and interests to prepare my talk, but I still get to speak primarily as a researcher rather than a teacher, so I can be more relaxed when it comes to the pedagogical aspect. I feel like I can really express myself within those parameters, and it’s always a delight to discover new avenues for that.
Speaking of self-expression, A Christmas Carol has been running for about a month now and is off to a very strong start: the reviews I’ve seen have been highly positive, and all 2024 performances have been sold out for a good while now. 2025 is very much a different matter, and I suppose it’s not surprising that people are much keener to see the play before Christmas than after, but hopefully those who didn’t manage to get tickets in time won’t lose interest altogether. It’s been great so far, but I suspect that we’re all going to be sick of carols by February, and it certainly won’t help if we’re singing them to an empty house. The demands of the play have been such that I’ve had to prioritise theatre over choir rather heavily, but I’ve managed to squeeze in just enough rehearsal time with Cassiopeia to sing in our Christmas concerts without embarrassing myself, so art-wise, it’s been quite a productive end of the year!
Christmas itself is just a couple of weeks away, so this is in all likelihood my last post of the year. As I’m writing this, I don’t yet have an employment contract for the coming year, but that’s hardly anything out of the ordinary and I expect it will be sorted out soon. If it’s not – well then, get in touch if you need someone to play some music or to give a talk on AI and I’ll get back to you with a quote, I guess?