After 20 performances, the time inevitably came to say goodbye to Ovllá. What a ride that was! Not quite as big a hit as The Magic Flute, but we had several fully sold-out shows on the weekends and perfectly respectable-sized audiences for the midweek shows as well. The newspaper Kaleva reported this week that nearly 10 000 people saw the opera, with 85% of all available seats booked – not bad at all! Both shows on the closing weekend were among the sold-out ones, and for the very last one we got probably the most enthusiastic crowd of the entire run; each of our big chorus scenes got its own round of applause, even the Kiruna scene in the first act, which had never happened before. Couldn’t have wished for a better finish.
Now that it’s over, I’m feeling pretty much as I expected to feel: there’s great satisfaction for a job well done, pride even, but underneath it is a profound sadness for having no more shows to look forward to and for having parted with some people, possibly for the last time in my life. It’s not exactly the Fellowship of the Ring splitting up, but we built something beautiful and important together and became a community in the process. I didn’t even interact with some of the Sámi soloists and other guest artists all that much, but it’s still rather poignant not knowing if I’ll ever see them again.
I also find myself wondering what will happen to the opera now, after the inaugural production. There’s been no indication of any plans for a second run of shows in Oulu, but will other opera houses or festivals pick it up now, perhaps in another Nordic country? In my admittedly biased opinion, it’s a work that really deserves to be staged again, so it would be tremendously sad if it were to be simply forgotten. I’d certainly be willing to travel quite far to see it performed somewhere else. I mean, it would be worth it even just for the chance to tell everyone there who’ll listen that I was on stage in the world premiere!
The empty space left by the end of Ovllá is not just an emotional one; in more concrete terms, it shows up in my calendar as well, with Friday evening this week being the first one in 2026 when I have no performance, no rehearsal, no engagement of any sort. To some extent, this situation is compensated for by the rehearsals for MASS now being in full swing, but for the next month or so it’s only two hours once a week and the weekends are all free. Once we start rehearsing together with all the performers – 220 of them! – it will be a very busy couple of weeks, but then that, too, will be all over.
Meanwhile at the university, there’s a rather exciting new development: we have a new employee! That in itself is, of course, hardly anything worth making a fuss about, but for me it’s very much a big deal because the employee in question is my new doctoral student, with whom I’ll be doing research on AI vulnerabilities for at least the next three years. It’s a school holiday week in northern Finland and many of my colleagues are taking some time off, including the co-PI of the project she was hired for, but I met up with her to welcome her into the house and get her onboarding started. Next week there will be more people on campus for her to meet and we can start doing some actual scientific work.
Besides the selection of our new researcher, the selection of students for international master’s programmes was completed in February as per usual. I was again a member of the evaluation team for the computer science and engineering programme. As was to be expected, the number of applications went down significantly now that the university is no longer allowed to subsidise the education of non-EEA students through blanket scholarship offerings, although the drop was not as sharp as last year when the application fee was introduced. What did surprise me somewhat was that this did not seem to have much of an impact on where the applications came from; I thought there would have been a noticeable shift towards countries whose citizens are exempt from paying the tuition fees, but based on the applications I reviewed, this did not seem to be the case.
Next week it’s time to put on my lecturer’s hat once again as the next edition of the AI ethics course gets rolling. I’m keeping the same course format as before, but there are some changes in the guest line-up, as well as some updates to the materials. There are a couple of topics in particular that I want to cover in more depth this year, both of them having to do with generative AI. One is how to use GenAI responsibly for the course assignments – it seems that no matter how many different ways I try to communicate the AI policy to the students, I will never achieve saturation, and besides, I don’t want to just tell them the rules but also give them some positive examples of what they can do within those rules.
The other topic I’ve been immersing myself in just this week: legal issues concerning generative AI, especially those on which there is already some existing case law. Mainly this has to do with copyright issues, such as the question of whether training AI models with copyrighted works constitutes fair use as defined in US copyright law – if not, some big companies could find themselves in some big trouble. However, there is also at least one case dealing with free speech, specifically whether the maker of a chatbot can invoke it as a defence to avoid liability for a wrongful death allegedly caused by the product. The judge in this case did not buy the argument, and in an earlier one, it was ruled that TikTok can’t claim Section 230 immunity for a fatal incident because of the active role played by the platform’s recommendation algorithm. It’s probably too early to say anything conclusive, but perhaps freedom of expression has its legal limits when it comes to AI, even in America.
It certainly seems that some kind of limits are in order; just today I read yet another story about someone whose interactions with a chatbot led to them taking their own life. Besides being extremely tragic in their own right, such cases are but the tip of the iceberg when it comes to harm resulting from the way people interact with these new AI systems. The interaction aspect is, I feel, an underexplored one in AI safety research – given how complex and versatile the systems are, there’s only so much that can be done to make them safe and reliable without considering the relationship between the system and the user. As it happens, our new hire has a background in human-computer interaction, so even though we didn’t get nearly as much funding for the project as we were hoping for, I think we have a real chance to do some good here. Evil megacorporations beware!