Still alive

I am indeed! Barely, but still. Once again blogging has been forced to take a back seat, but I thought I should do one more post before my vacation – which, happily, is right around the corner. No big deadlines before that, just some exam marking plus a bunch of writing that I can pick up from where I left off when I come back to work in August. Next week will be more like a half week because of the faculty’s staff summer party and the Midsummer weekend, and after that there’s just one week of work left before I’m free. Seems too good to be true! 

The AI ethics course is happily finished by now: lectures given, assignments evaluated, grades entered into Peppi. Again, it was a lot of work, but also rewarding and enjoyable. There are always at least a couple of students who really shine, turning in one excellent assignment submission after another, and those alone are enough to make it all worthwhile. However, a big part of the enjoyment is also that I can use the course as a test lab of sorts, changing things a bit and trying something new each time, seeing what works and what doesn’t. This time I made some changes to the assessment criteria and practices, which seemed to work, so I think I’ll continue in the same direction next year with the teaching development project that I need to do as part of my university pedagogy studies.

Of course, there are always new things happening in the world of AI, so the course contents also need some updating each year. This spring, for obvious reasons, the ethical implications of generative AI tools kept popping up under various course themes, and I also encouraged the students to try ChatGPT or some other such tool at least once to generate text for their assignment submissions. There were certain rules, of course: I told the students that they must document their use of AI, critically examine the AI outputs and take responsibility for everything they submit, including any factual errors or other flaws in AI-generated text. The results of the experiment were a bit of a mixed bag, but at any rate there were some lessons learned, for myself and hopefully for the students as well. If you can’t trust students to use AI ethically on an AI ethics course, where can you?

The most recent big news related to AI ethics is that the European Parliament voted this week to adopt its position on the upcoming AI Act, so the regulation is moving forward and it may well be that on next year’s course we will be able to tell the students what it looks like in its final form. The parliament appears to have made some substantial changes to the bill, expanding the lists of prohibited and high-risk applications and specifying obligations for general-purpose AI systems while making exemptions for R&D so as not to stifle innovation. It will be extremely interesting to see what the impact of the act will be – on AI development and use, of course, but also on AI regulation elsewhere in the world, since this is very much a pioneering effort globally. 

After my summer holiday I’ll need to hit the ground running, because I’m once again giving some AI ethics lectures as part of a learning analytics summer school. A new thing this year is that I’m also preparing an ethics module for a new Master’s programme in sustainable autonomous systems, a collaboration between my university and the University of Vaasa. I don’t mind the new challenge at all – I took it upon myself more or less voluntarily, after all – but it does mean that my job title is increasingly at odds with what I actually do. Still, I’ve managed to fit in some research as well, and starting in the autumn I’ll even be participating in a proper research project for a change.

One of the highlights of the spring is that I got a paper accepted to Tethics 2023 – or rather, I supervised a student who got a paper accepted, which feels at least as rewarding as if I’d done the research myself, if not more so. In any case, it looks like I’ll be visiting Turku for an ethics conference for the third year running, and I really wouldn’t mind if this became a tradition! I’m even looking forward to the networking aspect, which I’m usually pretty bad at. Somehow ethics conferences are different, and Tethics especially – partially because it’s so small, I suppose, but perhaps also because these people are my tribe?

Musically, the spring term was very successful. After The Magic Flute we appeared in two concerts with Oulu Sinfonia – one of them sold out – performing music by the late great Ennio Morricone. Sadly, we then parted ways with our musical director of many years, which forced some planned events to be cancelled, postponed or scaled down, but everyone seems determined to keep the motor running, and overall I feel pretty good about the future of the choir. There will be some big things happening late this year and early the next, including (but not limited to) another run of the opera in January and February. Three out of eleven shows are sold out already, so if you missed it this year, get your ticket now!

The final curtain

Happy 2023, I guess? I know it’s a bit ridiculous to be wishing that when we’re more than halfway into February already, but it is my first blog post of the year – I checked. In my defence, the beginning of the year has been pretty much exactly as intense as I feared it would be, with me trying my best to balance between my commitments to the university and the theatre. The first week of January was the absolute worst: I returned to work immediately after New Year, and that week we had rehearsals every night from Monday to Thursday. I was still suffering from the problem of sleeping badly after them, so the inevitable result was me being utterly knackered by Friday, which fortunately was a bank holiday, giving me a chance to recover before two more rehearsals on Saturday.

The following week we had dress rehearsals from Monday to Wednesday, Thursday night off and then the first two performances on Friday and Saturday. In terms of effort, it was hardly any easier than the previous week, but the thrill of the opening night more than made up for it all. After the first show we celebrated with some bubbly, and they even gave flowers to all of us chorus members. Sadly, mine suffered rather heavy damage on the way home, which involved a pit stop in a crowded bar that I ended up leaving before I even had a chance to order myself a drink; still, I was able to salvage the essential part of the poor abused plant and keep it looking nice for a good week.

After opening week, things got considerably less hectic, since there were no more rehearsals, just performances – first three per week, then down to two for the last couple of weeks. This weekend’s the final one, so around 4pm on Saturday the curtain will close on our production of The Magic Flute for the last time. All 15 performances sold out, and all the reviews I’ve seen have been very positive, so I guess it’s safe to say we’ve had a successful run! It’s been a wonderful experience for me personally as well, but I can’t deny that toward the end it has begun to feel more and more like work that I’m not getting paid for and that has made me put my other hobbies (not to mention my social life) largely on hold for quite a while. I’m very much looking forward to next Friday and my first commitment-free weekend of the year.

The big thing at work right now is evaluating applications to international M.Sc. degree programmes. This is the first time I’m involved in the process, and boy is it a trudge and a half. Sure, it’s interesting to get a sneak peek at some of the new students who may be joining us from around the world next autumn, but the work itself is first tedious, crawling through the mass of application documents to identify the most promising candidates, and then stress-inducing, doing interviews with each of them. I recently had a chat about this with a friend of mine who’s been in the IT consulting business for many years and interviewed his share of job applicants, and he said he finds interviews stressful because he can tell that the other person is nervous, so then he empathises with them and starts to feel their discomfort. Me being me, I get stressed about talking to new people even without that extra factor, so I’m going to be extremely glad once I’m done with my share of the interviews.

Something that’s turned out to be a blessing here is the Bookings app in Microsoft 365. This has been very helpful in scheduling the interviews: you just specify the times when you are available, make sure your calendar is up to date with your other appointments so you don’t get double bookings, and then send a link to the booking page to the people you want to invite and let them pick a time that works for them. Apparently in the past this has been done by tentatively selecting a date and time for each candidate, emailing it to them and asking them to email back with suggestions if the proposed time doesn’t suit them; I certainly don’t relish the idea of having that kind of administrative overhead on top of the actual evaluation work, even though it might have helped get the interviews spaced out more evenly and efficiently.

As usual, there’s no need to worry about running out of work to do in the spring either: the start of period IV is just three full weeks away, and with that comes the start of another run of the AI ethics course. I’ll count myself lucky if it doesn’t take up even more of my time than before; I’m the sole responsible teacher now, but on the other hand I will have a teaching assistant, and I also have some ideas for streamlining the evaluation of course assignments to make it less of a burden. Another thing to think about is my stance on ChatGPT and its ilk; certainly I’m going to discuss the technology and its implications in my lectures, but I’ll also need to decide what to do about the possibility of students using it to generate text for their assignment submissions. I’m leaning toward embracing it rather than discouraging or outright banning it – I don’t know how I’d enforce such a ban anyway – but if I go there, it’s not exactly trivial to come up with assignments that give everyone an equal opportunity to exploit the technology and demonstrate their learning to me.

“It belongs in a museum!”

After a three-week summer holiday, I returned to work last Monday. I say “returned to work”, but what I actually did was hop on a train and travel to Turku to attend the Ethicomp 2022 conference at the School of Economics. After two and a half days of hard conferencing, I departed for Oulu on Thursday afternoon, leaving only Friday as a “normal” workday before the weekend. I can imagine, and have in fact experienced, much worse ways to come back after a vacation! 

I felt more anxious than usual about my own presentation, scheduled for late afternoon on the first day. This was partially because I like to prepare and rehearse my presentations well in advance, but this time I hadn’t had time to finish my slides before my vacation, nor any inclination to work on them during it, so I more or less put my deck together on the train and then rehearsed the talk in my hotel room. On Tuesday I skipped the session immediately before mine to flick through my slides a few more times and make some last-minute tweaks, and I eventually emerged from my mental cocoon reasonably confident that I would get through the whole thing without stumbling.

I still wasn’t that confident about how the presentation would be received, because the paper I was presenting is probably the strangest one I’ve written to date. Long story short, one day I was preparing materials for the introductory lecture of the AI ethics course and explaining the concepts of moral agency (the status of having moral obligations) and patiency (the status of being the subject of moral concerns). Artificial things are traditionally excluded from both categories, but there is an ongoing debate in philosophy of AI about whether a sufficiently advanced AI system could qualify as a moral agent and/or patient. 

The idea that struck me was that if we let go of (organic) life as an analogy and view AI systems as cultural artifacts instead, we can sidestep the whole debate on whether AI can become sentient/conscious/whatever and make the moral patiency question a good deal more relevant to practical AI ethics in the here and now. After all, many people feel sad when an artifact of great cultural significance is destroyed (think Notre-Dame de Paris), and downright outraged if the destruction is wilful (think the Buddhas of Bamiyan), so it doesn’t seem too much of a stretch to argue that such artifacts have at least something closely related to moral patiency. Could an AI system also qualify as such an artifact? I filed the question in my brain under “ideas to come back to at an opportune moment”. 

The moment came in January: I wasn’t terribly busy with anything else right after the holidays, Ethicomp had a call for papers open and I only needed to write a 1500-word extended abstract to pitch my idea. I did wonder if it might be a bit too outlandish, which in retrospect was silly of me, I suppose – philosophers love outlandish ideas! The reviews were in fact fairly enthusiastic, and in the end my presentation at the conference was also well received. I was able to have some fun with it even, which is not something I often manage with my conference talks, and I soon got over my nagging feeling of being an impostor, a lowly computer scientist who arrogantly thinks he’s qualified to talk philosophy. 

In retrospect, I also have to say I did manage to turn that extended abstract into a pretty well-written full paper! It’s not officially published yet, but it argues that 1) yes, AI systems can be artifacts of considerable cultural significance and therefore intrinsically worthy of preservation, 2) they constitute a category of artifact that cannot be subsumed under a broader category without losing essential information about their special nature, and 3) this special nature should be taken into account when deciding how to preserve them. The argumentation is fairly informal, relying largely on intuition and analogy, but I’m quite proud of the way it’s built and presented nonetheless. Sure, the paper is only tangentially related to my daily work and is likely to be a total one-off, but even the one-offs can sometimes have a bigger impact than you’d expect – there’s another one of mine, also an ethics paper, that was published 15 years ago but is still getting citations.

Apart from surviving my own presentation, for me the highlight of the first day, and indeed the whole conference, was the keynote Scaling Responsible Innovation by Johnny Søraker. I’d met Johnny before on a couple of occasions, originally at the ECAP 2006 conference in Trondheim where he was one of the organisers, but hadn’t seen him for ages. Turns out he’s now working as an AI ethicist for Google, which the more cynically minded among us might remark sounds like a contradiction in terms, but be that as it may, he gave an insightful and entertaining talk on the challenges faced by SMEs wanting to do responsible innovation and how they can address those challenges. I particularly liked the idea of having an “interrupt”: someone who is kept informed of everything going on in the company and has been trained to spot potential ethics issues. The obvious advantage is that it doesn’t matter how convoluted or ad-hoc the innovation process is – as long as there is this one node through which everything passes at some point, risks can be identified at that point and brought to the attention of someone qualified to make decisions on how to mitigate them. 

Among the regular presentations there were several AI-related ones that I found very interesting. The one that resonated with me the most was Sara Blanco’s talk, in which she criticised what might be called a naive, “one-size-fits-all” conception of AI explainability and argued for a more nuanced one that acknowledges the need to account for differences in background knowledge and prior beliefs in the formulation of explanations. In light of my recent exposure to constructivist theories of learning, which likewise emphasise the effect of the learner’s existing knowledge structures on the process of integrating new knowledge into those structures, this made a great deal of sense to me. Outside the realm of AI, I very much enjoyed Reuben Kirkham’s talk on how the unusual relationship between academia and industry in computer science affects academic freedom, as well as Michael Kirkpatrick’s on the problematic nature of direct-to-consumer genomic testing services such as 23andMe, something I’ve brought up myself in my data ethics lectures.

The social programme was top notch too. On Wednesday evening we were first treated to a glass of sparkling and some live classical music at the Sibelius Museum, where we had about an hour to roam and explore the collections, which even included some instruments for visitors to try out – I couldn’t resist having a go on the Hammond organ, of course. After this we enjoyed a very tasty three-course dinner, with more live music, at restaurant Grädda next door. From the restaurant we proceeded to a pub for more drinks and chats, and when the pub closed, some of my fellow delegates went to find another one to have a nightcap in, but by that point I was quite ready for bed myself so I headed straight to my hotel. 

This was my first Ethicomp conference, but I certainly hope it wasn’t my last. I’ve always found philosophy conferences highly stimulating, as well as welcoming to people of diverse academic backgrounds, so despite my anxieties, me not being a “proper” philosopher has never been a real issue. After CEPE 2009 I more or less lost touch with the tech ethics community for a whole decade, but recently I’ve been sort of working my way back in: first there was the special session at IEEE CEC 2019, then Tethics 2021, and now this. Ethicomp in particular is apparently the one that everyone in the ethics of computing community wants to go to, and having now been there myself, I can see why. The next one will be in 2024, so I guess I have about a year and a half to come up with another weird-but-compelling idea? 

That’s a wrap, folks

A paper I wrote with Alan Smeaton, titled “Privacy-aware sharing and collaborative analysis of personal wellness data: Process model, domain ontology, software system and user trial”, is now published in PLOS ONE. In all likelihood, this will be the last scientific publication to come out of the results of my MSCA fellowship in Dublin, so I’m going to take the risk of sounding overly dramatic and say it kind of feels like the end of an era. It took a while to get the thing published, which makes it feel all the better to finally be able to put a bow on that project and move on to other things.

So what’s next? More papers, of course – always more papers. As a matter of fact, the same week that I got the notification of acceptance for the PLOS ONE paper, I also got one for my submission to Ethicomp 2022. As seems to be the procedure in many ethics conferences, the paper was accepted based on an extended abstract and the full paper won’t be peer-reviewed, so as a research merit, this isn’t exactly in the same league as a refereed journal paper. However, since the conference is in Finland, I figured that the expenditure would be justifiable and decided to take this opportunity to pitch an idea I’d been toying with in my head for some time. 

To be quite honest, this was probably the only way I was ever going to write a paper on that idea, since what I have right now is just that: an idea, not the outcome of a serious research effort but simply something I thought might spark an interesting discussion. Since I only needed to write an extended abstract for review purposes, I could propose the idea without a big initial investment of time and effort, so it wouldn’t have been a huge loss if the reviewers had rejected it as altogether too silly, which I was half expecting to happen. However, the reviewers turned out to agree that the idea would be worth discussing, so Turku, here I come again! That’s the beauty of philosophy conferences in my experience – they’re genuinely a forum for discussion, and I’ve never felt excluded despite being more of a computer scientist/engineer myself, which I presume has a lot to do with the fact that philosophers love to get fresh perspectives on things.

The idea itself is basically an out-of-the-box take on the notion of moral patiency of AI systems, and I will talk about it in more detail in another post, probably after the conference. Meanwhile, a follow-up to our Tethics 2021 paper on teaching AI ethics is at the planning stage, and I have the idea for yet another AI ethics paper brewing in my head. Since I returned to Finland and especially since I started working on the AI ethics course, I’ve been trying to raise my profile in this area, and I have to say I’m fairly pleased at how this is turning out. Recently I had a preliminary discussion with my supervisor about applying for a Title of Docent with AI and data ethics as my field of specialisation, although I haven’t actually started preparing my application yet. 

The AI ethics course is now past the halfway point in terms of lecturing, and my own lectures are all done. I started this year’s course with my head full of new ideas from the university pedagogy course I recently completed, and some of them I’ve been able to put to good use, while others have not been so successful. I’ve been trying to encourage the students to participate more during lectures instead of just passively listening, and low-threshold activities such as quick polls seem to work pretty well, but my grand idea of devoting an entire teaching session to a formal debate met with a disappointing response. I don’t very much like the idea of forcing the students to do things they’re not motivated to do or don’t feel comfortable with, but I also don’t have a magic trick for enticing the students out of their comfort zone, so I’m not sure what to do here. I suppose I could settle for the small victories I did manage to win, but I still think that the students would really benefit from an exercise where they have to interact with one another and possibly adopt a position they don’t agree with. Oh well, I have another year now to come up with new ideas for them to shoot down. 

Meanwhile, in the choir things are getting fairly intense, with three rehearsal weekends over the past four weeks, two for the whole choir and one for just the tenor section – although to be quite honest, during the latter we sang a grand total of one of the songs included in the set of the spring concert. We also have performances coming up on May Day and in the university’s Doctoral Conferment Ceremonies on the 28th of May, so there’s a lot of material to go through over the next month and a half. Immediately after the March rehearsal weekend I tested positive in a COVID home test, so the dreaded bug finally caught up with me, something I’d been expecting for a while actually. It was a mild case, but still unpleasant enough that I wouldn’t fancy finding out what sort of experience it would be without the vaccine.

While on the subject of music, I can’t resist mentioning that I signed up to sing in the chorus in a production of The Magic Flute in January-February next year! That’s a first for me – I’ve been in the audience for plenty of operas, but never on the stage. I’m slightly dreading the amount of time and effort this will require, but in the end I just couldn’t pass up the opportunity. There is still the caveat that if there are more people eager to sing than there are open positions, we may have to audition, but an oversupply of tenors is not a problem that frequently occurs in the choral world. The rehearsal period won’t start until much later in the year, but I’m already a little bit excited at the prospect! 

Слава Україні (Glory to Ukraine)

…yeah. So. This post is going to be rather different from what I usually write about. I certainly didn’t expect when I started the blog that I’d end up covering stuff like this one day, but the plight of Ukraine is making it hard to concentrate on other things, so I may as well try and channel that anxiety into something productive. 

I won’t pretend to be even remotely qualified to make sense of all the information going around about how the Russian invasion is progressing, so what I can say with reasonable confidence basically amounts to “things are bad, but not as bad as they could be”. Among the more qualified, there seems to be a consensus that whatever the attackers have gained so far, it’s not as much as they expected and has cost them more than they expected. I can’t say I’m terribly optimistic about the eventual outcome of the war – Russia has plenty more resources to throw at Ukraine I’m sure – but it is heartening to see the Ukrainians fight back with such grim determination and the rest of Europe rally to the cause with such enthusiasm. Big protests everywhere, even in Russia where participating in one is a good way to land in jail. 

There were two pro-Ukraine demonstrations here in Oulu during the past weekend, a smaller one with a few dozen participants on Saturday and a bigger one with several hundred on Sunday. I attended both, although I left the Saturday one pretty soon after arriving because I wasn’t really dressed for it and started to freeze my toes off. Even without the physical discomfort, the pleas of the local Ukrainian community weren’t easy to listen to as the speakers struggled to make words come out instead of sobs. As I walked away, I was very much aware of how privileged I was to be able to go to a cosy pub to get my feet warm and enjoy a pint without being in constant fear of news that a family member or friend has been killed. 

It’s not just protests either, but imposing huge economic sanctions on the aggressors and supplying the defenders with weapons and intel. It’s frankly amazing how easy it ultimately was to get the entire European Union behind the package; even if you don’t factor in Russian efforts to sow discord among the member states, normally you’d expect it to take ages to get everyone to agree on something of this magnitude, but somehow we went from “endless internal bickering” to “united against a common enemy” in a matter of days. Even Switzerland has broken with its tradition of neutrality, and my own country decided yesterday to go against an established policy of not exporting weapons to conflict zones. Call me naïve, but I doubt this is something the Kremlin was counting on to happen when the invasion was launched. 

To continue my layman speculation, while I fear that Ukraine may eventually be forced to capitulate, I’m not so sure that this will be more than a Pyrrhic victory for Putin. If the objectives of the “special military operation” are taken, what does that achieve in the long run? Is this supposed to persuade Ukraine to return to the fold of Mother Russia like a prodigal son, as the propaganda suggests? Good luck trying, with a crippled economy, to control a nation of 40+ million people who 1) are evidently full of fighting spirit, and 2) hate your guts for what you’ve done to them.

The list of responses to the invasion goes on and on; one of the more creative ones I’ve heard of is dog walkers in Helsinki picking up their pets’ waste and chucking it onto the grounds of the Russian embassy. Boycotts and condemnations have been announced in various fields of business, sports, culture… Academia, too: I’m pleased to report that my university has joined all other Finnish universities in supporting Ukraine and condemning Russia’s actions. The open letter signed by thousands of Russian scientists and science journalists opposed to the invasion is also very welcome, but even so, I don’t see how I could, under the circumstances, have any involvement in a scientific conference taking place in Russia or Belarus, for example. 

Meanwhile, I also need to do the part of my job that involves talking about things I actually know about. The second ever implementation of the AI ethics course starts in two weeks, and although planning it is not such a huge effort now compared to last year when we were creating everything from scratch, there’s still a fair bit of work to do. The university pedagogy course I’ve been taking has given me a few new ideas to try – I hope I can get them to work the way I’m envisioning. We’ve again managed to recruit a great line-up of visiting experts, too, so on the whole I have a pretty good feeling about this.

The choir has been operating more or less normally since the beginning of February, although last week we had to change some plans, once again because of COVID. A small group of singers, myself included, even got to do a gig at a private function, which was extremely refreshing. All of the big concerts we had planned for the spring term have been postponed, but instead we’re now rehearsing songs for a concert in May, the overarching theme of which happens to be death. When the choirmaster first told us about this idea, I found it quite amusing because of a rather dark inside joke running among some past and present colleagues of mine; it seems less funny now, but I really love the music, and hopefully by the date of the concert it won’t be quite so topical anymore. 

Words and music

The proceedings of Tethics 2021 are now available for your viewing pleasure at ceur-ws.org. This means that both of the papers I presented during my two-conference streak in October are now (finally!) officially published! Although I’ve mentioned the papers in my blog posts a few times, I don’t think I’ve really talked about what’s in them in any detail. Since they were published at more or less the same time, I thought I’d be efficient/lazy and deal with both of them in a single post. 

At Tethics I presented a paper titled “Teaching AI Ethics to Engineering Students: Reflections on Syllabus Design and Teaching Methods”, written by myself and Anna Rohunen, who teaches the AI ethics course with me. As the title suggests, we reflect in the paper on what we took away from the course, addressing the two big questions of what to teach when teaching AI ethics and how to teach it. In the literature you can find plenty of ideas on both but no consensus, and in a sense we’re not really helping matters since our main contribution is that we’re throwing a few more ideas into the mix. 

Perhaps the most important idea that we put forward in the paper is that the syllabus of a standalone AI ethics course should be balanced on two axes: the philosophy-technology axis and the practice-theory axis. The former means that it’s necessary to strike a balance between topics that furnish the students with ethical analysis and argumentation skills (the philosophy) and those that help them understand how ethics and values are relevant to the capabilities and applications of AI (the technology). The latter means that there should also be a balance between topics that are immediately applicable in the real world (the practice) and those that are harder to apply but more likely to remain relevant even as the world changes (the theory). 

The paper goes on to define four categories of course topics based on the four quadrants of a coordinate system formed by combining the two axes. In the philosophy/theory quadrant we have a category called Timeless Foundations, comprising ethics topics that remain relatively stable over time, such as metaethics and the theories of normative ethics. In the philosophy/practice quadrant, the Practical Guidance category consists of applied ethics topics that AI researchers and practitioners can use, such as computer ethics, data ethics and AI ethics principles. In the technology/practice quadrant, the Here and Now category covers topics related to AI today, such as the history and nature of AI and the ethical issues that the AI community is currently dealing with. Finally, the technology/theory quadrant forms the category Beyond the Horizon, comprising more futuristic AI topics such as artificial general intelligence and superintelligence. 

A way to apply this categorisation in practice is to collect possible course topics in each category, visualise them in a figure with the two orthogonal axes, and draw a bubble to represent the intended scope of the course. A reasonable way to start is a rough circle centred somewhere in the Here and Now quadrant, resulting in a practically oriented syllabus that you can stretch towards the corners of the figure if time allows and you want to include, say, a more comprehensive overview of general ethics. The paper discusses how you can use the overall shape of the bubble and the visualisation of affinities between topics to assess things such as whether the proposed syllabus is appropriately balanced and what additional topics you might consider including.
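To make the bubble idea concrete, here is a minimal sketch of what drawing such a figure could look like in Python with matplotlib – my own illustration rather than anything from the paper, with the topic names and coordinates invented as examples:

    import matplotlib.pyplot as plt

    # x axis: philosophy (-1) to technology (+1); y axis: theory (-1) to practice (+1).
    # One example topic per quadrant; positions are made up for illustration.
    topics = {
        "Normative ethics":     (-0.8, -0.6),  # Timeless Foundations
        "AI ethics principles": (-0.5,  0.7),  # Practical Guidance
        "Algorithmic bias":     ( 0.6,  0.6),  # Here and Now
        "Superintelligence":    ( 0.7, -0.7),  # Beyond the Horizon
    }

    fig, ax = plt.subplots()
    for name, (x, y) in topics.items():
        ax.plot(x, y, "o", color="black")
        ax.annotate(name, (x, y), textcoords="offset points", xytext=(6, 6))

    # The intended scope of the course: a rough circle in the Here and Now
    # quadrant, which you can later stretch towards the corners if time allows.
    ax.add_patch(plt.Circle((0.55, 0.55), 0.5, fill=False, linestyle="--"))

    ax.axhline(0, color="grey", linewidth=0.5)
    ax.axvline(0, color="grey", linewidth=0.5)
    ax.set_xlabel("philosophy <-> technology")
    ax.set_ylabel("theory <-> practice")
    ax.set_xlim(-1.2, 1.2)
    ax.set_ylim(-1.2, 1.2)
    plt.show()

Stretching or moving the dashed circle then corresponds to adjusting the scope of the syllabus, with the topics falling inside it being the ones you’d plan to cover.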

On teaching practices the paper offers some observations on what worked well for us and what didn’t. Solidly in the former category is using applications that are controversial and/or close to the students’ everyday lives as case studies; this we found to be a good way to engage the students’ interest and to introduce them to philosophical concepts by showing how they manifest themselves in real-world uses of AI. The discussion on Zoom chat during a lecture dedicated to controversial AI applications was particularly lively, but alas, our other attempts at inspiring debates among the students were not so successful. Online teaching in general we found to be a bit of a double-edged sword: a classroom environment probably would have been better for the student interaction aspect, but on the other hand, with online lectures it was no hassle at all to include presentations, demos and tutorials by guest experts in the course programme. 

The other paper, titled “Ontology-based Framework for Integration of Time Series Data: Application in Predictive Analytics on Data Center Monitoring Metrics”, was written by myself and Jaakko Suutala and presented at KEOD 2021. The work was done in the ArctiqDC research project and came about as a spin-off of sorts, a sidetrack of an effort to develop machine learning models for forecasting and optimisation of data centre resource usage. I wasn’t the one working on the models, but I took care of the data engineering side of things, which wasn’t entirely trivial: the required data was kept in two different time series databases, and for a limited time only, so the ML person needed an API that they could use to retrieve data from both databases in batches and store it locally, accumulating a dataset large enough to enable training of sufficiently accurate models.

Initially, I wrote separate APIs for each database, with some shortcut functions for the queries that were most likely to be needed a lot, but after that I started thinking that a more generic solution might be a reasonably interesting research question in itself. What inspired this thought was the observation that while there’s no universal query language like SQL for time series databases, semantically speaking there isn’t much of a difference in how the query APIs of different databases work, so I saw an opportunity here to dust off the old ontology editor and use it to capture the essential semantics. Basically I ended up creating a query language where each query is represented by an individual of an ontology class and the data to be retrieved is specified by setting the properties of this individual.

To implement the language, I wrote yet another Python API using a rather clever package called Owlready2. What I particularly like about it is that it treats ontology classes as Python classes and allows you to add methods to them, and this is used in the API to implement the logic of translating a semantic, system-independent representation of a query into the appropriate system-specific representation. The user of the API doesn’t need to be aware of the details: they just specify what data they want, and the API then determines which query processor should handle the query. The query processor outputs an object that can be sent to the REST API of the remote database as the payload of an HTTP request, and when the database server returns a response, the query processor again takes over, extracting the query result from the HTTP response and packaging it as an individual of another ontology class. 
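To give a flavour of the pattern – this is a hedged sketch of my own, not the actual project code, and the ontology IRI, class and property names are all hypothetical – a query individual and its translation method might look something like this:

    from owlready2 import Thing, DataProperty, get_ontology

    onto = get_ontology("http://example.org/ts-query.owl")  # hypothetical IRI

    with onto:
        class TimeSeriesQuery(Thing):
            # Owlready2 treats ontology classes as Python classes, so methods
            # defined here are ordinary Python methods; this one translates
            # the system-independent individual into a (much simplified)
            # InfluxDB-style query string.
            def to_influx(self):
                return (f'SELECT "{self.field[0]}" FROM "{self.measurement[0]}"'
                        f" WHERE time >= '{self.start_time[0]}'")

        class measurement(DataProperty):
            domain = [TimeSeriesQuery]
            range = [str]

        class field(DataProperty):
            domain = [TimeSeriesQuery]
            range = [str]

        class start_time(DataProperty):
            domain = [TimeSeriesQuery]
            range = [str]

    # Each query is an individual of the ontology class; the data to be
    # retrieved is specified by setting the individual's properties.
    q = TimeSeriesQuery("cpu_query")
    q.measurement = ["server_metrics"]
    q.field = ["cpu_load"]
    q.start_time = ["2021-11-01T00:00:00Z"]
    print(q.to_influx())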

Another thing I love besides ontologies is software frameworks with abstract classes that you can write your own implementations of, and sure enough, there’s an element of that here as well: the API is designed so that support for another database system can be added without touching any of the existing code, simply by implementing an interface provided by the API. It’s hardly a universal solution – it’s still pretty closely bound to a specific application domain – but that’s something I can hopefully work on in the future. The ArctiqDC project was wrapped up in November, but the framework feels like it could be something to build on, not just a one-off thing.
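In Python terms, that kind of extension point boils down to something like the following – again a hypothetical sketch of mine rather than the project’s actual interface:

    from abc import ABC, abstractmethod

    class QueryProcessor(ABC):
        """The interface to implement when adding support for a new database."""

        @abstractmethod
        def build_request(self, query):
            """Translate a semantic query individual into an HTTP request payload."""

        @abstractmethod
        def parse_response(self, http_response):
            """Extract the query result and package it as a result individual."""

    # A new backend (an imaginary one here) plugs in without touching existing code:
    class ImaginaryDBProcessor(QueryProcessor):
        def build_request(self, query):
            return {"metric": query.field[0], "from": query.start_time[0]}

        def parse_response(self, http_response):
            return http_response.json()["data"]

The framework can then dispatch each query to whichever processor handles its target system, which is what keeps the existing code untouched when new backends are added.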

In other news, the choir I’m in is rehearsing Rachmaninoff’s All-Night Vigil together with two other local choirs for a concert in April. It’s an interesting new experience for me, in more than one way – not only was I previously unfamiliar with the piece, I had also never sung in Church Slavonic before! It turns out that the hours and hours I spent learning Russian in my school years are finally paying off, albeit in a fairly small way: the text has quite a few familiar words in it, I can read it more or less fluently without relying on the transliteration, and the pronunciation comes to me pretty naturally even though my ability to form coherent Russian sentences is almost completely gone by now. It’s still a challenge, of course, but also a beautiful piece of music, and I’m already looking forward to performing it in concert – assuming, of course, that we do get to go ahead with the performance. Because of tightened COVID restrictions, we won’t be able to start our regular spring term until February at the earliest, so I’m not taking anything for granted at this point… 

I’m an ethicist, get me out of here

Summer seems to have impeccable timing this year: on Friday I came back from my vacation and immediately the temperature dropped by about ten degrees and it started raining. Certainly helped me feel less bad about spending the day indoors! Until then, July had been so consistently hot and sunny that it was almost enough to make you forget what a more typical Finnish summer looks like. Today in Oulu it’s +15°C and raining again, but the weather should get nicer toward the weekend, which is fortunate since I have some tickets booked for outdoor concerts.

“Officially”, I was still on vacation all week last week – not that it makes much of a difference, since for now I’m still working from home; the university is currently not explicitly recommending remote work, but the city of Oulu is, and anyway all of my closest colleagues are still on vacation, so there doesn’t seem to be much point in going to the campus since I wouldn’t find anyone there to socialise with. Besides, given the most recent news about the development of the COVID situation, it may be best to wait until after the university’s response team has convened to see if there’s any update to the instructions currently in effect. 

The reason why I worked on Friday – I could get used to a one-day work week, by the way – is a happy one: a paper of mine got accepted to the 13th International Conference on Knowledge Engineering and Ontology Development, and the camera-ready version of the manuscript was due on July 30. The version submitted for review was ten pages long and was accepted as a short paper, which technically meant that the final version should have been two pages shorter, but I used the loophole of paying extra page charges and ended up adding a page so I could meaningfully address some of the reviewers’ suggestions. 

Already at the very beginning of my vacation I had received the pleasant news that another paper had been accepted to the Conference on Technology Ethics, so that’s a double whammy for the month of July! In fact, not only was the manuscript accepted – it received all “strong accept” ratings from the reviewers, which is surely a career first for me. What’s particularly exciting is that while all of the details are still TBA, it looks like the conference is going to be organised as an actual physical event in the city of Turku, which means that I may get to go on my first conference trip since 2019! I would certainly appreciate the opportunity to visit Turku, since it’s a city I’m way too unfamiliar with, having been there only once for a couple of days for work. 

I’m giving my next lecture on AI ethics already on Thursday, with two more to follow later in August, as part of a 10 ECTS set of courses in learning analytics. There seems to be no escaping the topic for me anymore, but I don’t exactly mind; it’s actually kind of cool that I’ve managed to carve myself a cosy little niche as a local go-to guy for things related to computing and ethics. Really the only problem is that I don’t always get to spend as much time thinking about ethics as I’d like to, since there are always other things vying for my attention. Generally those other things represent where the bulk of my salary is coming from, so then I feel guilty about neglecting them – but at the same time I’m increasingly feeling that the ethics stuff may be more significant in the long run than my contributions to more “profitable” areas of research.

Last spring term, during the AI ethics course, I was unhappy about it eating up so much of my time, and indeed for a while I barely had time for anything else. It didn’t help matters that the course kept spilling into what should have been my free time, but if you look at the big picture, you could say with some justification that it’s not the ethics eating up time from everything else but the other way around. Now I just need to find someone who’s willing to pay me a full salary for philosophising all day long…

The time is now, the day is here

This month of Maying is coming to an end on an unexpectedly positive note: I’m getting my first shot of COVID vaccine this weekend! Unexpected in that not too long ago it was still estimated that in my city and for my age group the vaccinations would start in the week starting on the 7th of June, so we got there a couple of weeks early. I’m not complaining of course, although I can’t help wondering what’s behind this surprise schedule speed-up – I certainly hope it’s not that the people in age brackets above mine have suddenly turned into conspiracy theorists. Pretty much everyone I know in my bracket rushed to make their reservations right away and then complained about how badly the reservation system was working, which I’m going to optimistically interpret as a sign of the system being under exceptionally heavy load (as opposed to just being rubbish).

Another thing that’s coming to an end is the AI ethics course. Since the lectures were finished a few weeks ago, the work has consisted of grading assignments and doing miscellaneous admin – still a good deal of work, but it no longer feels like it’s hogging all of my available time and energy. It seems that many of the students have also found the course surprisingly laborious, so adjusting the workload could be something to consider in the future, but I guess a part of it may be that the students are not that used to the kind of work we had them do, with lots of writing assignments where they are expected to discuss non-engineery things like ethical principles and values. Presumably a more traditional course with an exam at the end would have been easier for both us and them, but to me that doesn’t seem like a very good way to teach a subject where, a lot of the time, there are no right answers. The time for proper stock-taking is later, but I feel like we were pretty successful in designing a course that challenges the students on their ability to build and defend arguments and not just on their ability to absorb information. 

It’s just as well that the course isn’t eating up all of my hours anymore, because there definitely isn’t any shortage of other things to do. It’s not even the only teaching thing I’m working on at the moment: there’s another course where I need to do some grading of exam answers, plus an upcoming one on learning analytics where I’m committed to giving some lectures on ethics, plus there are always students with Bachelor’s/Master’s theses to supervise. On top of that, I’m somehow finding some time for research – I’ve not just one but two manuscripts due to be submitted soon, which is a very welcome development after all of 2020 zoomed by without me getting a single new paper out. On top of that, a big funding proposal that had been dormant for a while is now very much awake again, and pressure is high to get it done before July comes and everyone buggers off to their summer hols. 

What happens after July is an interesting question. With the vaccinations progressing well – more than half of the adult population have had at least one jab already – it looks like there’s a good chance that the recommendation to work from home will be dropped and we’ll be going back to normal in August. The thing is, after close to a year and a half of working remotely, I’m not at all sure that going to the office is going to feel all that normal! I suppose we’ll get used to it, like we got used to the current situation, but it may take a while. There’s a lot to be said in favour of remote work, even when there isn’t a contagious disease to worry about, so I’m guessing there will be a period when everyone is figuring out the right balance between office days and remote days. In the end, perhaps work will be a bit better as a result of all this; I’m sure there are tons of academic papers to be written on the subject, but that’s a job for other people – I’ll stick with my diet of computer science and philosophy. 

The new black

The new AI ethics course is now officially underway – actually, we’re close to the halfway mark already, with three out of eight lectures done. I’ve been chiefly responsible for all three, which has kept me thoroughly busy for pretty much all of March, and I’ve seldom felt as deserving of the upcoming long weekend as I do right now. Zoom lecturing, which I had my first taste of in the autumn term, still feels weird but I’m getting used to it. Typically none of the students will have their camera on, and it’s hopeless to try to gauge how an audience of black rectangles is receiving you unless they go to the bother of using reactions. Perhaps a year of online classes hasn’t been enough time for a new culture of interaction to emerge organically – or perhaps this is the new culture, but that sounds kind of bleak to me and I hope it’s not true. 

I’m sure I could have done some things better to foster such a culture myself; I’m fully aware that I’m not the most interactive sort of teacher. On the other hand, I’m firmly of the opinion that teaching applied ethics without having any ethical debates would be missing the point, so we’ve been trying to come up with various ways to get the students sharing and discussing their views. We’ve had some success with supplementary sessions where a short presentation expanding on a minor topic of the main lecture seeds a discussion on related ethical issues, and there has also been some action on the Zoom chat, especially during last week’s lecture on controversial AI applications. It helps that there are many real-world controversies available for use as case studies: people will often have a gut reaction to these, and by analysing that it’s possible to gain some insight into ethics concepts and principles that might otherwise remain a bit abstract. 

Although the course has been a lot of work, some of it in evenings and weekends, it’s also been quite enjoyable, not counting the talking-at-laptop-camera-hoping-someone-is-listening part. Ethics isn’t exactly my bread and butter, so preparing materials for the course has required me to learn a little bit about a lot of different things, which suits me perfectly – I’m a bit of a junkie for knowledge in general, and I’ve never been one to focus all my efforts on a single interest. My eagerness to dabble in everything has probably worked to my disadvantage in research, since we’re way past the days when one person could be an expert in every field of scholarship, but I think it serves me well here. On the other hand, the mental stimulation I’ve been getting from looking into all these diverse topics has also given me all sorts of ideas for new papers I could write. The most laborious part of the course for me is over now, with my co-lecturer plus some guests taking over for most of the remaining lectures, so I may even have time and energy to actually work on those papers after I’ve had a bit of R&R.

In my latest lecture I talked about the relationship between AI and data. Here I was very much on home ground, since pretty much my whole academic career has revolved around this theme, so it wasn’t hard to come up with a number of fruitful angles to look at it from. I ended up using the ever-popular “new oil” metaphor for data quite a lot; I actually kind of hate it, but it turns out that talking about the various ways in which data is or isn’t similar to oil makes a pretty nifty framing device for a lecture on data ethics. Data is like oil in that it’s a highly valuable resource in today’s economy, it powers a great many (figurative) engines, and it needs to be refined in order to be of any real value. On the other hand, data is not some naturally occurring resource that you pump or dig out of the ground: it’s created by people, and often it’s also about people and/or used to make decisions that affect people, which is where data ethics comes in. 

None of these are very original observations I’m afraid, but perhaps it’s good to say them out loud all the same. If I do have a more novel contribution to add, it might be this: both oil and data have generated a lot of wealth, but over time we have come to regret using them so carelessly. With oil, we are working to reduce our dependence by adopting alternatives to petroleum-based energy sources and materials, but with data, I’m not sure that the idea of an alternative even makes sense, so it looks like we’re slated to keep using more and more of it. This makes it ever more important that we all learn to deal with it wisely – individuals, enterprises and governments alike. The economic value of data is well established by now, so maybe it’s time to pay more attention to other values? 

Happy(?) anniversary

Two weeks ago I celebrated the one-year anniversary of my return to Finland. Well, I didn’t actually celebrate as such – it was a Tuesday like any other. Looking back to that day in 2020, I can’t help but find the contrast of expectation versus reality slightly amusing; I’d decided to travel home in style and booked a business-class ticket, so there I was, lounging in my comfy seat with a pleasant warmth spreading inside me from a nice hot breakfast, complimentary champagne, memories of Ireland and thoughts of all the good things ahead now that I was coming home for good. Little did I know! 

I don’t know how many people would agree with me on this, but considering how quickly this first full year back in Finland has zoomed by (no online meetings pun intended), I have to conclude that time does actually fly even under the present circumstances. Finland, of course, has had it a good deal easier than a lot of other countries, and the summer was even verging on normal, although I did have to cancel my planned trip to the UK and I’m not hugely optimistic about the chances of it happening this year either. The end of the year, I’ll admit, was a bit rough, but then, it tends to be wearying even in the best of times so I can’t blame it all on the pandemic. 

There was something satisfyingly symbolic about the way the year changed. I spent New Year’s Eve at home, accompanied by my pet rabbit, entertaining myself by watching a Jean-Michel Jarre concert that was virtual in more than one sense: besides being an online-only event, the video stream didn’t even show Jarre performing in a physical location but rather an avatar of him in a VR environment based on Notre-Dame de Paris. (Another Ireland memory there – one of the songs I rehearsed with the DCU Campus Choir was a short tribute piece written by an Irish composer after the April 2019 fire.) The weather, having been kind of iffy all December, took a wintry turn during the night and it began to snow heavily, as if to wipe the slate clean for the coming year. By noon the following day the world had turned so gloriously white that I felt compelled to go out on my bike and take some pictures. 

For some reason – well, for a number of reasons I suppose – I’ve found it quite hard to get any kind of writing done in the past couple of months. I wanted to do some work on my rejected manuscript during the Christmas break, but I struggled to find the motivation and finally got it submitted to another journal just a couple of weeks ago. Last week I finished my share of the work for the latest run of our Towards Data Mining course, so with those two major items ticked off my to-do list and the kick-off of the new AI ethics course still a month away, I felt justified to turn my attention to the blog, which I’ve been neglecting (again). 

Ah yes, the ethics course. I say “still a month away”, but in reality I’m already getting stressed about it. It’s coming along pretty well, but it’s still far from ready for launch, and I keep worrying that it’s going to fail spectacularly because of some rookie mistake. Feeling nervous about lecturing is one thing, but there’s a lot more to prepare than just an individual lecture or two. On top of that it’s all being created more or less from scratch, and this whole online teaching thing is also still kind of new and in the process of taking shape, so there are dozens of critically important things that we might get all wrong or just completely forget to do – in my mind at least, if not necessarily in reality.

I am very much enjoying preparing my lectures, though. Perhaps the biggest problem with the subject matter is that as much as I love philosophy, it can be a bit of a rabbit hole: once you get started with questioning your assumptions, and the assumptions behind those assumptions, you’ll soon find yourself questioning everything you believe in, which isn’t a great place to be when you’re supposed to be confidently imparting knowledge to others. On an applied ethics course it wouldn’t make sense to spend a lot of time exploring ethical theories that are of little relevance to the sort of issues the students can expect to encounter in the real world – and I wouldn’t be qualified to teach those anyway – but it also wouldn’t seem right to just handwave all the theory away and discuss the issues on an ad-hoc basis. 

What’s needed here is a framework that makes it possible to make meaningful normative statements and have a productive debate about them without taking forever to set up. As I was thinking about this recently, I was struck by the realisation that it’s actually pretty amazing that we are, in fact, able to have meaningful discussions about ethics, considering that there are some very fundamental things about it that we can’t agree on. Put two random people together and they may hold radically different views on the foundations of ethics, yet the odds are that each of them uses ethical concepts in a way that’s perfectly recognisable to the other. Theoretically, you could argue that ethical statements are completely subjective or even essentially meaningless, but it’s hard to sustain such arguments when you look at how well, in reality, we are able to understand each other on matters of right and wrong. 

Similarly, if you immerse yourself too deeply in metaethical nitpicking, it’s easy to lose sight of the fact that despite all our differences and disagreements, ethics works. It may seem outright heretical to view ethics as an instrument, but if you do that, you have to conclude that it does a really good job of enabling people to live together as functional communities. It’s hardly a perfect system, and there will always be some unwanted things slipping through the cracks, but that doesn’t make the system useless, or meaningless, or nonexistent. Like many of the more abstract systems that human societies are built upon, it ultimately depends on enough people believing in it, but on the whole, we as a species seem to be pretty good at believing in such things. 

Another thing we’re good at is developing technology, and that’s what makes technology ethics – including AI ethics – so important in my view. We do, of course, have laws to regulate technology and we keep making new ones, but the process of legislation tends to lag behind the process of technological change, and the social change that comes with it. As a technology researcher I believe that technology is primarily a force for good, but we need a frontline defence against harmful excesses, something capable of pre-empting them rather than just reacting to them: a strong ethical tradition involving all developers and appliers. If I can do my modest part in cultivating such a tradition among future AI engineers, then the new course will be something to feel at least a little bit proud of.