Still alive

I am indeed! Barely, but still. Once again blogging has been forced to take a back seat, but I thought I should do one more post before my vacation – which, happily, is right around the corner. No big deadlines before that, just some exam marking plus a bunch of writing that I can pick up from where I left off when I come back to work in August. Next week will be more like a half week because of the faculty’s staff summer party and the Midsummer weekend, and after that there’s just one week of work left before I’m free. Seems too good to be true! 

The AI ethics course is happily finished by now: lectures given, assignments evaluated, grades entered into Peppi. Again, it was a lot of work, but also rewarding and enjoyable. There are always at least a couple of students who really shine, turning in one excellent assignment submission after another, and those alone are enough to make it all worthwhile. However, a big part of the enjoyment is also that I can use the course as a test lab of sorts, changing things a bit and trying something new each time, seeing what works and what doesn’t. This time I made some changes to the assessment criteria and practices, which seemed to work, so I think I’ll continue in the same direction next year with the teaching development project that I need to do as part of my university pedagogy studies.

Of course, there are always new things happening in the world of AI, so the course contents also need some updating each year. This spring, for obvious reasons, the ethical implications of generative AI tools kept popping up under various course themes, and I also encouraged the students to try ChatGPT or some other such tool at least once to generate text for their assignment submissions. There were certain rules, of course: I told the students that they must document their use of AI, critically examine the AI outputs and take responsibility for everything they submit, including any factual errors or other flaws in AI-generated text. The results of the experiment were a bit of a mixed bag, but at any rate there were some lessons learned, for myself and hopefully for the students as well. If you can’t trust students to use AI ethically on an AI ethics course, then where can you?

The most recent big news related to AI ethics is that the European Parliament voted this week to adopt its position on the upcoming AI Act, so the regulation is moving forward and it may well be that on next year’s course we will be able to tell the students what it looks like in its final form. The parliament appears to have made some substantial changes to the bill, expanding the lists of prohibited and high-risk applications and specifying obligations for general-purpose AI systems while making exemptions for R&D so as not to stifle innovation. It will be extremely interesting to see what the impact of the act will be – on AI development and use, of course, but also on AI regulation elsewhere in the world, since this is very much a pioneering effort globally. 

After my summer holiday I’ll need to hit the ground running, because I’m once again giving some AI ethics lectures as part of a learning analytics summer school. A new thing this year is that I’m also preparing an ethics module for a new Master’s programme in sustainable autonomous systems, a collaboration between my university and the University of Vaasa. I don’t mind the new challenge at all – I took it upon myself more or less voluntarily, after all – but it does mean that my job title is increasingly at odds with what I actually do. Still, I’ve managed to fit in some research as well, and starting in the autumn I’ll even be participating in a proper research project for a change.

One of the highlights of the spring is that I got a paper accepted to Tethics 2023 – or rather, I supervised a student who got a paper accepted, which feels at least as rewarding as if I’d done the research myself, if not more so. In any case, it looks like I’ll be visiting Turku for an ethics conference for the third year running, and I really wouldn’t mind if this became a tradition! I’m even looking forward to the networking aspect, which I’m usually pretty bad at. Somehow ethics conferences are different, and Tethics especially – partially because it’s so small, I suppose, but perhaps also because these people are my tribe?

Musically, the spring term was very successful. After The Magic Flute we appeared in two concerts with Oulu Sinfonia – one of them sold out – performing music by the late great Ennio Morricone. Sadly, we then parted ways with our musical director of many years, which forced some planned events to be cancelled, postponed or scaled down, but everyone seems determined to keep the motor running, and overall I feel pretty good about the future of the choir. There will be some big things happening late this year and early the next, including (but not limited to) another run of the opera in January and February. Three out of eleven shows are sold out already, so if you missed it this year, get your ticket now!

The final curtain

Happy 2023, I guess? I know it’s a bit ridiculous to be wishing that when we’re more than halfway into February already, but it is my first blog post of the year – I checked. In my defence, the beginning of the year has been pretty much exactly as intense as I feared it would be, with me trying my best to balance between my commitments to the university and the theatre. The first week of January was the absolute worst: I returned to work immediately after New Year, and that week we had rehearsals every night from Monday to Thursday. I was still suffering from the problem of sleeping badly after them, so the inevitable result was me being utterly knackered by Friday, which fortunately was a bank holiday, giving me a chance to recover before two more rehearsals on Saturday.

The following week we had dress rehearsals from Monday to Wednesday, Thursday night off and then the first two performances on Friday and Saturday. In terms of effort, it was hardly any easier than the previous week, but the thrill of the opening night more than made up for it all. After the first show we celebrated with some bubbly and they even gave flowers to all of us chorus members; sadly, mine suffered rather heavy damage on the way home, which involved a pit stop in a crowded bar that I ended up leaving before I even had a chance to order myself a drink, but I was able to salvage the essential part of the poor abused plant and keep it looking nice for a good week.

After opening week, things got considerably less hectic, since there were no more rehearsals, just performances – first three per week, then down to two for the last couple of weeks. This weekend’s the final one, so around 4pm on Saturday the curtain will close on our production of The Magic Flute for the last time. All 15 performances sold out, and all the reviews I’ve seen have been very positive, so I guess it’s safe to say we’ve had a successful run! It’s been a wonderful experience for me personally as well, but I can’t deny that toward the end it has begun to feel more and more like work that I’m not getting paid for and that has made me put my other hobbies (not to mention my social life) largely on hold for quite a while. I’m very much looking forward to next Friday and my first commitment-free weekend of the year.

The big thing at work right now is evaluating applications to international M.Sc. degree programmes. This is the first time I’m involved in the process, and boy is it a trudge and a half. Sure, it’s interesting to get a sneak peek at some of the new students who may be joining us from around the world next autumn, but the work itself is first tedious, crawling through the mass of application documents to identify the most promising candidates, and then stress-inducing, doing interviews with each of them. I recently had a chat about this with a friend of mine who’s been in the IT consulting business for many years and interviewed his share of job applicants, and he said he finds interviews stressful because he can tell that the other person is nervous, so then he empathises with them and starts to feel their discomfort. Me being me, I get stressed about talking to new people even without that extra factor, so I’m going to be extremely glad once I’m done with my share of the interviews.

Something that’s turned out to be a blessing here is the Bookings app in Microsoft 365. This has been very helpful in scheduling the interviews: you just specify the times when you are available, make sure your calendar is up to date with your other appointments so you don’t get double bookings, and then send a link to the booking page to the people you want to invite and let them pick a time that works for them. Apparently in the past this has been done by tentatively selecting a date and time for each candidate, emailing it to them and asking them to email back with suggestions if the proposed time doesn’t suit them; I certainly don’t relish the idea of having that kind of administrative overhead on top of the actual evaluation work, even though it might have helped get the interviews spaced out more evenly and efficiently.

As usual, there’s no need to worry about running out of work to do in the spring either: the start of period IV is just three full weeks away, and with that comes the start of another run of the AI ethics course. I’ll count myself lucky if it doesn’t take up even more of my time than before; I’m the sole responsible teacher now, but on the other hand I will have a teaching assistant, and I also have some ideas for streamlining the evaluation of course assignments to make it less of a burden. Another thing to think about is my stance on ChatGPT and its ilk; certainly I’m going to discuss the technology and its implications in my lectures, but I’ll also need to decide what to do about the possibility of students using it to generate text for their assignment submissions. I’m leaning toward embracing it rather than discouraging or outright banning it – I don’t know how I’d enforce such a ban anyway – but if I go there, it’s not exactly trivial to come up with assignments that give everyone an equal opportunity to exploit the technology and demonstrate their learning to me.

“It belongs in a museum!”

After a three-week summer holiday, I returned to work last Monday. I say “returned to work”, but what I actually did was hop on a train and travel to Turku to attend the Ethicomp 2022 conference at the School of Economics. After two and a half days of hard conferencing, I departed for Oulu on Thursday afternoon, leaving only Friday as a “normal” workday before the weekend. I can imagine, and have in fact experienced, much worse ways to come back after a vacation! 

I felt more anxious than usual about my own presentation, scheduled for late afternoon on the first day. This was partially because I like to prepare and rehearse my presentations well in advance, but this time I hadn’t had time to finish my slides before my vacation, nor any inclination to work on them during it, so I more or less put my deck together on the train and then rehearsed the talk in my hotel room. On Tuesday I skipped the session immediately before mine to flick through my slides a few more times and make some last-minute tweaks, and I eventually emerged from my mental cocoon reasonably confident that I would get through the whole thing without stumbling.

I still wasn’t that confident about how the presentation would be received, because the paper I was presenting is probably the strangest one I’ve written to date. Long story short, one day I was preparing materials for the introductory lecture of the AI ethics course and explaining the concepts of moral agency (the status of having moral obligations) and patiency (the status of being the subject of moral concerns). Artificial things are traditionally excluded from both categories, but there is an ongoing debate in philosophy of AI about whether a sufficiently advanced AI system could qualify as a moral agent and/or patient. 

The idea that struck me was that if we let go of (organic) life as an analogy and view AI systems as cultural artifacts instead, we can sidestep the whole debate on whether AI can become sentient/conscious/whatever and make the moral patiency question a good deal more relevant to practical AI ethics in the here and now. After all, many people feel sad when an artifact of great cultural significance is destroyed (think Notre-Dame de Paris), and downright outraged if the destruction is wilful (think the Buddhas of Bamiyan), so it doesn’t seem too much of a stretch to argue that such artifacts have at least something closely related to moral patiency. Could an AI system also qualify as such an artifact? I filed the question in my brain under “ideas to come back to at an opportune moment”. 

The moment came in January: I wasn’t terribly busy with anything else right after the holidays, Ethicomp had a call for papers open and I only needed to write a 1500-word extended abstract to pitch my idea. I did wonder if it might be a bit too outlandish, which in retrospect was silly of me, I suppose – philosophers love outlandish ideas! The reviews were in fact fairly enthusiastic, and in the end my presentation at the conference was also well received. I was able to have some fun with it even, which is not something I often manage with my conference talks, and I soon got over my nagging feeling of being an impostor, a lowly computer scientist who arrogantly thinks he’s qualified to talk philosophy. 

In retrospect, I also have to say I did manage to turn that extended abstract into a pretty well-written full paper! It’s not officially published yet, but it argues that 1) yes, AI systems can be artifacts of considerable cultural significance and therefore intrinsically worthy of preservation, 2) they constitute a category of artifact that cannot be subsumed under a broader category without losing essential information about their special nature, and 3) this special nature should be taken into account when deciding how to preserve them. The argumentation is fairly informal, relying largely on intuition and analogy, but I’m quite proud of the way it’s built and presented nonetheless. Sure, the paper is only tangentially related to my daily work and is likely to be a total one-off, but even the one-offs can sometimes have a bigger impact than you’d expect – there’s another one of mine, also an ethics paper, that was published 15 years ago but is still getting citations.

Apart from surviving my own presentation, for me the highlight of the first day, and indeed the whole conference, was the keynote Scaling Responsible Innovation by Johnny Søraker. I’d met Johnny before on a couple of occasions, originally at the ECAP 2006 conference in Trondheim where he was one of the organisers, but hadn’t seen him for ages. Turns out he’s now working as an AI ethicist for Google, which the more cynically minded among us might remark sounds like a contradiction in terms, but be that as it may, he gave an insightful and entertaining talk on the challenges faced by SMEs wanting to do responsible innovation and how they can address those challenges. I particularly liked the idea of having an “interrupt”: someone who is kept informed of everything going on in the company and has been trained to spot potential ethics issues. The obvious advantage is that it doesn’t matter how convoluted or ad-hoc the innovation process is – as long as there is this one node through which everything passes at some point, risks can be identified at that point and brought to the attention of someone qualified to make decisions on how to mitigate them. 

Among the regular presentations there were several AI-related ones that I found very interesting. The one that resonated with me the most was Sara Blanco’s talk, in which she criticised what might be called a naive, “one-size-fits-all” conception of AI explainability and argued for a more nuanced one that acknowledges the need to account for differences in background knowledge and prior beliefs in the formulation of explanations. In light of my recent exposure to constructivist theories of learning, which likewise emphasise the effect of the learner’s existing knowledge structures on the process of integrating new knowledge into those structures, this made a great deal of sense to me. Outside the realm of AI, I very much enjoyed Reuben Kirkham’s talk on how the unusual relationship between academia and industry in computer science affects academic freedom, as well as Michael Kirkpatrick’s on the problematic nature of direct-to-consumer genomic testing services such as 23andMe, something I’ve brought up myself in my data ethics lectures.

The social programme was top notch too. On Wednesday evening we were first treated to a glass of sparkling and some live classical music at the Sibelius Museum, where we had about an hour to roam and explore the collections, which even included some instruments for visitors to try out – I couldn’t resist having a go on the Hammond organ, of course. After this we enjoyed a very tasty three-course dinner, with more live music, at restaurant Grädda next door. From the restaurant we proceeded to a pub for more drinks and chats, and when the pub closed, some of my fellow delegates went to find another one to have a nightcap in, but by that point I was quite ready for bed myself so I headed straight to my hotel. 

This was my first Ethicomp conference, but I certainly hope it wasn’t my last. I’ve always found philosophy conferences highly stimulating, as well as welcoming to people of diverse academic backgrounds, so despite my anxieties, me not being a “proper” philosopher has never been a real issue. After CEPE 2009 I more or less lost touch with the tech ethics community for a whole decade, but recently I’ve been sort of working my way back in: first there was the special session at IEEE CEC 2019, then Tethics 2021, and now this. Ethicomp in particular is apparently the one that everyone in the ethics of computing community wants to go to, and having now been there myself, I can see why. The next one will be in 2024, so I guess I have about a year and a half to come up with another weird-but-compelling idea? 

That’s a wrap, folks

A paper I wrote with Alan Smeaton, titled “Privacy-aware sharing and collaborative analysis of personal wellness data: Process model, domain ontology, software system and user trial”, is now published in PLOS ONE. In all likelihood, this will be the last scientific publication to come out of the results of my MSCA fellowship in Dublin, so I’m going to take the risk of sounding overly dramatic and say it kind of feels like the end of an era. It took a while to get the thing published, but that makes it feel all the better to finally be able to put a bow on that project and move on to other things.

So what’s next? More papers, of course – always more papers. As a matter of fact, the same week that I got the notification of acceptance for the PLOS ONE paper, I also got one for my submission to Ethicomp 2022. As seems to be the procedure in many ethics conferences, the paper was accepted based on an extended abstract and the full paper won’t be peer-reviewed, so as a research merit, this isn’t exactly in the same league as a refereed journal paper. However, since the conference is in Finland, I figured that the expenditure would be justifiable and decided to take this opportunity to pitch an idea I’d been toying with in my head for some time. 

To be quite honest, this was probably the only way I was ever going to write a paper on that idea, since what I have right now is just that: an idea, not the outcome of a serious research effort but simply something I thought might spark an interesting discussion. Since I only needed to write an extended abstract for review purposes, I could propose the idea without a big initial investment of time and effort, so it wouldn’t have been a huge loss if the reviewers had rejected it as altogether too silly, which I was half expecting to happen. However, the reviewers turned out to agree that the idea would be worth discussing, so Turku, here I come again! That’s the beauty of philosophy conferences in my experience – they’re genuinely a forum for discussion, and I’ve never felt excluded despite being more of a computer scientist/engineer myself, which I presume has a lot to do with the fact that philosophers love to get fresh perspectives on things.

The idea itself is basically an out-of-the-box take on the notion of moral patiency of AI systems, and I will talk about it in more detail in another post, probably after the conference. Meanwhile, a follow-up to our Tethics 2021 paper on teaching AI ethics is at the planning stage, and I have the idea for yet another AI ethics paper brewing in my head. Since I returned to Finland and especially since I started working on the AI ethics course, I’ve been trying to raise my profile in this area, and I have to say I’m fairly pleased at how this is turning out. Recently I had a preliminary discussion with my supervisor about applying for a Title of Docent with AI and data ethics as my field of specialisation, although I haven’t actually started preparing my application yet. 

The AI ethics course is now past the halfway point in terms of lecturing, and my own lectures are all done. I started this year’s course with my head full of new ideas from the university pedagogy course I recently completed, and some of them I’ve been able to put to good use, while others have not been so successful. I’ve been trying to encourage the students to participate more during lectures instead of just passively listening, and low-threshold activities such as quick polls seem to work pretty well, but my grand idea of devoting an entire teaching session to a formal debate met with a disappointing response. I don’t very much like the idea of forcing the students to do things they’re not motivated to do or don’t feel comfortable with, but I also don’t have a magic trick for enticing the students out of their comfort zone, so I’m not sure what to do here. I suppose I could settle for the small victories I did manage to win, but I still think that the students would really benefit from an exercise where they have to interact with one another and possibly adopt a position they don’t agree with. Oh well, I have another year now to come up with new ideas for them to shoot down. 

Meanwhile, in the choir things are getting fairly intense, with three rehearsal weekends over the past four weeks, two for the whole choir and one for just the tenor section – although to be quite honest, during the latter we sang a grand total of one of the songs included in the set of the spring concert. We also have performances coming up on May Day and at the university’s Doctoral Conferment Ceremonies on the 28th of May, so there’s a lot of material to go through over the next month and a half. Immediately after the March rehearsal weekend I tested positive in a COVID home test, so the dreaded bug finally caught up with me, something I’d been expecting for a while actually. It was a mild case, but still unpleasant enough that I wouldn’t fancy finding out what sort of experience it would be without the vaccine.

While on the subject of music, I can’t resist mentioning that I signed up to sing in the chorus in a production of The Magic Flute in January-February next year! That’s a first for me – I’ve been in the audience for plenty of operas, but never on the stage. I’m slightly dreading the amount of time and effort this will require, but in the end I just couldn’t pass up the opportunity. There is still the caveat that if there are more people eager to sing than there are open positions, we may have to audition, but an oversupply of tenors is not a problem that frequently occurs in the choral world. The rehearsal period won’t start until much later in the year, but I’m already a little bit excited at the prospect! 

I’m an ethicist, get me out of here

Summer seems to have impeccable timing this year: on Friday I came back from my vacation and immediately the temperature dropped by about ten degrees and it started raining. It certainly helped me feel less bad about spending the day indoors! Until then, July had been so consistently hot and sunny that it was almost enough to make you forget what a more typical Finnish summer looks like. Today in Oulu it’s +15°C and raining again, but the weather should get nicer toward the weekend, which is fortunate since I have some tickets booked for outdoor concerts.

“Officially”, I was still on vacation all week last week – not that it makes much of a difference, since for now I’m still working from home; the university is currently not explicitly recommending remote work, but the city of Oulu is, and anyway all of my closest colleagues are still on vacation, so there doesn’t seem to be much point in going to the campus since I wouldn’t find anyone there to socialise with. Besides, given the most recent news about the development of the COVID situation, it may be best to wait until after the university’s response team has convened to see if there’s any update to the instructions currently in effect. 

The reason why I worked on Friday – I could get used to a one-day work week, by the way – is a happy one: a paper of mine got accepted to the 13th International Conference on Knowledge Engineering and Ontology Development, and the camera-ready version of the manuscript was due on July 30. The version submitted for review was ten pages long and was accepted as a short paper, which technically meant that the final version should have been two pages shorter, but I used the loophole of paying extra page charges and ended up adding a page so I could meaningfully address some of the reviewers’ suggestions. 

Already at the very beginning of my vacation I had received the pleasant news that another paper had been accepted to the Conference on Technology Ethics, so that’s a double whammy for the month of July! In fact, not only was the manuscript accepted – it received all “strong accept” ratings from the reviewers, which is surely a career first for me. What’s particularly exciting is that while all of the details are still TBA, it looks like the conference is going to be organised as an actual physical event in the city of Turku, which means that I may get to go on my first conference trip since 2019! I would certainly appreciate the opportunity to visit Turku, since it’s a city I’m way too unfamiliar with, having been there only once for a couple of days for work. 

I’m giving my next lecture on AI ethics already on Thursday, with two more to follow later in August, as part of a 10 ECTS set of courses in learning analytics. There seems to be no escaping the topic for me anymore, but I don’t exactly mind; it’s actually kind of cool that I’ve managed to carve myself a cosy little niche as a local go-to guy for things related to computing and ethics. Really the only problem is that I don’t always get to spend as much time thinking about ethics as I’d like to, since there are always other things vying for my attention. Generally those other things represent where the bulk of my salary is coming from, so then I feel guilty about neglecting them – but at the same time I’m increasingly feeling that the ethics stuff may be more significant in the long run than my contributions to more “profitable” areas of research.

Last spring term, during the AI ethics course, I was unhappy about it eating up so much of my time, and indeed for a while I barely had time for anything else. It didn’t help matters that the course kept spilling into what should have been my free time, but if you look at the big picture, you could say with some justification that it’s not the ethics eating up time from everything else but the other way around. Now I just need to find someone who’s willing to pay me a full salary for philosophising all day long…

The new black

The new AI ethics course is now officially underway – actually, we’re close to the halfway mark already, with three out of eight lectures done. I’ve been chiefly responsible for all three, which has kept me thoroughly busy for pretty much all of March, and I’ve seldom felt as deserving of the upcoming long weekend as I do right now. Zoom lecturing, which I had my first taste of in the autumn term, still feels weird but I’m getting used to it. Typically none of the students will have their camera on, and it’s hopeless to try to gauge how an audience of black rectangles is receiving you unless they go to the bother of using reactions. Perhaps a year of online classes hasn’t been enough time for a new culture of interaction to emerge organically – or perhaps this is the new culture, but that sounds kind of bleak to me and I hope it’s not true. 

I’m sure I could have done some things better to foster such a culture myself; I’m fully aware that I’m not the most interactive sort of teacher. On the other hand, I’m firmly of the opinion that teaching applied ethics without having any ethical debates would be missing the point, so we’ve been trying to come up with various ways to get the students sharing and discussing their views. We’ve had some success with supplementary sessions where a short presentation expanding on a minor topic of the main lecture seeds a discussion on related ethical issues, and there has also been some action on the Zoom chat, especially during last week’s lecture on controversial AI applications. It helps that there are many real-world controversies available for use as case studies: people will often have a gut reaction to these, and by analysing that it’s possible to gain some insight into ethics concepts and principles that might otherwise remain a bit abstract. 

Although the course has been a lot of work, some of it in evenings and weekends, it’s also been quite enjoyable, not counting the talking-at-laptop-camera-hoping-someone-is-listening part. Ethics isn’t exactly my bread and butter, so preparing materials for the course has required me to learn a little bit about a lot of different things, which suits me perfectly – I’m a bit of a junkie for knowledge in general, and I’ve never been one to focus all my efforts on a single interest. My eagerness to dabble in everything has probably worked to my disadvantage in research, since we’re way past the days when one person could be an expert in every field of scholarship, but I think it serves me well here. On the other hand, the mental stimulation I’ve been getting from looking into all these diverse topics has also given me all sorts of ideas for new papers I could write. The most laborious part of the course for me is over now, with my co-lecturer plus some guests taking over for most of the remaining lectures, so I may even have time and energy to actually work on those papers after I’ve had a bit of R&R.

In my latest lecture I talked about the relationship between AI and data. Here I was very much on home ground, since pretty much my whole academic career has revolved around this theme, so it wasn’t hard to come up with a number of fruitful angles to look at it from. I ended up using the ever-popular “new oil” metaphor for data quite a lot; I actually kind of hate it, but it turns out that talking about the various ways in which data is or isn’t similar to oil makes a pretty nifty framing device for a lecture on data ethics. Data is like oil in that it’s a highly valuable resource in today’s economy, it powers a great many (figurative) engines, and it needs to be refined in order to be of any real value. On the other hand, data is not some naturally occurring resource that you pump or dig out of the ground: it’s created by people, and often it’s also about people and/or used to make decisions that affect people, which is where data ethics comes in. 

None of these are very original observations, I’m afraid, but perhaps it’s good to say them out loud all the same. If I do have a more novel contribution to add, it might be this: both oil and data have generated a lot of wealth, but over time we have come to regret using them so carelessly. With oil, we are working to reduce our dependence by adopting alternatives to petroleum-based energy sources and materials, but with data, I’m not sure that the idea of an alternative even makes sense, so it looks like we’re slated to keep using more and more of it. This makes it ever more important that we all learn to deal with it wisely – individuals, enterprises and governments alike. The economic value of data is well established by now, so maybe it’s time to pay more attention to other values?

Happy(?) anniversary

Two weeks ago I celebrated the one-year anniversary of my return to Finland. Well, I didn’t actually celebrate as such – it was a Tuesday like any other. Looking back to that day in 2020, I can’t help but find the contrast of expectation versus reality slightly amusing; I’d decided to travel home in style and booked a business-class ticket, so there I was, lounging in my comfy seat with a pleasant warmth spreading inside me from a nice hot breakfast, complimentary champagne, memories of Ireland and thoughts of all the good things ahead now that I was coming home for good. Little did I know! 

I don’t know how many people would agree with me on this, but considering how quickly this first full year back in Finland has zoomed by (no online meetings pun intended), I have to conclude that time does actually fly even under the present circumstances. Finland, of course, has had it a good deal easier than a lot of other countries, and the summer was even verging on normal, although I did have to cancel my planned trip to the UK and I’m not hugely optimistic about the chances of it happening this year either. The end of the year, I’ll admit, was a bit rough, but then, it tends to be wearying even in the best of times so I can’t blame it all on the pandemic. 

There was something satisfyingly symbolic about the way the year changed. I spent New Year’s Eve at home, accompanied by my pet rabbit, entertaining myself by watching a Jean-Michel Jarre concert that was virtual in more than one sense: besides being an online-only event, the video stream didn’t even show Jarre performing in a physical location but rather an avatar of him in a VR environment based on Notre-Dame de Paris. (Another Ireland memory there – one of the songs I rehearsed with the DCU Campus Choir was a short tribute piece written by an Irish composer after the April 2019 fire.) The weather, having been kind of iffy all December, took a wintry turn during the night and it began to snow heavily, as if to wipe the slate clean for the coming year. By noon the following day the world had turned so gloriously white that I felt compelled to go out on my bike and take some pictures. 

For some reason – well, for a number of reasons I suppose – I’ve found it quite hard to get any kind of writing done in the past couple of months. I wanted to do some work on my rejected manuscript during the Christmas break, but I struggled to find the motivation and finally got it submitted to another journal just a couple of weeks ago. Last week I finished my share of the work for the latest run of our Towards Data Mining course, so with those two major items ticked off my to-do list and the kick-off of the new AI ethics course still a month away, I felt justified to turn my attention to the blog, which I’ve been neglecting (again). 

Ah yes, the ethics course. I say “still a month away”, but in reality I’m already getting stressed about it. It’s coming along pretty well, but it’s still far from ready for launch, and I keep worrying that it’s going to fail spectacularly because of some rookie mistake. Feeling nervous about lecturing is one thing, but there’s a lot more to prepare than just an individual lecture or two. On top of that, it’s all being created more or less from scratch, and this whole online teaching thing is also still kind of new and in the process of taking shape, so there are dozens of critically important things that we might get all wrong or just completely forget to do – in my mind at least, if not necessarily in reality.

I am very much enjoying preparing my lectures, though. Perhaps the biggest problem with the subject matter is that as much as I love philosophy, it can be a bit of a rabbit hole: once you get started with questioning your assumptions, and the assumptions behind those assumptions, you’ll soon find yourself questioning everything you believe in, which isn’t a great place to be when you’re supposed to be confidently imparting knowledge to others. On an applied ethics course it wouldn’t make sense to spend a lot of time exploring ethical theories that are of little relevance to the sort of issues the students can expect to encounter in the real world – and I wouldn’t be qualified to teach those anyway – but it also wouldn’t seem right to just handwave all the theory away and discuss the issues on an ad-hoc basis. 

What’s needed here is a framework that lets us make meaningful normative statements and have a productive debate about them without taking forever to set up. As I was thinking about this recently, I was struck by the realisation that it’s actually pretty amazing that we are, in fact, able to have meaningful discussions about ethics, considering that there are some very fundamental things about it that we can’t agree on. Put two random people together and they may hold radically different views on the foundations of ethics, yet the odds are that each of them uses ethical concepts in a way that’s perfectly recognisable to the other. Theoretically, you could argue that ethical statements are completely subjective or even essentially meaningless, but it’s hard to sustain such arguments when you look at how well, in reality, we are able to understand each other on matters of right and wrong.

Similarly, if you immerse yourself too deeply in metaethical nitpicking, it’s easy to lose sight of the fact that despite all our differences and disagreements, ethics works. It may seem outright heretical to view ethics as an instrument, but if you do that, you have to conclude that it does a really good job of enabling people to live together as functional communities. It’s hardly a perfect system, and there will always be some unwanted things slipping through the cracks, but that doesn’t make the system useless, or meaningless, or nonexistent. Like many of the more abstract systems that human societies are built upon, it ultimately depends on enough people believing in it, but on the whole, we as a species seem to be pretty good at believing in such things. 

Another thing we’re good at is developing technology, and that’s what makes technology ethics – including AI ethics – so important in my view. We do, of course, have laws to regulate technology and we keep making new ones, but the process of legislation tends to lag behind the process of technological change, and the social change that comes with it. As a technology researcher I believe that technology is primarily a force for good, but we need a frontline defence against harmful excesses, something capable of pre-empting them rather than just reacting to them: a strong ethical tradition involving all developers and appliers. If I can do my modest part in cultivating such a tradition among future AI engineers, then the new course will be something to feel at least a little bit proud of.

Summing up the AI summit

The end of the year is approaching fast, with Christmas now barely two weeks away, but I managed to fit in one more virtual event to top off this year of virtual events: the Tortoise Global AI Summit. To be quite honest, I wasn’t actually planning to attend – didn’t even know it was happening – but a colleague messaged me the previous day, suggesting that it might be relevant to my interests and also that the top brass would appreciate some kind of executive summary for the benefit of the Faculty. Despite the short notice I had most of the day free from other engagements, and since the agenda did indeed look interesting, I decided to register and check it out – hope this blog post is close enough to what the Dean had in mind! 

I liked the format of the event, a series of panel discussions rather than a series of presentations. Even the opening keynote with Oxford’s Sir Nigel Shadbolt was organised as a one-on-one chat between Sir Nigel and Tortoise’s James Harding, which felt more natural in an online environment than the traditional “one person speaks, everyone else listens, Q&A afterward” style. Something that worked particularly well was the parallel discussion on the chat, to which anyone attending the event could contribute and from which the moderators would from time to time pick questions or comments to be discussed with the main speakers. Overall, I was left with the feeling that this is the way forward with virtual events: design the format around the strengths of online instead of trying to replicate the format of an offline event using tools that are not (yet) all that great for such a purpose. 

The keynote set the tone for the rest of the event, bringing up a number of themes that would be discussed further in the upcoming sessions: the hype around AI versus the reality, transparency of AI algorithms and AI-based decision making, AI education – fostering AI talent in potential future professionals and data/algorithm literacy in the general populace – and the need for data architectures designed to respect the ethical rights of data subjects. Unhealthy power concentrations and how to avoid them was a topic that resonated with the audience, and it shouldn’t be too hard to think of a few examples of such concentrations. The carbon footprint of running AI software was brought up on the chat. Perhaps my favourite bit of the session was Sir Nigel’s point that there is a need for institutional and regulatory innovations, which he illustrated by way of analogy by mentioning the limited company as a historical example of an institutional innovation. Such innovations are perhaps more easily overlooked than scientific and technological ones, but one can hardly deny that they, too, have changed the world and will continue to do so.

The world according to Tortoise

The second session was about the new edition of the Tortoise Global AI Index, which ranks 62 countries of the world on their strength in AI capacity, defined as comprising the three pillars of implementation, innovation and investment. These are further divided into the seven sub-pillars of talent, infrastructure, operating environment, research, development, government strategy and commercial, and the overall score of each country is based on a total of 143 individual indicators. The scores are normalised such that the top country gets an overall score of 100, and it’s no big surprise that said country is the United States, as it was last year when the index was launched. China and the United Kingdom similarly retain their places as no. 2 and no. 3, respectively. China has closed some of the gap with the US but is still quite far behind with a score of 62, while the UK, sitting at around 40, has lost some of its edge over the challengers. Canada, Israel, Germany, the Netherlands, South Korea, France and Singapore complete the top 10. 
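
Out of curiosity, I tried to reconstruct the basic arithmetic of such an index. Below is a minimal Python sketch using the three pillar names from the talk, but with made-up per-country scores and equal weights – to be clear, these numbers and the weighting are my own invention for illustration, not Tortoise’s actual methodology, which aggregates all 143 indicators:

```python
# A toy reconstruction of a normalised composite index, in the spirit of
# the Tortoise Global AI Index. The pillar names come from the talk; the
# weights and per-pillar scores below are invented for illustration and
# are NOT the actual methodology or data.

raw_scores = {
    "United States":  {"implementation": 0.95, "innovation": 0.97, "investment": 0.92},
    "China":          {"implementation": 0.60, "innovation": 0.58, "investment": 0.65},
    "United Kingdom": {"implementation": 0.45, "innovation": 0.38, "investment": 0.35},
}

# Assumed equal pillar weights; the real index weights its 143 indicators.
weights = {"implementation": 1 / 3, "innovation": 1 / 3, "investment": 1 / 3}

def composite(pillars):
    """Weighted sum of a country's pillar scores."""
    return sum(weights[p] * score for p, score in pillars.items())

composites = {country: composite(p) for country, p in raw_scores.items()}

# Normalise so that the top-ranked country scores exactly 100.
top = max(composites.values())
index = {country: 100 * c / top for country, c in composites.items()}

for country, score in sorted(index.items(), key=lambda kv: -kv[1]):
    print(f"{country:15} {score:5.1f}")
```

The interesting bit is the final normalisation step: every country’s composite is expressed relative to the leader’s, which is why the US sits at exactly 100 and everyone else somewhere below it.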

Finland is just out of the top 10 but rising, up three places from 14th to 11th. According to the index, Finland’s particular forte is government strategy, comprising indicators such as the existence of a national AI strategy signed by a senior member of government and the amount of dedicated spending aimed at building AI capacity. In this particular category Finland is ranked 5th in the world. Research (9th) and operating environment (11th) can also be counted among Finland’s strengths, and all of its other subrankings (talent – 16th, commercial – 19th, infrastructure – 21st, development – 22nd) are solidly above the median as well. Interestingly, the US, while being ranked 1st in four categories and in the top 10 for all but one, is only 44th on operating environment. The most heavily weighted indicator here is the level of data protection legislation, giving countries covered by the GDPR a bit of an edge; 7 of the top 10 in this category are indeed EU countries, but there is also, for instance, China in 6th place, so commitment to privacy is clearly not the whole story. 

There was some good discussion on the methodology of the AI index, such as the selection of indicators. For example, one could question the rather heavy bias toward LinkedIn as a source of indicators for AI talent. Another interesting point raised was that while we tend to consider academics mainly in terms of their affiliation, it might also be instructive to look at their nationality. Indeed, the hows and whys of the compilation of the index would easily make for a dedicated blog post, or even a series of posts, but I’ll leave it for others to produce a proper critique. For those who are interested, a methodology report is available online. 

From the Global AI Index the conversation transitioned smoothly into the next session on the geopolitics of AI, where one of the themes discussed was whether countries should be viewed as competing against one another in AI, or whether AI should rather be seen as an area of international collaboration for the benefit of citizens everywhere. Is there an AI race, like there once was a space race? Is mastery of AI a strategic consideration? Benedict Evans advocated the position that to talk about AI strategy is to adopt the wrong level of abstraction, and that AI (or rather machine learning) is just a particular way of creating software that in about ten years’ time will be like relational databases are today: so ubiquitous and mundane that we hardly pay any attention to it. This was in stark contrast to the view put forward at the beginning of the session that AI is a general-purpose technology akin to electricity, with comparable potential to revolutionise society. The session was largely dominated by this dialectic, but there was still time for other themes as well, such as the nature of AI clusters in a world where geographically limited technology clusters are becoming an outdated concept, and the role of so-called digital plumbing in providing the essential foundation for the success of today’s corporate AI power players.

Winners and losers

The next session, titled “AI’s ugly underbelly”, started by taking a look at an oft-forgotten part of the AI workforce, the people who label data so that it can be used to train machine learning models. It’s been estimated that data labelling accounts for 25% of the total project time in an ML project, but the labellers are, from the perspective of the company running the project, an anonymous mass employed through crowdsourcing platforms such as MTurk. In academic research the labellers are often found closer to home – the job is likely to be done by your students and/or yourself, and when crowdsourcing is used, people may well be willing to volunteer for the sake of contributing to science, such as in the case of the Zooniverse projects. In business it’s a different story, and there is some money to be made by labelling data for companies, but not a lot; it’s an unskilled job that obeys the logic of the gig economy, where the individual worker must buy their own equipment and has very little in the way of job security or career prospects. 

The subtitle of this session was “winners and losers of the workforce”, the winners of course being the highly skilled professionals who are in increasingly high demand and therefore increasingly highly paid. There was a good deal of discussion on the gender imbalance among such people, reflecting a similar imbalance in the distribution of the sort of hard (STEM) skills usually associated with tech jobs. In labelling the gap is apparently much narrower, in some countries even nonexistent. It was argued that relevant soft skills and potential AI talent are distributed considerably more evenly, and that companies trying to find people for AI-related roles may want to look beyond the traditional recruiting pathways for such roles. A minor point that I found thought-provoking was that recruiting is one of the application domains of AI, so the AI of today is involved in selecting the people who will build the AI of tomorrow – and we know, of course, that AI can be biased. One of the speakers brought up the analogy that training an AI is like training a dog in that the training may appear to be a success, but you cannot be sure of what it is that you’ve actually trained it to respond to. 

There was more talk about AI bias in the “AI you can trust” session, starting with what we mean by the term in the first place. We can all surely agree that AI should be fair, but can we agree on what kind of fairness we want – does it involve positive discrimination, for example? Bias in datasets is a relatively straightforward concept, but beyond that things get less tidy and more ambiguous. There is also the question of how we can trust that an AI is not biased, provided that we can agree on the definition; a suggested solution is to have algorithms audited by a third party, which could provide a way to strike a balance between the right of individuals to know what kind of decision-making processes they are being subjected to and the right of organisations to keep their algorithms confidential. An idea that I found particularly interesting, put forth by Carissa Véliz of the Institute for Ethics in AI, was that algorithms should be made to undergo a randomised controlled trial before they are allowed to make decisions that have a serious, potentially even ruinous, effect on people’s lives. 

Data protection was, of course, another big topic in this session. That personal data should be handled responsibly is again something we can all agree on, but there was a good deal of debate on what is the proper way to regulate companies to ensure that they are willing and able to shoulder that responsibility. Should they be told how to behave in a top-down manner, or is it better to adopt a bottom-up strategy and empower individuals to look after their own interests when it comes to privacy? Is self-regulation an option? The data subject rights guaranteed by the GDPR represent the bottom-up approach and are, in my opinion, a major step in the right direction, but it’s also a matter of having effective means to enforce those rights, and here, I feel, there is still a lot of work to be done. The GDPR, of course, only covers the countries of the EU and the EEA, and it was suggested that perhaps we need an international organisation for the harmonisation of data protection, a “UN of data” – a tall order for sure, but one worth considering.

Grand finale

The final session, titled “AI: the breakthroughs that will shape your life”, included several callbacks to themes discussed in previous sessions, such as the growth of the carbon footprint of AI as the computational cost of new breakthroughs continues to increase – doubling almost every 3 months according to an OpenAI statistic. The summit took place just days after the announcement of a great advance achieved by DeepMind’s AlphaFold AI in solving the protein folding problem in computational biochemistry, mentioned already at the beginning of the first session and discussed further here. While it was pointed out that the DeepMind solution is not necessarily the be-all and end-all it has been hailed as, it certainly serves to demonstrate that the technology is good for tackling serious scientific problems and not just for mastering board games. The subject of crowdsourcing came up again in this context, as the approach has been applied to the folding problem with some success in the form of Folding@home, where the home computers of volunteers are used to run distributed computations, as well as Foldit, a puzzle video game that essentially harnesses the volunteers’ brains to do the computations.

There was some debate on the place of humans in a society increasingly permeated by AI systems, particularly on where we want to draw the line on AI autonomy and whether new jobs created by AI will be enough to compensate for old ones replaced by AI. Somewhat ironically, data labeller is a job created by AI that may already be on its way to being made obsolete by advances in AI techniques that do not require large quantities of labelled data for training. One of the speakers, Connecterra founder Yasir Khokhar, talked about the role of AI in solving the problem of feeding the world, reminding me of Risto Miikkulainen’s keynote talk at CEC 2019, in which he presented agriculture as one of the application domains of creative AI through evolutionary computation. OpenAI’s GPT-3 was then brought up as another example of a recent breakthrough, leading to a discussion on how we tend to anthropomorphise our Siris and Alexas and to ascribe human thought processes to entities that merely exhibit some semblance of them. There was a callback to AI ethics here when someone asked whether we have the right to know when we are interacting with an AI – if we’re concerned about AI transparency, then arguably being aware that there is an AI is the most basic level of it. Of things that are still in the future, the impact of quantum computing on AI was discussed, as were the age-old themes of artificial general intelligence and rogue AI as existential risk, but in the time available it wasn’t feasible to come to any real conclusions. 

Inevitably, it got harder to stay alert and focused as the afternoon wore on, and I also missed the beginning of one session because I had to attend another (albeit very brief) meeting, but even so, I managed to gather a good amount of interesting ideas and information over the course of the day. I’m particularly happy that I got a lot of material on the social implications of AI that we should be able to use when developing our upcoming AI ethics course, since so far I haven’t been too clear about specific topics related to this aspect of AI that we could discuss in the lectures. And not a moment too soon, I might add – we’re due to start teaching that course in March, so it’s time to get cracking on the preparations!

Sweet freedom

The Midsummer celebrations are over, and the main holiday season is upon us. This is the first time since 2017 that I’m spending the whole summer in Finland, and I have to say it feels pretty sweet so far – they call Ireland the Emerald Isle, but we have plenty of shades of green of our own here, and the weather in June has been mostly gorgeous. Somewhat annoyingly, it looks like we’re due for the return of more traditional Finnish summer weather just as I’m about to start my vacation, but I’ll take it; I certainly prefer it to the sweaty +30°C days I had to endure toward the end of my summer holiday last year. Having access to my bike again has been a great joy, although I do kind of miss taking a commuter train to a random town or village and going exploring like I used to do in Dublin. I have been expanding my territory by trying out new routes and going further afield than before, but it doesn’t quite have the same sense of adventure to it. 

I was actually planning to travel to England this July; a band I became a big fan of during my tour of duty in Ireland was going to play a concert in Aylesbury near London and I bought myself a ticket pretty much as soon as they became available. Since I’ve never been to London, I thought I’d spend some time there, and I was also planning to visit Oxford as well as Bletchley Park in Milton Keynes, the place where Allied codebreakers (among them one Alan Turing) worked during WW2 – a sort of science and technology-themed pilgrimage, if you will. However, because of the pandemic the event has been postponed until an as yet unspecified date in 2021, and besides I don’t think going gallivanting around the UK would be very favourably looked upon anyway, so it’s just as well that I wasn’t an early bird with my travel arrangements. Better luck next year, I hope! 

In Finland the COVID situation seems to be pretty much under control for now, with only a couple dozen people receiving hospital care in the whole country; the figure peaked at just shy of 250 in early April. Life is steadily becoming less restricted, and the nationwide official recommendation to work remotely is being lifted as of the 1st of August. There’s no word yet on how this will affect university policy, but perhaps when July is over, we’ll be going back to the office. Strange thought – working from home really does feel like the new normal already! Of course the pandemic is far from over and there’s no telling when we’re going to be hit by another wave, so better keep that sourdough starter alive for lockdown part two.

The biggest thing I wanted to tick off my to-do list before switching into vacation mode was finishing and submitting the journal paper manuscript that will probably be the last thing I publish on the results of the KDD-CHASER project. With so much else going on, the paper took a while to get into shape for submission, but it’s now in the care of the good people of ACM Transactions on Social Computing, so there’s one thing I (presumably) won’t have to think about until autumn. The notification for my CIKM paper is due on July 17th, but the camera-ready submission deadline is a whole month after that, so if the paper does get accepted, I shouldn’t need to do anything about it while I’m on leave. 

Something that was only very recently set in motion but that I’m quite excited about is a new study course on AI ethics that I’ve started developing with a couple of colleagues, after one of them suggested it knowing that I’m interested in the subject and have some research background in it. I’ll admit I’m slightly worried about exactly how much extra work I’m taking upon myself, but I have a lot of ideas already, and it should make a nice addition to my academic CV. The main thing to keep in mind is that we teach engineering, not philosophy, so we want to keep the scope of the course relatively narrow and down-to-earth: we’ll leave debating AI rights to the more qualified and stick to issues that are relevant to today’s practitioners. After two weeks and three meetings we already have a pretty good tentative plan, and we’ll get back to the task of fleshing it out in August.

On the matter of the Academy of Finland September call I’m still undecided. Should I have another go at the Research Fellow grant? I’m not ruling it out yet, but I’m not going to simply rehash the same basic idea, that much seems clear by now. Last year my proposal in a nutshell was “do what I did in Dublin, scaled up”; that made it relatively easy to write, but in retrospect, other weaknesses aside, it wasn’t a very novel or ambitious plan in the reviewers’ eyes, nor even all that exciting from my own perspective. Of course it still makes sense to build on the results of my MSCA fellowship, but I’ll need to do better than follow it up with more of the same. Currently I have only some fairly vague ideas about what that would mean in terms of writing an actual proposal, but there’s still time to find that inspiration, and I’m pretty sure the upcoming time off is not going to hurt.

Job security

There’s an old joke about how you can distinguish between theoretical and practical philosophy: if your degree is in practical philosophy, there are practically no jobs available for you, whereas if it’s in theoretical philosophy, it’s not even theoretically possible for you to find a job. I was reminded of this the other day when I was having a lunchtime chat with a colleague who had recently learned of the existence of a vending machine that bakes and dispenses pizzas on request. From this the conversation moved to the broader theme of machines, and particularly artificial intelligence, taking over jobs that previously only humans could perform, such as those that involve designing artefacts.

A specific job that my colleague brought up was architect: how far are we from a situation where you can just tell an AI to design a building for a given purpose within given parameters, and a complete set of plans will come out? This example is interesting because in architecture – in some architecture, at any rate – engineering meets art: the outcome of the process is a synthesis of practical problem-solving and creative expression, of functionality and beauty. Algorithms are good at exploring solution spaces for quantifiable problems, but quantifying the qualities that a work of art is traditionally expected to exhibit is challenging, to say the least. Granted, it’s a bit of a cliché, but how exactly does one measure something as abstract as beauty or elegance?
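
To make the contrast concrete, here is a minimal sketch in Python of what such an exploration might look like. The design space and both scoring functions are entirely hypothetical inventions of mine: the functional objectives are trivial to encode, while the aesthetic one is reduced to a stub, which is precisely the open problem.

import random

# Hypothetical toy design space: a "building" as a pair of
# (floor_area, window_ratio). Nothing here models real architecture.

def functional_score(floor_area, window_ratio):
    # Quantifiable objectives are easy: reward usable space and
    # penalise window ratios far from an assumed ideal of 0.4.
    return floor_area * 0.01 - abs(window_ratio - 0.4)

def aesthetic_score(floor_area, window_ratio):
    # How do we score beauty? This empty stub is exactly the open question.
    return 0.0

def total_score(design):
    floor_area, window_ratio = design
    return (functional_score(floor_area, window_ratio)
            + aesthetic_score(floor_area, window_ratio))

# Plain random search handles the quantifiable part without any trouble.
candidates = ((random.uniform(50, 500), random.uniform(0.1, 0.9))
              for _ in range(10000))
print(max(candidates, key=total_score))

Any search strategy will happily optimise the measurable terms; it is the empty stub that keeps the architect employed.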

If we follow this train of thought to its logical conclusion, it would seem that the last jobs to go would be the ones driven entirely by self-expression: painter, sculptor, writer, composer, actor, singer, comedian… Athlete, too – we still want to see humans perform feats of strength, speed and skill even though a robot could easily outdo the best of us at many of them. In a sense, these might be the only jobs that can never be completely taken over by machines, because potentially every human individual has something totally unique to express (unless we eventually give up our individuality altogether and meld into some kind of collective superconsciousness). However, it’s debatable whether the concept of a job would retain any recognisable meaning in the kind of post-scarcity utopia this scenario seems to imply.

Coming back closer to the present day and my own research on collaborative knowledge discovery, I have actually given some (semi-)serious thought to the idea that one day, perhaps in the not-too-distant future, some of the partners in your collaboration may be AI agents instead of human experts. As AIs become capable of handling more and more complex tasks independently, the role of humans in the process shifts toward determining what tasks need doing in the first place. Applying AI in the future may therefore be less like engineering and more like management, requiring a rather different skill set from the one required today.
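
A crude way to picture that shift, with every function an invented placeholder rather than a description of any real system: the agents work out the how, while the human supplies the what and the whether.

# Hypothetical sketch of the "human as manager" division of labour
# in a collaborative knowledge discovery setting.

def human_set_goal():
    # Humans decide what is worth doing; a question of values.
    return "find correlates of dropout in the course activity logs"

def agent_plan(goal):
    # An AI agent decomposes the goal into concrete analysis tasks.
    return ["clean the logs", "engineer features",
            "fit models", "summarise findings"]

def agent_execute(task):
    # ...and carries each task out independently.
    return f"results of: {task}"

def human_review(results):
    # Humans accept or reject the outcome and own the consequences;
    # this check is a mere stand-in for actual human judgement.
    return bool(results)

goal = human_set_goal()
results = [agent_execute(task) for task in agent_plan(goal)]
print("accepted:", human_review(results))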

So what do managers do? For one thing, they take responsibility for decisions. Why is this relevant? The case of self-driving cars comes to mind. From a purely utilitarian perspective, autopilots should replace human drivers as soon as it can be shown beyond reasonable doubt that they would make roads safer, but as long as the possibility remains that an autopilot will make a bad call leading to damage or injury, there are other points of view to consider. Being on the road is always a risk, and it seems to me that our acceptance of that risk is at least partially based on an understanding of the behaviour of the other people we share the road with – a kind of informed consent, so to speak. If an increasing percentage of those other people are replaced by AIs whose decision-making processes may differ radically from those of human drivers, does there come a point where we no longer understand the nature of the risk well enough for our consent to be genuinely informed? Would people prefer a risk that’s statistically higher if they feel more confident about their ability to manage it?
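
The utilitarian side of that argument is easy to make concrete with a back-of-the-envelope calculation; the rates below are invented purely for the sake of the arithmetic.

# Assumed, illustrative rates of serious incidents per billion km.
human_rate = 5.0
autopilot_rate = 3.0

# Expected incident rate as the share of AI-driven traffic grows.
for ai_share in (0.0, 0.25, 0.5, 0.75, 1.0):
    mixed = ai_share * autopilot_rate + (1 - ai_share) * human_rate
    print(f"AI share {ai_share:.0%}: {mixed:.1f} incidents per billion km")

On this measure every step toward automation is an improvement, yet the calculation says nothing about whether road users still understand the risk they are consenting to.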

On the other side of the responsibility equation there is the question of who is in fact liable when something bad happens. When it’s all humans making the decisions, we have established processes for finding this out, but things get more complicated when there’s algorithmic decision-making involved, and I would assume that the more severe the damage, the less happy people are going to be to accept a conclusion that nobody’s liable because it was the algorithm’s fault and you can’t prosecute an algorithm. In response to these concerns, the concepts of algorithmic transparency and accountability have been introduced, elements of which can already be seen in enacted or proposed legislation such as the GDPR and the U.S. Algorithmic Accountability Act.

This might seem to be pointing toward a rather bleak future where the only “serious” professional role left for humans is taking the blame when something goes wrong, but I’m more hopeful than that. What else do managers do? They set goals, and I would argue that in a human society this is something that only humans can do, no matter how advanced the technology we have at our disposal for pursuing those goals, because it’s a matter of values, not means. Similarly, it’s ultimately determined by human values whether a given course of action, no matter how effective it would be in achieving a goal, is ethically permissible. In science, for example, we may eventually reach a point where an AI, given a research question, is capable of designing experiments, carrying them out and evaluating the results all by itself, but this still leaves vacancies for people whose job it is to decide what questions are worth asking and how far we are willing to go to get the answers.

Perhaps it’s the philosophers who will have the last laugh after all?