Summing up the AI summit

The end of the year is approaching fast, with Christmas now barely two weeks away, but I managed to fit in one more virtual event to top off this year of virtual events: the Tortoise Global AI Summit. To be quite honest, I wasn’t actually planning to attend – didn’t even know it was happening – but a colleague messaged me the previous day, suggesting that it might be relevant to my interests and also that the top brass would appreciate some kind of executive summary for the benefit of the Faculty. Despite the short notice I had most of the day free from other engagements, and since the agenda did indeed look interesting, I decided to register and check it out – hope this blog post is close enough to what the Dean had in mind! 

I liked the format of the event, a series of panel discussions rather than a series of presentations. Even the opening keynote with Oxford’s Sir Nigel Shadbolt was organised as a one-on-one chat between Sir Nigel and Tortoise’s James Harding, which felt more natural in an online environment than the traditional “one person speaks, everyone else listens, Q&A afterward” style. Something that worked particularly well was the parallel discussion on the chat, to which anyone attending the event could contribute and from which the moderators would from time to time pick questions or comments to be discussed with the main speakers. Overall, I was left with the feeling that this is the way forward with virtual events: design the format around the strengths of online instead of trying to replicate the format of an offline event using tools that are not (yet) all that great for such a purpose. 

The keynote set the tone for the rest of the event, bringing up a number of themes that would be discussed further in the upcoming sessions: the hype around AI versus the reality, transparency of AI algorithms and AI-based decision making, AI education – fostering AI talent in potential future professionals and data/algorithm literacy in the general populace – and the need for data architectures designed to respect the ethical rights of data subjects. Unhealthy power concentrations and how to avoid them was a topic that resonated with the audience, and it shouldn’t be too hard to think of a few examples of such concentrations. The carbon footprint of running AI software was brought up on the chat. Perhaps my favourite bit of the session was Sir Nigel’s point that there is a need for institutional and regulatory innovations, which he illustrated by way of analogy by mentioning the limited company as a historical example of an institutional innovation. Such innovations are perhaps more easily overlooked than scientific and technological ones, but one can hardly deny that they, too, have changed the world and will continue to do so.

The world according to Tortoise

The second session was about the new edition of the Tortoise Global AI Index, which ranks 62 countries of the world on their strength in AI capacity, defined as comprising the three pillars of implementation, innovation and investment. These are further divided into the seven sub-pillars of talent, infrastructure, operating environment, research, development, government strategy and commercial, and the overall score of each country is based on a total of 143 individual indicators. The scores are normalised such that the top country gets an overall score of 100, and it’s no big surprise that said country is the United States, as it was last year when the index was launched. China and the United Kingdom similarly retain their places as no. 2 and no. 3, respectively. China has closed some of the gap with the US but is still quite far behind with a score of 62, while the UK, sitting at around 40, has lost some of its edge over the challengers. Canada, Israel, Germany, the Netherlands, South Korea, France and Singapore complete the top 10. 
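
Out of curiosity, here's a back-of-the-envelope sketch of how a composite score like this might be computed. The sub-pillar names are the real ones from the index, but the weights and raw scores below are entirely made up, and the actual Tortoise methodology (143 indicators with their own normalisations and weightings) is considerably more involved – this is just to illustrate the aggregate-then-normalise-to-100 idea:

```python
# Toy sketch: aggregate hypothetical sub-pillar scores into one value
# per country, then normalise so that the leader scores exactly 100.
SUB_PILLARS = ["talent", "infrastructure", "operating environment",
               "research", "development", "government strategy", "commercial"]

# Invented raw scores, one value per sub-pillar per country.
raw_scores = {
    "Country A": dict(zip(SUB_PILLARS, [90, 85, 70, 95, 80, 60, 92])),
    "Country B": dict(zip(SUB_PILLARS, [55, 60, 75, 65, 50, 80, 58])),
    "Country C": dict(zip(SUB_PILLARS, [40, 35, 65, 45, 30, 85, 38])),
}

def composite(scores, weights=None):
    """Weighted mean of sub-pillar scores (equal weights by default)."""
    weights = weights or {p: 1.0 for p in SUB_PILLARS}
    total = sum(weights.values())
    return sum(scores[p] * weights[p] for p in SUB_PILLARS) / total

totals = {country: composite(s) for country, s in raw_scores.items()}
top = max(totals.values())

for country, t in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{country}: {100 * t / top:.1f}")
```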

Finland is just out of the top 10 but rising, up three places from 14th to 11th. According to the index, Finland’s particular forte is government strategy, comprising indicators such as the existence of a national AI strategy signed by a senior member of government and the amount of dedicated spending aimed at building AI capacity. In this particular category Finland is ranked 5th in the world. Research (9th) and operating environment (11th) can also be counted among Finland’s strengths, and all of its other subrankings (talent – 16th, commercial – 19th, infrastructure – 21st, development – 22nd) are solidly above the median as well. Interestingly, the US, while being ranked 1st in four categories and in the top 10 for all but one, is only 44th on operating environment. The most heavily weighted indicator here is the level of data protection legislation, giving countries covered by the GDPR a bit of an edge; 7 of the top 10 in this category are indeed EU countries, but there is also, for instance, China in 6th place, so commitment to privacy is clearly not the whole story. 

There was some good discussion on the methodology of the AI index, such as the selection of indicators. For example, one could question the rather heavy bias toward LinkedIn as a source of indicators for AI talent. Another interesting point raised was that while we tend to consider academics mainly in terms of their affiliation, it might also be instructive to look at their nationality. Indeed, the hows and whys of the compilation of the index would easily make for a dedicated blog post, or even a series of posts, but I’ll leave it for others to produce a proper critique. For those who are interested, a methodology report is available online. 

From the Global AI Index the conversation transitioned smoothly into the next session on the geopolitics of AI, where one of the themes discussed was whether countries should be viewed as competing against one another in AI, or whether AI should rather be seen as an area of international collaboration for the benefit of citizens everywhere. Is there an AI race, like there once was a space race? Is mastery of AI a strategic consideration? Benedict Evans advocated the position that to talk about AI strategy is to adopt the wrong level of abstraction, and that AI (or rather machine learning) is just a particular way of creating software that in about ten years’ time will be like relational databases are today: so ubiquitous and mundane that we hardly pay any attention to it. This was in stark contrast to the view put forward at the beginning of the session that AI is a general-purpose technology akin to electricity, with comparable potential to revolutionise society. The session was largely dominated by this dialectic, but there was still time for other themes as well, such as the nature of AI clusters in a world where geographically limited technology clusters are becoming an outdated concept, and the role of so-called digital plumbing in providing the essential foundation for the success of today’s corporate AI power players.

Winners and losers

The next session, titled “AI’s ugly underbelly”, started by taking a look at an oft-forgotten part of the AI workforce, the people who label data so that it can be used to train machine learning models. It’s been estimated that data labelling accounts for 25% of the total project time in an ML project, but the labellers are, from the perspective of the company running the project, an anonymous mass employed through crowdsourcing platforms such as MTurk. In academic research the labellers are often found closer to home – the job is likely to be done by your students and/or yourself, and when crowdsourcing is used, people may well be willing to volunteer for the sake of contributing to science, such as in the case of the Zooniverse projects. In business it’s a different story, and there is some money to be made by labelling data for companies, but not a lot; it’s an unskilled job that obeys the logic of the gig economy, where the individual worker must buy their own equipment and has very little in the way of job security or career prospects. 

The subtitle of this session was “winners and losers of the workforce”, the winners of course being the highly skilled professionals who are in increasingly high demand and therefore increasingly highly paid. There was a good deal of discussion on the gender imbalance among such people, reflecting a similar imbalance in the distribution of the sort of hard (STEM) skills usually associated with tech jobs. In labelling the gap is apparently much narrower, in some countries even nonexistent. It was argued that relevant soft skills and potential AI talent are distributed considerably more evenly, and that companies trying to find people for AI-related roles may want to look beyond the traditional recruiting pathways for such roles. A minor point that I found thought-provoking was that recruiting is one of the application domains of AI, so the AI of today is involved in selecting the people who will build the AI of tomorrow – and we know, of course, that AI can be biased. One of the speakers brought up the analogy that training an AI is like training a dog in that the training may appear to be a success, but you cannot be sure of what it is that you’ve actually trained it to respond to. 

There was more talk about AI bias in the “AI you can trust” session, starting with what we mean by the term in the first place. We can all surely agree that AI should be fair, but can we agree on what kind of fairness we want – does it involve positive discrimination, for example? Bias in datasets is a relatively straightforward concept, but beyond that things get less tidy and more ambiguous. There is also the question of how we can trust that an AI is not biased, provided that we can agree on the definition; a suggested solution is to have algorithms audited by a third party, which could provide a way to strike a balance between the right of individuals to know what kind of decision-making processes they are being subjected to and the right of organisations to keep their algorithms confidential. An idea that I found particularly interesting, put forth by Carissa Véliz of the Institute for Ethics in AI, was that algorithms should be made to undergo a randomised controlled trial before they are allowed to make decisions that have a serious, potentially even ruinous, effect on people’s lives. 
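
To make the ambiguity concrete, here's a minimal example – with entirely invented decision data – showing how one and the same set of decisions can satisfy one popular fairness criterion (demographic parity: equal approval rates across groups) while violating another (equal opportunity: equal approval rates among the qualified). This is exactly why "the AI should be fair" is not a single well-defined requirement:

```python
# Invented decision records: (group, qualified, approved).
decisions = [
    ("A", True, True), ("A", True, True), ("A", False, False), ("A", False, False),
    ("B", True, True), ("B", True, False), ("B", False, True), ("B", False, False),
]

def approval_rate(group):
    """Share of all applicants in the group who were approved."""
    rows = [d for d in decisions if d[0] == group]
    return sum(d[2] for d in rows) / len(rows)

def qualified_approval_rate(group):
    """Share of *qualified* applicants in the group who were approved."""
    rows = [d for d in decisions if d[0] == group and d[1]]
    return sum(d[2] for d in rows) / len(rows)

# Demographic parity holds: both groups are approved at the same rate...
print(approval_rate("A"), approval_rate("B"))                      # 0.5 0.5
# ...but equal opportunity is violated: qualified members of group B
# are approved only half as often as qualified members of group A.
print(qualified_approval_rate("A"), qualified_approval_rate("B"))  # 1.0 0.5
```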

Data protection was, of course, another big topic in this session. That personal data should be handled responsibly is again something we can all agree on, but there was a good deal of debate on the proper way to regulate companies to ensure that they are willing and able to shoulder that responsibility. Should they be told how to behave in a top-down manner, or is it better to adopt a bottom-up strategy and empower individuals to look after their own interests when it comes to privacy? Is self-regulation an option? The data subject rights guaranteed by the GDPR represent the bottom-up approach and are, in my opinion, a major step in the right direction, but it’s also a matter of having effective means to enforce those rights, and here, I feel, there is still a lot of work to be done. The GDPR, of course, only covers the countries of the EU and the EEA, and it was suggested that perhaps we need an international organisation for the harmonisation of data protection, a “UN of data” – a tall order for sure, but one worth considering.

Grand finale

The final session, titled “AI: the breakthroughs that will shape your life”, included several callbacks to themes discussed in previous sessions, such as the growth of the carbon footprint of AI as the computational cost of new breakthroughs continues to increase – doubling every 3.4 months according to an OpenAI analysis. The summit took place just days after the announcement of a great advance achieved by DeepMind’s AlphaFold AI in solving the protein folding problem in computational biochemistry, already mentioned at the beginning of the first session and discussed further here. While it was pointed out that the DeepMind solution is not necessarily the be-all and end-all it has been hailed as, it certainly serves to demonstrate that the technology is good for tackling serious scientific problems and not just for mastering board games. The subject of crowdsourcing came up again in this context, as the approach has been applied to the folding problem with some success in the form of Folding@home, where the home computers of volunteers are used to run distributed computations, as well as Foldit, a puzzle video game that essentially harnesses the volunteers’ brains to do the computations.

There was some debate on the place of humans in a society increasingly permeated by AI systems, particularly on where we want to draw the line on AI autonomy and whether new jobs created by AI will be enough to compensate for old ones replaced by AI. Somewhat ironically, data labeller is a job created by AI that may already be on its way to being made obsolete by advances in AI techniques that do not require large quantities of labelled data for training. One of the speakers, Connecterra founder Yasir Khokhar, talked about the role of AI in solving the problem of feeding the world, reminding me of Risto Miikkulainen’s keynote talk at CEC 2019, in which he presented agriculture as one of the application domains of creative AI through evolutionary computation. OpenAI’s GPT-3 was then brought up as another example of a recent breakthrough, leading to a discussion on how we tend to anthropomorphise our Siris and Alexas and to ascribe human thought processes to entities that merely exhibit some semblance of them. There was a callback to AI ethics here when someone asked whether we have the right to know when we are interacting with an AI – if we’re concerned about AI transparency, then arguably being aware that there is an AI is the most basic level of it. Of things that are still in the future, the impact of quantum computing on AI was discussed, as were the age-old themes of artificial general intelligence and rogue AI as existential risk, but in the time available it wasn’t feasible to come to any real conclusions. 

Inevitably, it got harder to stay alert and focused as the afternoon wore on, and I also missed the beginning of one session because I had to attend another (albeit very brief) meeting, but even so, I managed to gather a good amount of interesting ideas and information over the course of the day. I’m particularly happy that I got a lot of material on the social implications of AI that we should be able to use when developing our upcoming AI ethics course, since so far I haven’t had a very clear idea of which specific topics on this aspect of AI we could discuss in the lectures. And not a moment too soon, I might add – we’re due to start teaching that course in March, so it’s time to get cracking on the preparations!

Heart of darkness

The news came in yesterday that the university is extending its current policy of remote work and teaching, previously effective until the end of 2020, to the end of May 2021. Not a huge shock, frankly; it’s what my money would have been on, and I wrote as much yesterday when I was drafting this post, before the announcement came. It doesn’t really change any plans either, since we’ve been assuming from the get-go that our AI ethics course, due to be lectured in the second period of the spring term, will be taught remotely. Still, it’s strange to think that by the end of this latest extension, we’ll have been working from home for more than a year without interruption – and of course there’s no guarantee that things will be back to normal even then, although one may hope that at least some of us will have been vaccinated already. In the meantime, I’ll be getting my flu shot for the coming winter, courtesy of occupational healthcare.

Speaking of winter, it’s almost November, and as the days grow shorter, I’m reminded of the one redeeming feature of the dreary Irish winter in comparison with the Finnish one: more daylight. Last year and the year before, I “cheated” and only came to Finland for the end-of-year holidays, not long enough to really feel the effects of prolonged darkness – especially since I wasn’t working during the time I spent here and therefore could sleep for as long as I wished. Now, however, I’ve already noticed that it’s getting more laborious to get myself up and running in the morning, and while the turning of the clocks on Sunday brought some temporary relief by making mornings somewhat brighter, it’s not going to last long.

Fortunately, working from home has rendered the concept of office hours even less relevant than it was before the pandemic. I was free to choose my own hours before, but there was still a fairly strong preference to be at the office at more or less the same times as my colleagues, for the social aspect if not for anything else. Now that there’s basically nothing to be gained from being together at the “office” (i.e. at our computers in our respective homes), I’ve taken to sleeping according to what I presume is my natural rhythm, which I suppose cannot be a bad thing healthwise. There are still the meetings, of course, but I’ve mostly managed to avoid having them so early in the morning that I couldn’t trust myself to wake up for them without setting an alarm, although I’m not sure how that’s going to work out when we get to winter proper and there’s barely any daylight at all.

Before the all-staff email yesterday, I was already thinking that if we do go back to working on campus after New Year, I may well continue to take remote days more frequently than I used to, at least during the winter and especially when it’s very cold. As much as I love a good northern winter with lots of snow, I don’t particularly relish temperatures closer to minus twenty than minus ten, and when you combine that with pitch darkness in the morning, the thought of staying in bed is very tempting. So, once in a while, why not just do that, get up when you actually feel up to it and work from home, since that’s now officially sanctioned by university policy?

I participated in my very first virtual conference last week, the one-day Conference on Technology Ethics (formerly Seminar on Technology Ethics) organised by the Future Ethics research group at the University of Turku. I didn’t present anything, but the event was free of charge and I figured I might come away with some fresh ideas for the AI ethics course and perhaps even for my research. The conference did not disappoint – particularly the keynote talks by Maija-Riitta Ollila and Bernd Carsten Stahl were very much the sort of thing I was hoping for, and I think I’ll be referring back to them when I get to the work of creating my lecture materials. Everything went reasonably smoothly too, although there were some technical issues with screen sharing on Zoom. There was even a virtual conference dinner in the evening, but I didn’t participate so I don’t know how that worked out in practice. 

The next online event I’m looking forward to is a cultural one: the Virtual Irish Festival of Oulu! As the organisers put it, it’s the first, and optimistically also the last, of its kind: under normal circumstances the festival would have been at the beginning of October and very much non-virtual, taking place in various venues around town and offering music, dance, theatre, cinema, storytelling and workshops over a period of five days. I’m rather annoyed that there’s no proper live festival this year, since I missed the last two – this may seem like a silly thing to complain about, considering the reason I missed them is that I was in actual Ireland, but it’s not like they have trad festivals there all the time. Still, a virtual festival is surely better than no festival at all, and the programme looks very promising, so I’ll definitely be tuning in, and I think I’ll buy the €5 optional virtual ticket as well, to support the cause.

You’ve changed, man

I’ve been back at work after my summer vacation for about a month now, so I guess it’s about time I got back into blogging as well. Not that there’s a whole lot of news – I’m still doing the vast majority of my work in my living room and only visiting the campus sporadically. Frankly, I would have expected things to be closer to normal by now, but perhaps we first need to figure out what is normal anyway (hat tip to The Hitchhiker’s Guide to the Galaxy). The university’s playing it safe and recommending not just working remotely but also wearing a mask now, if you’re going to come to the campus and do anything other than sit in your office. My closest colleagues and I are doing our best to keep the social group tight: constant WhatsApp chatter, weekly lunches and virtual coffee mornings, the occasional face-to-face meeting. 

Naturally, working remotely means that we’ll also be teaching remotely, which affects me since we’re running our Towards Data Mining course in period 1. While I was in Ireland, my lecture – a hodgepodge of ethics, data security and data management topics – was handled by a colleague, but when I came back this year I took over from her again. The aforementioned colleague also recorded my part of the series of lecture videos used in lieu of live lectures when we ran the course in the spring term, so I was there basically just to mark exercise reports and exam answers. In the autumn term we were planning to lecture the course the traditional way, but now that that’s not an option, we’re going to present the lectures on Zoom instead.

I’ve said before that I’m not overly keen on lecturing, and I’m not at all sure if doing it online will make things better or worse. On the one hand, I suppose it should be easier to stay relaxed when I can do the lecture from the comfort of my home, but on the other hand, I think it may feel somewhat unnatural to be addressing an audience while essentially talking to myself, unable to gauge if the students are paying any attention to what I’m saying. Online meetings I’ve grown used to, but those are much more interactive and therefore not really the same thing. It doesn’t exactly help that I haven’t given that lecture in three years, so that would add to my nervousness even if nothing had changed in the meantime. 

The new AI ethics course has taken a step forward: a formal proposal for a pilot run next spring has been prepared and submitted to the Faculty. With the two courses plus a bunch of Master’s theses to supervise, I feel like my job has recently been more about teaching than about research. Not that I mind, really – it’s all meaningful work, and all part of why I’ve always held universities in very high esteem as places devoted to the creation, curation and distribution of the best of human knowledge. Obviously teaching and research require substantially different skill sets and therefore being good at one does not imply being good at the other, but that doesn’t mean it’s a good idea to treat these core functions of a university as if they are two completely separate domains rather than two sides of the same coin.

When I started this blog, I said its theme would be knowledge, and I seem to have circled back to that even though I wasn’t really planning to. I’m a firm believer in the intrinsic value of knowledge, and passing on the knowledge you have is an essential part of maximising that value, just as important as creating new knowledge. On a more personal and subjective level, I’ve always found great joy in learning or figuring out things I didn’t know before, and if I can help others feel that same joy, so much the better. I still doubt that I’d be very happy in an all-teaching role, but I’ve come to view teaching as a natural part of the job, something I can find satisfaction in and also something I can make a steady contribution in while research has its ups and downs. It’s not that many years ago that I saw teaching mainly as a nuisance to be avoided, so I guess it’s fair to say I’ve changed! 

Sweet freedom

The Midsummer celebrations are over, and the main holiday season is upon us. This is the first time since 2017 that I’m spending the whole summer in Finland, and I have to say it feels pretty sweet so far – they call Ireland the Emerald Isle, but we have plenty of shades of green of our own here, and the weather in June has been mostly gorgeous. Somewhat annoyingly, it looks like we’re due for the return of more traditional Finnish summer weather just as I’m about to start my vacation, but I’ll take it; I certainly prefer it to the sweaty +30°C days I had to endure toward the end of my summer holiday last year. Having access to my bike again has been a great joy, although I do kind of miss taking a commuter train to a random town or village and going exploring like I used to do in Dublin. I have been expanding my territory by trying out new routes and going further afield than before, but it doesn’t quite have the same sense of adventure to it. 

I was actually planning to travel to England this July; a band I became a big fan of during my tour of duty in Ireland was going to play a concert in Aylesbury near London and I bought myself a ticket pretty much as soon as they became available. Since I’ve never been to London, I thought I’d spend some time there, and I was also planning to visit Oxford as well as Bletchley Park in Milton Keynes, the place where Allied codebreakers (among them one Alan Turing) worked during WW2 – a sort of science and technology-themed pilgrimage, if you will. However, because of the pandemic the event has been postponed until an as yet unspecified date in 2021, and besides I don’t think going gallivanting around the UK would be very favourably looked upon anyway, so it’s just as well that I wasn’t an early bird with my travel arrangements. Better luck next year, I hope! 

In Finland the COVID situation seems to be pretty much under control for now, with only a couple dozen people receiving hospital care in the whole country; the figure peaked at just shy of 250 in early April. Life is steadily becoming less restricted, and the nationwide official recommendation to work remotely is being lifted as of the 1st of August. There’s no word yet on how this will affect university policy, but perhaps when July is over, we’ll be going back to the office. Strange thought – working from home really does feel like the new normal already! Of course the pandemic is far from over and there’s no telling when we’re going to be hit by another wave, so better keep that sourdough starter alive for lockdown part two.

The biggest thing I wanted to tick off my to-do list before switching into vacation mode was finishing and submitting the journal paper manuscript that will probably be the last thing I publish on the results of the KDD-CHASER project. With so much else going on, the paper took a while to get into shape for submission, but it’s now in the care of the good people of ACM Transactions on Social Computing, so there’s one thing I (presumably) won’t have to think about until autumn. The notification for my CIKM paper is due on July 17th, but the camera-ready submission deadline is a whole month after that, so if the paper does get accepted, I shouldn’t need to do anything about it while I’m on leave. 

Something that was only very recently set in motion but that I’m quite excited about is a new course on AI ethics that I’ve started developing with a couple of colleagues after one of them suggested it, knowing that I’m interested in the subject and have some research background in it. I’ll admit I’m slightly worried about exactly how much extra work I’m taking upon myself, but I have a lot of ideas already, and it should make a nice addition to my academic CV. The main thing to keep in mind is that we teach engineering, not philosophy, so we want to keep the scope of the course relatively narrow and down-to-earth: we’ll leave debating AI rights to the more qualified and stick to issues that are relevant to today’s practitioners. After two weeks and three meetings we have a pretty good tentative plan already and will get back to the task of fleshing it out in August.

On the matter of the Academy of Finland September call I’m still undecided. Should I have another go at the Research Fellow grant? I’m not ruling it out yet, but I’m not going to simply rehash the same basic idea, that much seems clear by now. Last year my proposal in a nutshell was “do what I did in Dublin, scaled up”; that made it relatively easy to write, but in retrospect, and other weaknesses aside, it wasn’t a very novel or ambitious plan from the reviewers’ perspective, nor even all that exciting from my own. Of course it still makes sense that I’d build on the results of my MSCA fellowship, but I’ll need to do better than follow it up with more of the same. Currently I only have some fairly vague ideas about what that would mean in terms of writing an actual proposal, but there’s still time to find that inspiration, and I’m pretty sure that the upcoming time off is not going to hurt.

Job security

There’s an old joke about how you can distinguish between theoretical and practical philosophy: if your degree is in practical philosophy, there are practically no jobs available for you, whereas if it’s in theoretical philosophy, it’s not even theoretically possible for you to find a job. I was reminded of this the other day when I was having a lunchtime chat with a colleague who had recently learned of the existence of a vending machine that bakes and dispenses pizzas on request. From this the conversation moved to the broader theme of machines, and particularly artificial intelligence, taking over jobs that previously only humans could perform, such as those that involve designing artefacts.

A specific job that my colleague brought up was architect: how far away are we from the situation where you can just tell an AI to design a building for a given purpose within given parameters and a complete set of plans will come out? This example is interesting, because in architecture – in some architecture at any rate – engineering meets art: the outcome of the process represents a synthesis of practical problem-solving and creative expression, functionality and beauty. Algorithms are good at exploring solution spaces for quantifiable problems, but quantifying the qualities that a work of art is traditionally expected to exhibit is challenging to say the least. Granted, it’s a bit of a cliché, but how exactly does one measure something as abstract as beauty or elegance?

If we follow this train of thought to its logical conclusion, then it would seem that the last jobs to go would be the ones driven entirely by self-expression: painter, sculptor, writer, composer, actor, singer, comedian… Athlete, too – we still want to see humans perform feats of strength, speed and skill even though a robot could easily outdo the best of us at many of them. In a sense, these might be the only jobs that can never be completely taken over by machines, because potentially every human individual has something totally unique to express (unless we eventually give up our individuality altogether and meld into some kind of collective superconsciousness). However, it’s debatable if the concept of a job would any longer have a recognisable meaning in the kind of post-scarcity utopia seemingly implied by this scenario.

Coming back closer to the present day and my own research on collaborative knowledge discovery, I have actually given some (semi-)serious thought to the idea that one day, perhaps in the not-too-far future, some of the partners in your collaboration may be AI agents instead of human experts. As AIs become capable of handling more and more complex tasks independently, the role of humans in the process shifts toward the determination of what tasks need doing in the first place. Applying AI in the future may therefore be less like engineering and more like management, requiring a skill set that’s rather different from the one required today.

So what do managers do? For one thing, they take responsibility for decisions. Why is this relevant? The case of self-driving cars comes to mind. From a purely utilitarian perspective, autopilots should replace human drivers as soon as it can be shown beyond reasonable doubt that they would make roads safer, but while the possibility remains that an autopilot will make a bad call leading to damage or injury, there are other points of view to consider. Being on the road is always a risk, and it seems to me that our acceptance of that risk is at least partially based on an understanding of the behaviour of the other people we share the road with – a kind of informed consent, so to speak. If an increasing percentage of those other people is replaced by AIs whose decision-making processes may differ radically from those of human drivers, does there come a point where we no longer understand the nature of the risk well enough for our consent to be genuinely informed? Would people prefer a risk that’s statistically higher if they feel more confident about their ability to manage it?

On the other side of the responsibility equation there is the question of who is in fact liable when something bad happens. When it’s all humans making the decisions, we have established processes for finding this out, but things get more complicated when there’s algorithmic decision-making involved, and I would assume that the more severe the damage, the less happy people are going to be to accept a conclusion that nobody’s liable because it was the algorithm’s fault and you can’t prosecute an algorithm. In response to these concerns, the concepts of algorithmic transparency and accountability have been introduced, elements of which can already be seen in enacted or proposed legislation such as the GDPR and the U.S. Algorithmic Accountability Act.

This might seem to be pointing toward a rather bleak future where the only “serious” professional role left for humans is taking the blame when something goes wrong, but I’m more hopeful than that. What else do managers do? They set goals, and I would argue that in a human society this is something that only humans can do, no matter how advanced the technology we have at our disposal for pursuing those goals, because it’s a matter of values, not means. Similarly, it’s ultimately determined by human values whether a given course of action, no matter how effective it would be in achieving a goal, is ethically permissible. In science, for example, we may eventually reach a point where an AI, given a research question, is capable of designing experiments, carrying them out and evaluating the results all by itself, but this still leaves vacancies for people whose job it is to decide what questions are worth asking and how far we are willing to go to get the answers.

Perhaps it’s the philosophers who will have the last laugh after all?

New Zealand story

I’m back in Dublin from my two-week expedition to New Zealand, the main reason for which was (ostensibly) to attend the IEEE Congress on Evolutionary Computation in Wellington. I’ve been back since Saturday actually, so by now the worst of the jet lag is behind me and it’s time to do a write-up of my doings and dealings down under. Besides NZ, I had the opportunity to pay a quick visit to Australia as well, since I had a stopover in Sydney that lasted from 6am to 6pm – plenty of time to catch a train from the airport to Circular Quay and snap some smug selfies with the famous opera house prominently in the background.

Having the long break between flights in Sydney proved a good decision, because even though the final hop from Sydney to Wellington was a relatively short one, by this point I had already flown seven and a half hours from Dublin to Dubai, followed by a two-hour stopover before the connecting flight to Sydney, which was just shy of fourteen hours. As a result of all this I wasn’t in much of a mood to do any more flying until I was well and truly rid of the stiffness of body and mind that comes from spending 20-plus hours seated inside a cramped aluminium tube in the sky, and a few hours of sightseeing on foot on what turned out to be a pleasantly warm and sunny day helped a great deal in achieving that. Another move I thanked myself for was having purchased access to the Qantas business lounge at Sydney airport, allowing me to enjoy such welcome luxuries as a comfy chair, a barista-made espresso and a nice shower before facing the world outside.

With the combined effect of the flight and transfer times and the 11-hour time difference, I arrived in Wellington near midnight on the evening of Sunday the 9th, having departed from Dublin on Friday evening. Monday the 10th was the first day of the conference, but it was all tutorials and workshops, none of which were particularly relevant to my own research, so I gave myself permission to sleep in and recharge before attempting anything resembling work. In fact the only “conference sessions” I attended on that first day were lunch and afternoon coffee; for the rest of the time I spent at the venue, I just wandered around Te Papa, exploring the national museum’s fascinating exhibitions on the nature, culture and history of New Zealand.

On the second day I began to feel the effects of jet lag for real, but I thought it was time to be a good soldier and check out some presentations. Although I don’t really do evolutionary computation myself, it has various applications that interest me professionally or personally, so it wasn’t too hard to find potentially interesting sessions in the programme. The highlight of the day for me was a session on games where there was, among others, a paper on evolving an AI to play a partially observable variant of Ms. Pac-Man; being a bit of a retrogaming geek, I found it quite heartwarming that this is an actual topic of serious academic research!

On the third day I forced myself to get up early enough to hear the plenary talk of Prof. Risto Miikkulainen, titled “Creative AI through Evolutionary Computation”. I was especially looking forward to this talk, and I was not disappointed: Prof. Miikkulainen built a good case for machine creativity as the next big step in AI and for the crucial role of evolutionary computation in it, with a variety of interesting supporting examples of successful applications. I am inclined to agree with the audience member who remarked that the conclusions of the talk were rather optimistic – it’s quite a leap from optimising website designs to optimising the governance of entire societies – but even so, a highly enjoyable presentation. Later that day there was a special session on music, art and creativity, which I also attended, but my enjoyment of it was hampered by my being in acute need of a nap at this point.

The fourth and final day of the conference I mostly spent preparing for my own presentation, which was in the special session on ethics and social implications of computational intelligence. This took place in the late afternoon, so the conference was almost over and attendance in the session was predictably unimpressive: I counted ten people, including myself and the session chair. Fortunately, numbers aren’t everything, and there was some good discussion with the audience after my talk, which dealt with wearable self-tracking devices and the problems that arise from the non-transparency of the information they generate and the limited ability of users to control their own data. I also talked about the problems and potential social impact of analysing self-tracking data collaboratively, tying the paper up with the work I’m doing in the KDD-CHASER project.

After the conference I proceeded to have a week’s vacation in NZ, which of course was the real reason I went to all the trouble of getting myself over there. While it’s not a huge country – somewhat smaller than my native Finland in terms of both area and population – I still had to make some tough choices when deciding what to see and do there, and I came to the conclusion that it was best to focus on what the North Island has to offer. I rode the Northern Explorer train service to Auckland and spent three nights there before working my way back to Wellington by bus, stopping along the way to spend two nights in Rotorua. From Wellington I did a day trip by ferry to Picton, a small town in the Marlborough Region (of Sauvignon blanc fame) of the South Island.

On Friday, two weeks after my departure from Dublin, I started my return journey, this time via Melbourne and with no time to go dilly-dallying outside the airport between flights. I boarded my flight in Wellington feeling a little sad to be leaving NZ so soon, but also satisfied that I’d made the most of my time there. I might have been able to fit in some additional activities if I’d travelled by air instead of overland, perhaps even another city, but I like to be able to view the scenery when I’m travelling, and there was no shortage of pretty sights along the train and bus routes. The conference also left a positive feeling: the programme was interesting, the catering was great and the choice of venue just brilliant. Above all, I’m happy to be done with all the flying!

Far side of the world

Things are getting quite busy again, as the project has come to a stage where I need to be producing some publications on early results while also doing implementation work to get more solid results, not to mention thinking seriously about where my next slice of funding is going to come from. Any one of these could consume all of my available time if I allowed it to, and it’s not always easy to motivate yourself to keep pushing when the potential returns are months away at best. What is all too easy, however, is to neglect things that are not strictly necessary – blogging, for example, but I’m determined to write at least one new post each month, even if it’s only because it makes for a welcome respite from the more “serious” work.

One thing that can help a great deal in maintaining motivation is if you have something nice in the not-too-distant future to look forward to, and as it happens, I have quite a biggie: the paper I submitted in January got accepted to the IEEE Congress on Evolutionary Computation, which will be held in Wellington, New Zealand. It’s a bit of a strange event for me to attend; while I do find the field very interesting, my professional experience of it, not counting some courses I took years ago when I was a doctoral student in need of credits, is limited to having been a reviewer for CEC once. However, there is a special session there on the theme of “Ethics and Social Implications of Computational Intelligence”, and this is something I have done actual published work on. It’s also one of the themes I wanted to address in my current project, so that’s that box ticked I guess. Besides, visiting NZ has been on my bucket list for quite a while, so I could hardly pass up the opportunity.

So, a small fraction of my time this month has been spent at the very pleasant task of making travel plans. Wellington lies pretty much literally on the opposite side of the globe from Dublin, so even in this day and age travelling there is something of an operation. It’s not cheap, obviously, but that’s not really a problem, thanks to my rather generous MSCA fellowship budget. The main issue is time: the trip takes a minimum of 27 hours one way, and the “quick” option leaves you with precious little time to stretch your legs between flights. I didn’t exactly relish this idea, so I ended up choosing an itinerary that includes a 12-hour stopover in Sydney on the outbound journey. This should give me a chance to take a shower, reset my internal clock and yes, also go have a look at that funny-looking building where they do all the opera.

It would make little sense to go all that way just for a four-day conference, so after CEC I’m going to take some personal time and spend part of my summer holiday travelling around NZ (even though it will actually be winter there). I still want to spend a couple of weeks in Finland as well, so I have to be frugal with my leave days and efficient in how I use my limited time. Therefore I’m going to be mostly confined to the North Island, although I am planning to take a ferry across Cook Strait to Picton and back – the scenery of the Marlborough Sounds is supposed to be pretty epic. On the North Island I’m going to stop in Auckland and Rotorua before coming back to Wellington; between Auckland and Rotorua, the Hobbiton movie set is a must-see for a Tolkien reader and Lord of the Rings film fan such as myself.

As for the conference, I’m very much looking forward to the plenary talk by my countryman Prof. Risto Miikkulainen on “Creative AI through Evolutionary Computation”. The idea of machines being creative is philosophically challenging, which is part of why this talk interests me, but I’m also intrigued by the practical potential. The abstract mentions techy applications such as neural network architecture design, but personally, I’m particularly interested in artistic creativity – in fact, when I was doing those evolutionary computation courses at my alma mater, I toyed with the idea of a genetic algorithm that would serve as a songwriting aid by generating novel chord progressions. Apart from the plenaries, the conference programme is still TBA, but it’s always good to have a chance to meet and exchange views with people from different cultural and professional backgrounds, and since Wellington is apparently the undisputed craft beer capital of NZ, I’m expecting some very pleasant scholarly discussions over pints of the nation’s finest brews.
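
For the record, here's roughly the sort of thing I was toying with back then – a minimal sketch of a genetic algorithm that evolves four-chord progressions in C major. The fitness function is a deliberately naive placeholder (reward harmonic variety and a cadence-friendly ending); a usable songwriting aid would need something far more musically informed, but the evolutionary skeleton would look much the same:

```python
import random

# Diatonic triads in C major.
CHORDS = ["C", "Dm", "Em", "F", "G", "Am", "Bdim"]

def random_progression():
    return [random.choice(CHORDS) for _ in range(4)]

def fitness(prog):
    score = len(set(prog))       # favour harmonic variety
    if prog[-1] in ("G", "C"):   # favour a cadence-friendly ending
        score += 2
    return score

def crossover(a, b):
    cut = random.randint(1, 3)   # single-point crossover
    return a[:cut] + b[cut:]

def mutate(prog, rate=0.2):
    return [random.choice(CHORDS) if random.random() < rate else c for c in prog]

# Evolve: keep the 10 fittest, fill the rest with mutated offspring.
population = [random_progression() for _ in range(30)]
for _ in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(20)
    ]

print(max(population, key=fitness))
```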

Dear Santa

Now that I’ve managed to clear away all of the stressful and/or boring stuff that was keeping me busy, time to do something fun: Christmas shopping! After the break my project is going to be almost halfway through, and although it will be a good while yet before I’m ready to start conducting user tests, it’s time to start getting serious about recruiting participants. After all, the tests are supposed to be about analysing the participants’ data, so they can’t just walk in at their convenience – I need them to spend some time collecting data first, and to do that, they’ll need something to collect the data with.

Our initial idea was to recruit people who are already using a sleep monitor of some kind, and I’m sure we’ll be able to find at least a few of those, but naturally we’ll have a bigger pool of candidates if we have a few devices available to loan to people who don’t have one of their own. Also, it’s obviously useful for me to play with these devices a bit so I can get a better idea of what sort of data they generate and what’s the best way to export it if I want to use it for my research (which I do). Besides, I’m hardly going to spend my entire expense budget on travel even if I go out of my way to pick the most remote conferences I can find to submit papers to.

So I didn’t need to worry too much about what I can afford – one of the many great things about the MSCA fellowship – but that doesn’t mean that the choice of what to buy was straightforward, because the range of consumer products capable of tracking sleep is, frankly, a little bewildering. Some devices you wear on your body, some you place in your bed and some at the bedside, and although I soon decided to narrow down my list of options by focusing on wearables, that still left me with more than enough variety to cope with. Some of these gadgets you wear on your wrist, while others go on your finger like a ring, and the wrist-worn ones range from basic fitness bracelets to high-end smartwatches that will probably make you your protein smoothie and launder your sports gear for you if you know how to use them.

One thing that made the decision quite a lot easier for me is that the manufacturers of fitness bracelets now helpfully include all of their sleep tracking functionality in models that are near the low end of the price spectrum, and since I’m only interested in sleep data, there was no need to ponder if I should go with the inexpensive ones or invest in bigger guns. Also, I had a preference for products that don’t make you jump through hoops if you want to export your data in a CSV file or similar, so I looked at the documentation for each of my candidates and if I couldn’t find a straight answer on how to do that, I moved on. In the end I settled on three different ones: the Fitbit Alta HR, the Withings Steel, and the Oura Ring.
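
To illustrate what the no-hoop-jumping requirement buys me, here's the kind of quick first look I'd want to be able to take at any of these devices' data. The file name and column names are purely hypothetical – each vendor's CSV export has its own schema – but once the data is in a plain CSV, a few lines of pandas go a long way:

```python
import pandas as pd

# Hypothetical export: one row per night, with invented column names.
df = pd.read_csv("sleep_export.csv", parse_dates=["sleep_start", "sleep_end"])

# Nightly sleep duration in hours.
df["duration_h"] = (df["sleep_end"] - df["sleep_start"]).dt.total_seconds() / 3600

# Quick summary statistics of duration and (assumed) deep sleep minutes.
print(df[["duration_h", "deep_sleep_min"]].describe())
print(f"mean nightly sleep: {df['duration_h'].mean():.1f} h")
```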

What I particularly like about this trio is that each of these models represents a distinct style of design: the Fitbit is a modern bracelet-style gadget, whereas the Withings looks more like a classic analog wrist watch, and the Oura is, well, a ring. I can thus, to a certain extent, cater for my study participants’ individual stylistic preferences. For example, I’m rather partial toward analog watches myself, so I’d imagine that for someone like me the design of the Withings would have a lot of appeal.

Today’s my last day at work before the Christmas break, and things are wrapping up (no pun intended) very nicely. The orders for the sleep trackers went out last week, this morning I submitted the last of my (rather badly overdue) ethics deliverables to the European Commission, and just minutes ago I came back from my last performance with the DCU Campus Choir for this year. The only thing that may impinge on my rest and relaxation over the next couple of weeks is that there’s a conference deadline coming up immediately after my vacation and I’m quite eager to submit, but I shouldn’t need to worry about that until after New Year. Happy holidays, everyone!

Sleepytime

I recently obtained approval for my research from the DCU Research Ethics Committee, so I’m now officially good to go. This might seem like a rather late time to be getting the go-ahead, considering that I’ve been doing the research since February, but so far the work has been all about laying the foundations of the collaborative knowledge discovery software platform (for which I’m going to have to come up with a catchy name one of these days). This part of the project doesn’t involve any human participants or real-world personal data, so I’ve been able to proceed with it without having to concern myself with ethical issues.

As a matter of fact, if it were entirely up to me, the ethics application could have waited until even later, since it will be quite a while still before the platform is ready to be exposed to contact with reality. However, the Marie Curie fellowship came with T&Cs that call for ethics matters to be sorted out within a certain time frame, so that’s what I’ve had to roll with. I’d never actually had to put together an application like this before, so perhaps it was about time, and presumably it won’t hurt that some important decisions concerning what’s going to happen during the remainder of the project have now been made.

One of the big decisions I’d been putting off, but couldn’t anymore, was the nature of the scenario that I will use to demonstrate that the software platform is actually useful for the purpose for which it’s intended. This will be pretty much the last thing that happens in the project, and before that the software will have been tested in various other ways using, for example, open or synthetic data, but eventually it will be necessary to find some volunteers and have them try out the software so I can get some evidence on the workability of the software in a reasonable approximation of a real-world situation. It’s hardly the most controversial study ever, but it’s still research on human subjects and there will be processing of personal data involved, so things like research ethics and the GDPR come into play here and need to be duly addressed.

What I particularly needed a more precise idea about was the data that would be processed using the software platform. In the project proposal I said that this would be lifelogging data, but that can mean quite a few different things, so I needed to narrow it down to something specific. Of course it wouldn’t make sense to develop a platform for analysing just one specific kind of data, so as far as the design and implementation of the software is concerned, I have to pretend that the data could be anything. However, the only way I can realistically expect to be able to carry out a meaningful user test where the users actually bring their own data is by controlling the type of data they can bring.

There were a few criteria guiding the choice of the type of data to focus on. For one thing, the data had to be something that I knew to be already available at some sources accessible to me, so that I could run some experiments on my own before inflicting the software on others. Another consideration was the availability of in-house expertise at the Insight Centre: I’ve never done any serious data mining myself, having always looked at things from more of a software engineering perspective, so it was important that there would be someone close by who knows about the sort of data I intend to process and can help me ensure that the platform I’m building has the right tools for the job.

When I discussed this issue with my supervisor, he suggested sleep data – I’m guessing not least because it’s a personal interest of his, but it does certainly satisfy the above two criteria. Furthermore, it also satisfies a third one, which is no less important: there are many different devices in the market that are capable of tracking your sleep, and these are popular enough that it shouldn’t be a hopeless task to find a decent number of users to participate in testing the software. The concept of lifelogging is often associated with wearable cameras such as the Microsoft SenseCam, but these are much more of a niche product, making photographic data a not very attractive option – which it would have been anyway, given the privacy implications of the various things that may be captured in said photographs, so we kind of killed two birds with one stone there.

Capturing and analysing sleep data is something of a hot topic right now, so in terms of getting visibility for my research, I guess it won’t hurt to hop on the bandwagon, even though I’m not aiming to develop any new analysis techniques as such. Interestingly, the current technology leader in wearable sleep trackers hails from Oulu, Finland, the city where I lived and worked before joining Insight and moving to Dublin. There’s been quite a lot of media buzz around this gadget recently, from Prince Harry having been spotted wearing one on his Australian tour to Michael Dell announcing he’s decided to invest in the company that makes them. I haven’t personally contributed to the R&D behind the product in any way, but I feel a certain amount of hometown pride all the same – Nokia phones may have crashed and burned, but Oulu has bounced back and is probably a lot better off in the long run, not depending so heavily on a single employer anymore.