Collaboration, schmollaboration

Whenever someone asks me what my research project is about, I usually open by saying we’re calling it collaborative knowledge discovery from data. That’s a nice, convenient way of putting it in a nutshell, but it immediately calls for some elaboration, especially on the meaning of the term “collaborative”. Technically, any activity that involves two or more people working together toward a common goal is collaborative, but this definition doesn’t get us very far, because in knowledge discovery you typically have at least someone who knows about the technology and someone who knows about the application domain. It’s not unheard of for one person to know about both, but still, I think it’s safe to say that collaboration is the rule rather than the exception here.

To narrow it down a bit, the kind of collaboration we’re talking about is remote and synchronous. In other words, the participants are not located in the same place, but they can all simultaneously edit whatever it is they’re collaborating on and see the effects of each other’s edits in real time. This implies that there must be some kind of online environment where the collaboration takes place; think something like Google Docs or MS Office Online, only for KDD artifacts such as datasets, algorithms and processing pipelines.

Even this is not a particularly novel idea in itself, as there are collaboration platforms already available where you can develop just these sorts of things. Therefore in KDD-CHASER we’re upping the ante even further by focusing specifically on collaborative knowledge discovery from personal data, driven by the data owner who cannot be assumed to have any particular technology or domain expertise. It’s a bit of a niche, which of course makes our eventual results somewhat less generalisable, but it also makes it considerably easier to spot opportunities for novel research contributions.

To me, the most interesting problems here are not necessarily associated with knowledge discovery as such but with the things that need to happen before knowledge discovery can take place. After all, from the data owner’s perspective the point of collaborating with experts is basically to have the actual discovery done by people who are better equipped for it in terms of skills and tools. This doesn’t mean, however, that the data owner’s role in the collaboration is limited to being a passive data source; on the contrary, it is the data owner’s needs that drive the entire process of collaborative KDD in the special case we’re considering.

The first problem that a data owner may encounter on the way to a successful collaboration is that they don’t even know anyone they could collaborate with, so the first thing the collaboration platform should do is provide a matchmaking service that brings together people who have data with people who have the right sort of expertise to help turn it into something more valuable. After the matchmaking follows the really interesting part: negotiation. What kind of knowledge is the data owner interested in? What is actually achievable, given the available data and the extent to which the data owner is willing to share it with the expert? What is the expert expecting to get in compensation for their efforts? The collaborators need to find the answers to such questions among themselves, and the collaboration platform should support them in this.

The bare minimum is to provide the collaborators with some kind of communication channel, but this is something that would be required anyway, and it’s hardly a research problem from a computing standpoint. However, there’s a lot more to negotiation than just talking, and I’m interested to see what I might do to help things along in this area. Privacy, for example, is traditionally close to my heart and something that I also want to address here, because one of the things to be determined through negotiation is how much of their data the data owner is prepared to trust their collaborators with, considering that the latter may be the KDD equivalent of someone they just matched with on Tinder.

It’s been pretty clear from the start that whatever we manage to accomplish in my current project, it’s not going to be a comprehensive solution to all the problems of collaborative KDD, even within the niche we’ve carved for ourselves. What we can realistically shoot for, though, is a model that shows us what the collaboration process looks like and gives us an understanding of where the major problems are. The software I’m building will basically be a collection of candidate solutions to a select few of these problems, and it will hopefully be something I can continue to build on when my MSCA fellowship is over.

Far side of the world

Things are getting quite busy again, as the project has come to a stage where I need to be producing some publications on early results while also doing implementation work to get more solid results, not to mention thinking seriously about where my next slice of funding is going to come from. Any one of these could consume all of my available time if I allowed it to, and it’s not always easy to motivate yourself to keep pushing when the potential returns are months away at best. What is all too easy, however, is to neglect things that are not strictly necessary – blogging, for example. Still, I’m determined to write at least one new post each month, even if only because it makes for a welcome respite from the more “serious” work.

One thing that can help a great deal in maintaining motivation is if you have something nice in the not-too-distant future to look forward to, and as it happens, I have quite a biggie: the paper I submitted in January got accepted to the IEEE Congress on Evolutionary Computation, which will be held in Wellington, New Zealand. It’s a bit of a strange event for me to attend; while I do find the field very interesting, my professional experience of it, not counting some courses I took years ago when I was a doctoral student in need of credits, is limited to having been a reviewer for CEC once. However, there is a special session there on the theme of “Ethics and Social Implications of Computational Intelligence”, and this is something I have done actual published work on. It’s also one of the themes I wanted to address in my current project, so that’s that box ticked I guess. Besides, visiting NZ has been on my bucket list for quite a while, so I could hardly pass up the opportunity.

So, a small fraction of my time this month has been spent on the very pleasant task of making travel plans. Wellington lies pretty much literally on the opposite side of the globe from Dublin, so even in this day and age travelling there is something of an operation. It’s not cheap, obviously, but that’s not really a problem, thanks to my rather generous MSCA fellowship budget. The main issue is time: the trip takes a minimum of 27 hours one way, and the “quick” option leaves you with precious little time to stretch your legs between flights. I didn’t exactly relish this idea, so I ended up choosing an itinerary that includes a 12-hour stopover in Sydney on the outbound journey. This should give me a chance to take a shower, reset my internal clock and yes, also go have a look at that funny-looking building where they do all the opera.

It would make little sense to go all that way just for a four-day conference, so after CEC I’m going to take some personal time and spend part of my summer holiday travelling around NZ (even though it will actually be winter there). I still want to spend a couple of weeks in Finland as well, so I have to be frugal with my leave days and efficient in how I use my limited time. Therefore I’m going to be mostly confined to the North Island, although I am planning to take a ferry across Cook Strait to Picton and back – the scenery of the Marlborough Sounds is supposed to be pretty epic. On the North Island I’m going to stop in Auckland and Rotorua before coming back to Wellington; between Auckland and Rotorua, the Hobbiton movie set is a must-see for a Tolkien reader and Lord of the Rings film fan such as myself.

As for the conference, I’m very much looking forward to the plenary talk by my countryman Prof. Risto Miikkulainen on “Creative AI through Evolutionary Computation”. The idea of machines being creative is philosophically challenging, which is part of why this talk interests me, but I’m also intrigued by the practical potential. The abstract mentions techy applications such as neural network architecture design, but personally, I’m particularly interested in artistic creativity – in fact, when I was doing those evolutionary computation courses at my alma mater, I toyed with the idea of a genetic algorithm that would serve as a songwriting aid by generating novel chord progressions. Apart from the plenaries, the conference programme is still TBA, but it’s always good to have a chance to meet and exchange views with people from different cultural and professional backgrounds, and since Wellington is apparently the undisputed craft beer capital of NZ, I’m expecting some very pleasant scholarly discussions over pints of the nation’s finest brews.

Getting fit, bit by bit

I’ve been making decent progress on my software, and while it’s no good yet for any kind of data analysis, it can already be used to do a number of things related to the management of datasets and collaborations. I may even unleash the current incarnation upon some unsuspecting human beings soon, but for now, I’m using myself as my first guinea pig, so I’ve started wearing one of the Fitbits I bought myself (or rather, for my project) for Christmas. From the perspective of my research, the reason for this is that I need to capture some sample data so I can see what it looks like when exported from the Fitbit cloud into a file, but I’m also personally interested in seeing firsthand what’s happened in fitness trackers since the last time I wore one, which was quite a few years ago and then also for research purposes.

Back then I wasn’t hugely impressed, but it seems that by now these gadgets have advanced enough in terms of both functionality and appearance that I would consider buying one of my own. My initial impression of the Fitbit was that it’s quite sleek but not very comfortable; no matter how I wore it, it always felt either too loose or too tight. However, it seems that I either found the sweet spot or simply grew accustomed to it because it doesn’t bother me that much anymore, although most of the time I am still aware that it’s there. I’m probably not wearing it exactly as recommended by the user manual, but I can’t be bothered to be finicky about it.

By tapping on the screen of the device I can scroll through my basic stats: steps, heart rate, distance, energy expenditure and active minutes. More information is available by launching the Fitbit app; this is where I see, for example, how much sleep the device thinks I’ve had. Here I could also log my weight and what I’ve eaten if I were so inclined. Setting up the device and the app so that they can talk to each other takes a bit of time, but after that the device syncs to the app without any problems, at least on Windows. However, for some reason the app refused to acknowledge that I’m wearing the Fitbit on my right wrist rather than my left; I had to change that setting on the website to make it stick. The website is also where I export my data, which is quick and straightforward to do, with a choice between CSV and Excel for the data format.
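Once the data is in a CSV file, getting it into a script is trivial. The sketch below uses Python’s standard csv module on a made-up two-day sample; the column names (“Date”, “Steps”, “Distance”) are hypothetical stand-ins, as the actual export schema may differ:

```python
# Minimal sketch of reading a Fitbit-style CSV export.
# The sample data and column names are hypothetical; check the real export
# for the exact headers before adapting this.
import csv
import io

sample_export = """Date,Steps,Distance
2019-02-01,9500,6.8
2019-02-02,11200,8.1
"""

# In practice this would be open("fitbit_export.csv") instead of StringIO.
with io.StringIO(sample_export) as f:
    rows = list(csv.DictReader(f))

# A simple aggregate: total steps over the exported period.
total_steps = sum(int(row["Steps"]) for row in rows)
print(total_steps)  # 20700
```

Nothing fancy, but it is exactly this kind of frictionless path from device to file to script that makes the Fitbit workable for my purposes.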

The accuracy of the data is not really my number one concern, since I’m interested in the process of collaborative data analysis rather than the results of the analysis. However, on a personal note again, it is interesting to make observations on how the feedback I get from the device and the app relates to how I experience those aspects of my life that the feedback is about. For example, I can’t quite escape the impression that the Fitbit is flattering me, considering how consistently I’ve been getting my daily hour or more of activity even though in my own opinion I certainly don’t exercise every day. On the other hand, I do get a fair bit of walking done on a normal working day, including a brisk afternoon walk in the park next to the university campus whenever I can spare the time, so I guess it all adds up to something over the course of the day.

Based on my fairly brief experience, I can already see a few reasons for the rising popularity of wearables such as the Fitbit. Even if the accuracy of the data in terms of absolute values leaves something to be desired, presumably the device is at least reasonably consistent with itself over time, so if there are any rising or falling trends in your performance, they should be visible in the data. To make the product more friendly and fun to use, the developers have used a host of persuasion and gamification techniques; for example, there are various badges to be earned, with quirky names like “Penguin March”, and occasionally the device gets chatty with me, offering suggestions such as “take me for a walk?”. When I reach the daily magic number of ten thousand steps, the Fitbit vibrates a little silent congratulatory fanfare on my wrist.

In terms of what I need to carry out my project, the Fitbit will definitely serve: setting it up, syncing it and exporting the data all seem to work without any big hassle. As for whether I’m going to get one for myself, I would say that it’s now more likely than before that I will get some kind of wearable – not necessarily a Fitbit, but one that will give me the same kind of information anyway. Having this opportunity to try out a few different ones is an unexpected perk of the project that I now suddenly welcome, even though I wasn’t particularly interested in these devices when I was applying for the grant.

Dear Santa

Now that I’ve managed to clear away all of the stressful and/or boring stuff that was keeping me busy, it’s time to do something fun: Christmas shopping! After the break my project is going to be almost halfway through, and although it will be a good while yet before I’m ready to start conducting user tests, it’s time to start getting serious about recruiting participants. After all, the tests are supposed to be about analysing the participants’ data, so they can’t just walk in at their convenience – I need them to spend some time collecting data first, and to do that, they’ll need something to collect the data with.

Our initial idea was to recruit people who are already using a sleep monitor of some kind, and I’m sure we’ll be able to find at least a few of those, but naturally we’ll have a bigger pool of candidates if we have a few devices available to loan to people who don’t have one of their own. Also, it’s obviously useful for me to play with these devices a bit so I can get a better idea of what sort of data they generate and what’s the best way to export it if I want to use it for my research (which I do). Besides, I’m hardly going to spend my entire expense budget on travel even if I go out of my way to pick the most remote conferences I can find to submit papers to.

So I didn’t need to worry too much about what I can afford – one of the many great things about the MSCA fellowship – but that doesn’t mean that the choice of what to buy was straightforward, because the range of consumer products capable of tracking sleep is, frankly, a little bewildering. Some devices you wear on your body, some you place in your bed and some at the bedside, and although I soon decided to narrow down my list of options by focusing on wearables, that still left me with more than enough variety to cope with. Some of these gadgets you wear on your wrist, while others go on your finger like a ring, and the wrist-worn ones range from basic fitness bracelets to high-end smartwatches that will probably make you your protein smoothie and launder your sports gear for you if you know how to use them.

One thing that made the decision quite a lot easier for me is that the manufacturers of fitness bracelets now helpfully include all of their sleep tracking functionality in models that are near the low end of the price spectrum, and since I’m only interested in sleep data, there was no need to ponder if I should go with the inexpensive ones or invest in bigger guns. Also, I had a preference for products that don’t make you jump through hoops if you want to export your data in a CSV file or similar, so I looked at the documentation for each of my candidates and if I couldn’t find a straight answer on how to do that, I moved on. In the end I settled on three different ones: the Fitbit Alta HR, the Withings Steel, and the Oura Ring.

What I particularly like about this trio is that each of these models represents a distinct style of design: the Fitbit is a modern bracelet-style gadget, whereas the Withings looks more like a classic analog wrist watch, and the Oura is, well, a ring. I can thus, to a certain extent, cater for my study participants’ individual stylistic preferences. For example, I’m rather partial toward analog watches myself, so I’d imagine that for someone like me the design of the Withings would have a lot of appeal.

Today’s my last day at work before the Christmas break, and things are wrapping up (no pun intended) very nicely. The orders for the sleep trackers went out last week, this morning I submitted the last of my (rather badly overdue) ethics deliverables to the European Commission, and just minutes ago I came back from my last performance with the DCU Campus Choir for this year. The only thing that may impinge on my rest and relaxation over the next couple of weeks is that there’s a conference deadline coming up immediately after my vacation and I’m quite eager to submit, but I shouldn’t need to worry about that until after New Year. Happy holidays, everyone!

Busy times

With the end-of-year holidays approaching, things tend to get busy in a lot of places, not just in Santa’s workshop. My life in Ireland is no exception: there are five major work-related (or at least university-related) things that I’ve been trying my best to juggle through November, with varying success. Many of these will culminate over the next two weeks or so; after that, I’m hoping it will be comparatively smooth sailing till I leave for my well-deserved Christmas break in Finland. The blog I’m not even counting among the five, and I’ve been pretty much neglecting it, so this post is rather overdue – and also a welcome break from all of the more pressing stuff that I should really be working on right now.

One area where I’ve had my hands full is data protection, where it seems that whenever a document is finished, there’s always another one to be prepared and submitted for evaluation. Getting a green light from the Research Ethics Committee was a big step forward, but there’s now one more hurdle left to overcome in the form of a Data Protection Impact Assessment. I’m very much learning (and making up) all of this as I go along, and the learning curve has proved a rather more slippery climb than I expected, but I’m getting there. In fact, I’m apparently one of the first to go through this process around here, so I guess I’m not the only one trying to learn how it works. I hope this means that things will be easier for those who come after me.

Meanwhile, I’ve been preparing to give my very first lecture here at DCU – thankfully, just one guest lecture and not a whole course, but even that is quite enough to rack my nerves. It is a little strange that this should be the case, even after all the public speaking I’ve had to do during my fifteen-plus years in research, but the fact of the matter is that it does still feel like a bit of an ordeal every time. Of course it doesn’t help that I’m in a new environment now, and also I’ll be speaking to undergraduate students, which is rather different from giving a presentation at a conference to other researchers. Still, I’m not entirely unfamiliar with this type of audience, and I can recycle some of the lecture materials I created and used in Oulu, so I think I’m going to be all right.

Speaking of conferences, I’m serving on the programme committee of the International Conference on Health Informatics for the second year running and the manuscript reviewing period is currently ongoing, so that’s another thing that’s claimed a sizable chunk of my time recently. Somewhere among all of this I’m somehow managing to fit in a bit of actual research as well, although it’s nowhere near as much as I’d like, but I guess we’ve all been there. The software platform is taking shape towards a minimum viable product of sorts, and I have a couple of ideas for papers I want to write in the near future, so there’s a clear sense of moving forward despite all the other stuff going on.

So what’s the fifth thing, you ask? Well, I’ve rekindled my relationship with choral singing by joining the DCU Campus Choir, having not sung in a proper choir since school. Despite the 20-year gap (plus a bit), I haven’t had much trouble getting into it again: I can still read music, I can still hit the bass notes, and I don’t have all that much to occupy myself in the evenings and weekends so I have plenty of time to learn my parts (although I’m not sure how happy my neighbours are about it). The material we’re doing is nice and varied, and the level of ambition is certainly sufficient, as it seems like we’re constantly running out of rehearsal time before one performance or other. Our next concert will be Carols by Candlelight at DCU’s All Hallows campus on the evening of Monday the 10th of December, so anyone reading this who’s in town that day is very warmly welcome to listen!

Sleepytime

I recently obtained approval for my research from the DCU Research Ethics Committee, so I’m now officially good to go. This might seem like a rather late time to be getting the go-ahead, considering that I’ve been doing the research since February, but so far the work has been all about laying the foundations of the collaborative knowledge discovery software platform (for which I’m going to have to come up with a catchy name one of these days). This part of the project doesn’t involve any human participants or real-world personal data, so I’ve been able to proceed with it without having to concern myself with ethical issues.

As a matter of fact, if it were entirely up to me, the ethics application could have waited until even later, since it will be quite a while still before the platform is ready to be exposed to contact with reality. However, the Marie Curie fellowship came with T&Cs that call for ethics matters to be sorted out within a certain time frame, so that’s what I’ve had to roll with. I’d never actually had to put together an application like this before, so perhaps it was about time, and presumably it won’t hurt that some important decisions concerning what’s going to happen during the remainder of the project have now been made.

One of the big decisions I’d been putting off, but couldn’t anymore, was the nature of the scenario that I will use to demonstrate that the software platform is actually useful for the purpose for which it’s intended. This will be pretty much the last thing that happens in the project, and before that the software will have been tested in various other ways using, for example, open or synthetic data, but eventually it will be necessary to find some volunteers and have them try out the software so I can get some evidence of how workable it is in a reasonable approximation of a real-world situation. It’s hardly the most controversial study ever, but it’s still research on human subjects and there will be processing of personal data involved, so things like research ethics and the GDPR come into play here and need to be duly addressed.

What I particularly needed a more precise idea about was the data that would be processed using the software platform. In the project proposal I said that this would be lifelogging data, but that can mean quite a few different things, so I needed to narrow it down to something specific. Of course it wouldn’t make sense to develop a platform for analysing just one specific kind of data, so as far as the design and implementation of the software is concerned, I have to pretend that the data could be anything. However, the only way I can realistically expect to be able to carry out a meaningful user test where the users actually bring their own data is by controlling the type of data they can bring.

There were a few criteria guiding the choice of the type of data to focus on. For one thing, the data had to be something that I knew to be already available at some sources accessible to me, so that I could run some experiments on my own before inflicting the software on others. Another consideration was the availability of in-house expertise at the Insight Centre: I’ve never done any serious data mining myself, having always looked at things from more of a software engineering perspective, so it was important that there would be someone close by who knows about the sort of data I intend to process and can help me ensure that the platform I’m building has the right tools for the job.

When I discussed this issue with my supervisor, he suggested sleep data – I’m guessing not least because it’s a personal interest of his, but it does certainly satisfy the above two criteria. Furthermore, it also satisfies a third one, which is no less important: there are many different devices on the market that are capable of tracking your sleep, and these are popular enough that it shouldn’t be a hopeless task to find a decent number of users to participate in testing the software. The concept of lifelogging is often associated with wearable cameras such as the Microsoft SenseCam, but these are much more of a niche product, making photographic data a less attractive option – which it already was anyway, given the privacy implications of what might be captured in the photographs, so we kind of killed two birds with one stone there.

Capturing and analysing sleep data is something of a hot topic right now, so in terms of getting visibility for my research, I guess it won’t hurt to hop on the bandwagon, even though I’m not aiming to develop any new analysis techniques as such. Interestingly, the current technology leader in wearable sleep trackers hails from Oulu, Finland, the city where I lived and worked before joining Insight and moving to Dublin. There’s been quite a lot of media buzz around this gadget recently, from Prince Harry having been spotted wearing one on his Australian tour to Michael Dell announcing he’s decided to invest in the company that makes them. I haven’t personally contributed to the R&D behind the product in any way, but I feel a certain amount of hometown pride all the same – Nokia phones may have crashed and burned, but Oulu has bounced back and is probably a lot better off in the long run, not depending so heavily on a single employer anymore.

First blood

Time to look at the first results from my project! Well, not quite – the first results are in a literature survey I did immediately after starting the project and made into a journal manuscript. I’m currently waiting for the first round of reviews to come in, but in the meantime I’ve been busy developing my ideas about collaborative knowledge discovery into something a bit more concrete. In particular, I’ve been thinking about one of the potential obstacles to effective collaboration from the data owner’s perspective: privacy.

In the aftermath of the much publicised Facebook–Cambridge Analytica scandal, one would at least hope that people are becoming more wary about sharing their personal data online. On the other hand, with the General Data Protection Regulation in full effect since 25 May, a huge number of people are now covered by a piece of legislation that grants them an extensive set of personal data control rights and has the power to hurt even really big players (like Facebook) if they don’t respect those rights. Of course, it’s still up to the people to actually exercise their rights, which may or may not happen, but after all the GDPR news, emails and “we use cookies” notices on websites, they should be at least vaguely aware that they have them.

The increased awareness of threats to privacy online and the assertion of individuals, rather than corporations, as the owners of their personal data are welcome developments, and I like to think that what I’m trying to accomplish is well aligned with these themes. After all, the collaborative knowledge discovery platform I’m building is intended to empower individual data owners: to help them extract knowledge from their own data for their own benefit. This does not make the privacy issue a trivial one, however – in fact, I wouldn’t be surprised if it turned out that people are more uneasy about sharing a small portion of their data with an individual analyst focusing on their case specifically than about using an online service that grabs and mines all the data it can but does so in a completely impersonal manner. The platform will need to address this issue somehow lest it end up defeating its own purpose.

The angle from which I decided to approach the problem involves using a domain ontology and a semantic reasoner, which are technologies that I had been interested in for quite some time but hadn’t really done anything with. As I was doing the literature survey, I became increasingly convinced that an underlying ontology would be one of the key building blocks of the new platform, but it was also clear to me that I would need to start by modelling some individual aspect of collaboration as a proof of concept, so that I would fail fast if it came to that. If I started working top-down to produce a comprehensive representation of the entire domain, in the worst case I might take ages to discover nothing but that it wasn’t a very viable approach after all.

All this came together somewhat serendipitously when I found out that the 2nd International Workshop on Personal Analytics and Privacy (PAP 2018), held in conjunction with the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD 2018) in Dublin, had an open call for papers. The submission deadline was coming up in a month – enough time to put together some tentative results, though nothing hugely impressive – and coincided rather nicely with the date when I was planning to fly to Finland for my summer holidays. In about two weeks I had the first version of the manuscript ready, with another two left over for revisions.

The ontology I designed is based on the idea of a data owner and a data analyst (or possibly any number of either) using the collaborative knowledge discovery platform to negotiate the terms of their collaboration. Each uses the platform to specify requirements, but from opposing perspectives: the data analyst specifies analysis tasks, which require certain data items as input, while the data owner specifies privacy constraints, which prevent certain data items from being released to the data analyst. The data owners, data analysts, data items, analysis tasks and privacy constraints are all registered as individuals in the ontology and linked with one another such that a reasoner is able to use this information to detect conflicts, that is, situations where a data item is required for a data analysis task but not released by the data owner.
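At its core, the conflict detection described above is a matter of simple set reasoning. The sketch below shows the idea in plain Python with hypothetical names; in the actual platform the knowledge lives in an OWL ontology and the inference is done by a semantic reasoner, not by hand-written loops like this:

```python
# Sketch of the conflict-detection idea using plain Python sets.
# All names here are hypothetical illustrations; the real platform
# represents this knowledge in an OWL ontology queried by a reasoner.

def find_conflicts(tasks, released):
    """Return (task, data_item) pairs where an analysis task requires
    a data item that the data owner has not released to the analyst."""
    conflicts = []
    for task, required_items in tasks.items():
        for item in required_items:
            if item not in released:
                conflicts.append((task, item))
    return conflicts

# Analysis tasks specified by the data analyst, with their required inputs.
tasks = {
    "sleep_quality_trend": {"sleep_duration", "heart_rate"},
    "activity_summary": {"steps"},
}

# Data items the owner's privacy constraints allow to be released.
released = {"sleep_duration", "steps"}

print(find_conflicts(tasks, released))  # heart_rate is required but withheld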

To resolve such conflicts, the data owner and the data analyst may, for example, agree that the analyst receives a version of the dataset from which the most sensitive information has been removed. Removing information reduces the utility of the data, but does not necessarily make it completely useless; finding a balance where the data owner’s privacy preferences are satisfied while the data analyst still gets enough material to work with is the essence of the negotiation process. The ontology is meant to support this process by not just pointing out conflicts, but by suggesting possible resolutions based on recorded knowledge about the utility effects of different methods of transforming data to make it less sensitive.

For the PAP workshop paper, I only had time to design the logic of conflict detection in any detail, and there was also no time to test the ontology in a real-world scenario or even a plausible approximation of one. It therefore hardly seems unfair that although the paper was accepted for a short oral presentation at the workshop, it was not accepted for inclusion in the post-proceedings. Obviously it would have been nicer to get a proper publication out of it, but I decided to go ahead and give the presentation anyway – ECML-PKDD is the sort of conference I might have gone to even if I didn’t have anything to present, and since the venue is a 25-minute walk away from my house, the only cost was the registration fee, which I could easily afford from the rather generous allowance for sundry expenses that came with the MSCA fellowship.

Croke Park may seem like an unlikely place to have a conference, but it is in fact a conference centre as well as a stadium, and seems to work perfectly well as a venue for an academic event – meeting spaces, catering and all. Besides Croke Park, we had Mansion House for the welcome reception and Taylor’s Three Rock for the conference banquet, so I can’t complain about the locations. The regular programme was quite heavy on algorithms, which isn’t really my number one area of interest, but I did manage to catch some interesting application-oriented papers and software demos. What I enjoyed the most, however, were the keynote talks by Corinna Cortes, Misha Bilenko and Aris Gionis; there were two others that I’m sure I also would have found very interesting but was unable to attend, because there was a rather important deadline coming up and so I had to zig-zag between Croke Park and DCU to make sure I got everything finished on time.

My own talk went reasonably well, I felt, with an audience of about twenty and some useful discussion afterwards on how I might go about modelling and quantifying the concept of utility reduction. On the last day of the conference, which was today, I went to another workshop, the 3rd Workshop on Data Science for Social Good (SoGood 2018), with presentations on how machine learning and data mining techniques can be used to address societal issues such as homelessness and corruption. I especially enjoyed the last one, if enjoy is the right word – it dealt with efforts to combat human trafficking by means of data science, certainly a worthy cause if ever there was one, but also rife with difficulties, from the scarcity of good input data to the nigh-impossibility of devising an ethically justifiable experiment when there are literally lives at stake. Plenty of food for thought there, and a fine way to finish off this week of conference activities; on Monday it’s back to business as usual.

Getting started

Welcome to You Know Nothing, Socrates! The theme of this blog is knowledge, or more specifically – because that sure could use some narrowing down – the intersection of knowledge (in the philosophical sense) and computing. Knowledge, of course, is a notoriously elusive concept once you start trying to pin it down, which is why I’ve decided to name the blog after the famous Socratic paradox, apocryphal though it may be. And before you ask: yes, the title is also a Game of Thrones reference. Get over it.

To make matters worse, we haven’t been content to just assert that we as human beings have the ability to know various things and to derive new knowledge from evidence. Instead, ever since the invention of the modern digital computer, we’ve been very keen on the idea of replicating, or at least imitating, that ability in machines. This pursuit has given rise to fields of computer science research such as knowledge representation and knowledge discovery; this is the area where I’ve been working throughout my career as a researcher, and also the main subject area that I’ll be writing about.

A bit of context: I’m currently working as a Marie Curie Individual Fellow at the Insight Centre for Data Analytics in Dublin, Ireland. The project I’m working on, titled KDD-CHASER, deals with remote collaboration for the extraction of useful knowledge from personal data, such as one might collect using a wearable wellness device designed to generate meaningful metrics on the wearer’s physical activity and sleep. These products are quite popular and, presumably, useful, but for most users their utility is limited to whatever analyses the product has been programmed to give them. The research I’m doing aims for the creation of an online platform that owners of personal data-capturing devices could use to discover additional knowledge in their data with the help of expert collaborators.

As long as the KDD-CHASER project is running, which is until the end of January 2020, I will be using this blog as a communication channel (among others) to share information about its progress and results with the public. However, I’m also planning to post more general musings on topics that are related to, but not immediately connected with, the work I’m doing in the project. These, I hope, will be enough to keep the blog alive after the project is done and I move on to other things. Not that I’m expecting those other things to be radically different from the things I’m involved in at the moment, but hey, you never know.

There certainly isn’t a shortage of subject matter to draw on: besides the under-the-hood mechanics of computers capable of possessing and producing knowledge, there’s their philosophical dimension, which I’m also deeply interested in – another reason for my choice of blog title. From here it’s not much of a conceptual leap to the even more bewildering philosophical questions surrounding the notion of artificial intelligence, so I might take the occasional stab at those as well. I fully expect to come to the conclusion that I really know absolutely nothing, but whether I’ll be any the wiser for it remains to be seen.