Philosophical ruminations vol. 1

Holidays are over and I’m back from the Finnish winter wonderland to Ireland, which seems to retain an appreciable fraction of her famous forty shades of green even in the middle of winter. No sign of the pre-Christmas frenzy anymore – I’ve been working at a fairly leisurely pace these past few weeks, enjoying the luxury of being able to take the time to have a good think about what I’m doing before I actually do it. The only deadline of immediate concern was the extended deadline of the conference paper I was preparing before the holidays; since I hadn’t dared to rely on there being an extension, I had all but finished the manuscript during the break, and there wasn’t much left to do on it after I got back on the clock.

Since things are not so hectic now, I thought this would be a good time for a post on a topic that’s not directly concerned with what’s going on in my project at the moment. When I started the blog, I intended one of its recurring themes to be the philosophical dimension of knowledge discovery, and there’s a concept related to this that’s been on my mind quite a lot lately. The concept is known as epistemic opacity; that’s epistemic as in epistemology – the philosophical study of knowledge – and opacity as in, well, the state of not being transparent (thanks, Captain Obvious).

I ran into this concept in a paper by Paul Humphreys titled “The philosophical novelty of computer simulation methods”, published in the philosophy journal Synthese in 2009. Humphreys puts forward the argument that there are certain aspects of computer simulations and computational science that make them philosophically novel as methods of scientific enquiry, and one of these aspects is their epistemic opacity, which he defines as follows:

[…] a process is epistemically opaque relative to a cognitive agent X at time t just in case X does not know at t all of the epistemically relevant elements of the process. A process is essentially epistemically opaque to X if and only if it is impossible, given the nature of X, for X to know all of the epistemically relevant elements of the process.

That’s a bit of a mouthful, but the gist of it – as far as I understand it – is that computer simulations are opaque in the sense that there is no way for a human observer to fully understand why a given simulation behaves the way it does. This makes it impossible to verify the outcome of the simulation by means that are independent of the simulation itself; a parallel may be drawn here with mathematics, where computer-generated proofs have been criticised as non-surveyable, meaning that they cannot be verified by a human mathematician without computational assistance – the 1976 computer-assisted proof of the four colour theorem being the classic example.

The philosophical challenge here arises from the fact that since we have no means to double-check what the computer is telling us, we are effectively outsourcing some of our thinking to the computer. To be fair, we have been doing this for quite some time now and it seems to have worked out all right for us, but in the history of science this is a relatively new development, so I think the epistemologists can be excused for still having some suspicions. I doubt that anyone is suggesting we should go back to relying entirely on our brains (it’s not like those are infallible either), but I find that in any activity, it’s sometimes instructive to take a step back and question the things you’re taking for granted.

The algorithms used in knowledge discovery from data can also be said to be epistemically opaque, in the sense that while they quite often yield a model that works, it’s a whole different matter to understand why it works and why it makes the mistakes that it does. And they do make mistakes, even the best of them; there’s no such thing as a model that’s 100% accurate 100% of the time, unless the problem it’s supposed to solve is a very trivial one. Of course, in many cases such accuracy is not necessary for a model to be useful in practice, but there is something about this that the epistemologist in me finds unsatisfying: it feels like we’re giving up on the endeavour to figure out the underlying causal relationships in the real world, and settling for the more pedestrian goal of guessing a reasonably accurate answer with adequate frequency, based on what is statistically likely to be correct given loads and loads of past examples.
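To make this a little more concrete, here’s a minimal sketch in Python using scikit-learn on synthetic data – it stands in for no particular product or algorithm. The model ends up respectably (but not perfectly) accurate, and when we ask why it got a specific case wrong, the most we can recover is how its two hundred constituent trees voted, which is hardly a surveyable answer.

```python
# A minimal sketch on synthetic data; no particular product or
# algorithm is implied. The classifier predicts quite well, yet the
# only "explanation" for any single mistake is the joint vote of
# 200 decision trees.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# A synthetic two-class problem with a little label noise (flip_y),
# so that no model can be 100% accurate on it.
X, y = make_classification(n_samples=2000, n_features=20, flip_y=0.05,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print(f"accuracy: {accuracy_score(y_test, pred):.3f}")  # good, but below 1.0

# Pick one misclassified test case and ask "why?": all we can recover
# is how the individual trees voted, not a compact human-readable reason.
i = (pred != y_test).nonzero()[0][0]
votes = [tree.predict(X_test[i:i + 1])[0] for tree in model.estimators_]
print(f"case {i}: true={y_test[i]}, predicted={pred[i]}, "
      f"trees voting for class 1: {sum(votes):.0f}/200")
```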

From a more practical point of view, the opacity of KDD algorithms and the uncertainty concerning the accuracy of their outputs may or may not be a problem, since some users are in a better position to deal with these issues than others. Traditionally, KDD has been a tool for experts who are well aware of its limitations and potential pitfalls, but it is now increasingly being packaged together with miniaturised sensors and other electronics to make a variety of consumer products, such as the wearable wellness devices I’m working with. The users of these products are seldom knowledge discovery experts, and even for those who are, there is little information available to help them judge whether or not to trust what the device is telling them. The net effect is to make the underlying algorithms even more opaque than they would normally be.

Now, I presume that by and large, people are aware that these gadgets are not magic and that a certain degree of scepticism concerning their outputs is therefore warranted, but it would be helpful if we could get some kind of indication of when it would be particularly good to be sceptical. I suspect this information often exists, but we don’t get to see it, essentially because it would clutter the display with things that are not strictly necessary. Moreover, the information is lost for good when the outputs are exported, which may be an issue if they are to be used, for instance, as research data, in which case it would be rather important to know how reliable they are. I’d be quite interested in seeing a product that successfully combines access to this sort of information with the usability virtues of today’s user-friendly wearables.
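As a closing toy illustration of what keeping that information around might look like: the sketch below continues from the earlier one (reusing its model, pred and X_test), and the CSV columns are invented for the sake of the example rather than taken from any real device. The point is simply that the classifier already computes a per-prediction confidence as a by-product, and carrying it through to the exported file costs exactly one extra column.

```python
# Continues the sketch above, reusing its model, pred and X_test.
# The model already estimates a confidence for every prediction as a
# by-product; keeping it next to the output costs one extra column.
# The CSV layout is invented for illustration, not any device's format.
import csv

proba = model.predict_proba(X_test)  # per-class probability estimates

with open("readings.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["sample", "prediction", "confidence"])
    for i, (label, p) in enumerate(zip(pred, proba)):
        # confidence = the probability the model assigned to its own answer
        writer.writerow([i, label, f"{p[label]:.3f}"])
```

Whether that number is then surfaced on the device’s display is a separate user-interface decision; the point here is only that it need not be thrown away at export time.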