A combination of remarkable advances in technology – faster computer processors, denser and more capable storage, and higher-bandwidth network connections – has opened up a new world for big data: the application of smart algorithms to high-volume and/or high-velocity data.

Big data techniques enable us to infer patterns and preferences, and to identify opportunities and risks for our businesses, our projects and our other empire-building programmes. They allow us to bring “artificial-intelligence-lite” to the person-in-the-street, or perhaps, more correctly, bring the person-in-the-street into our large statistical analysis models for determining and delivering marketing and business opportunities.

The benefits are not necessarily one-sided towards the organisations commissioning the data analysis; there is potential for significant win-wins here. Let’s suppose that in exchange for knowing where you are, I guide you to your next destination through your smartphone map application and point out places of interest along the way. Those places pay me to promote their services to you. You don’t get lost, I get paid, and everybody is happy. Or maybe I let you track the delivery of your next takeaway in real time, using the same map application. Best of all, my data-enabled services or products cost relatively little to develop and deploy compared with the revenue they generate.

But it’s not all about business success and profitability. There have been some great wins for society from altruistic work with big data. Big data techniques applied to neonatal medical data have produced a breakthrough in treating infections in premature babies. By studying the medical observations of thousands of premature babies, researchers identified that the temperature of babies who died had spiked 24 hours earlier, returned to within the normal range, and then shot up again. By administering antibiotics at the point where the temperature first spikes, survival rates of these tiny babies can be increased significantly.

But what if these big data techniques fall into the wrong hands and instead of being used to drive new business lines or save lives, they are put to more nefarious purposes?

When I wake up each morning, I check international, national, local and family news through various news and social media applications. I leave a trail of digital footprints as I pass through sites, digitally kicking over information collection points on the way. Now if you were to purchase this information and combine it, say, with weather data or data from my supermarket loyalty card, you could indirectly remind me that I need to buy some waterproof hiking gear and long-life food supplies, by sending through some very specific adverts. I might find this a bit eerie, but it could also be very useful. Or you could run a series of psychoanalysis algorithms across the data, check out my friends, and work out how I am likely to vote in the next election.

You could hire a professional agnotologist to bombard me with a series of messages designed to redirect me, confuse me, disillusion me, or just generally discourage me from voting. That seems downright wrong from an ethical and moral perspective, but it could well turn out that my moral and ethical code on these matters differs from yours – and who is to say which of us is correct?

It’s the same data, used for different purposes, and with different value propositions for the person-in-the-street and the organisation commissioning the data analysis. Big data techniques are neutral, but how we apply these techniques and their underpinning technology determines whether we are a force for good or a force for bad. It is the human element that determines whether the techniques are applied in a way that benefits society.

And there’s another human element that we should consider here. How good and how smart are our smart algorithms? We all know that average behaviour can confuse us or distract us from focusing on the outliers we might also want to target. On average we all have fewer than two legs, but a marketing campaign aimed at the average would miss most of the population.

We also see issues with the linkage of information and the potential to jump to incorrect conclusions – take, for example, the predictive polling results for the most recent elections in the UK and the USA. Smart algorithms are only as smart as the humans who develop them, and if software developers could be paid to develop perfect software, we wouldn’t have prominent IT-enabled billion-dollar projects across the world failing or being abandoned. Again, it is the human element that determines our success or failure.

As a species, we often get very excited about new technology, about the game-changing opportunities presented by state-of-the-art technology platforms, and about innovative techniques for developing or delivering services. As businesses, governments and organisations wanting to make a difference in the world and for our customers, citizens and users, we should adopt and adapt these new technologies. However, let’s proceed with caution.

There is much excitement about the potential of the driverless car, and publicity suggests it will be much safer than a car driven directly by a human being. Eventually that is very likely to be the case. For now, though, we should remind ourselves that the software written to drive the car and the algorithms created to make decisions come from humans working from a list of likely events and to their own moral and ethical codes.

In summary, we should make the very most of the opportunities opened up by the “big data revolution”, empowered by faster computer processors, denser and more capable storage, and higher-bandwidth network connections. However, if we ignore the fact that our new technologies are created, programmed, deployed, run and maintained by human beings who are not advancing at anything like the rate of our technology, then we are doomed to disappointment and failure. At the very least, we need to apply standards for the governance of data used across our organisations, and we need to think through the moral and ethical implications of what we are doing. If we don’t, we face a Global Data Crisis, in which enough devices across our inter-connected digital world are working from untrusted, unreliable or unsuitable data to cause serious harm to our businesses and institutions.

But if we can deploy big data technologies with the protection of good governance and a sound ethical framework, we will start to find solutions to problems that were previously out of our reach, and we will deliver data-enabled products and services that delight and inform us, enhance our productivity and business sustainability, and make a lasting contribution to our enjoyment of running a business – and of life itself.

Help is available in the form of a Voluntary Code for Data Sharing, developed by Alison Holt during her time as an Academic Visitor at the Oxford Internet Institute, Oxford University in 2015. The code provides seven maxims for individuals or organisations to consider when collecting or exposing data, together with references for best practice and examples of organisations already applying each maxim.

♣♣♣

Notes:

  • This blog post is based on the author’s paper New Technology Meets Age-Old Problems, in Philosophy & Technology, December 2016, Volume 29, Issue 4, pp 393-395.
  • The post gives the views of its author, not the position of LSE Business Review or the London School of Economics.
  • Featured image credit: Big data, by Nathan Anderson, under a CC0 licence
  • Before commenting, please read our Comment Policy.

Alison Holt is the founder of Longitude 174 Limited, an information technology strategic planning and procurement business, and has recently taken a few months away to carry out research as an Academic Visitor at the Oxford Internet Institute, Oxford University. She is also a Fellow of both the Institute of IT Professionals in New Zealand and the British Computer Society, a Member of the Institute of Directors and a Chartered IT Professional. Alison is an expert in the governance of information technology and data and has worked in leadership roles for multiple organisations. She chaired the standards group in 2008 that published the first international standard for the Governance of Information Technology, and is now developing the 38505 series of international standards for the Governance of Data. Her first book (The Governance of IT) was published by the British Computer Society in September 2013, and she is now working on the sequel, The Governance of Data.