
Taking IT ethics seriously

IT is a critical driver of business and social innovation, but there is a downside: more technology doesn't necessarily lead to a better economy or a better society.
By Andrea Di Maio · 5 Oct 2012

Over the past several years, information technology has been celebrated as a driver of business and social innovation, with an impact that is still not fully realised and that rivals previous world-changing inventions such as the steam engine or electricity. Annihilating distance, compressing time, boosting productivity, creating businesses and business models that were not even imaginable just one or two decades ago, supporting transparency and democracy, improving public safety, health care, education… and the list goes on.

Some have also looked at the downside of this, as I highlighted in a previous post. There are three main risk areas:

• Digital divide(s)

• Loss of human control and oversight

• Substitution of human labor leading to permanent unemployment

Let's take a look at each one.

Digital Divide(s)

Many talk about a single digital divide: between those who have access to technology and those who do not, whether because of demographics, income, lack of infrastructure, or a combination thereof.

However, there are many other types of digital divide: between those who have relatively slow access to the Internet and those who enjoy fast broadband services; between those who are able to embrace and fully leverage social networking and those who are either suspicious about sharing information with others or vulnerable when they do; between those who are trained and encouraged by their employers to use technology more creatively and effectively, and those who are bound to use technology that simply automates a pre-existing process.

Two examples may help here.

My mother, who suffers from a rare form of progressive aphasia, has a full-time caretaker, a wonderful Romanian lady who attends to all her needs and practically represents my mother's interface with the world. She uses a laptop and an ADSL connection to stay in touch with her many relatives in Romania, as well as to inform me about my mom's health while I am traveling. A few weeks ago I paid a visit to my mom and found the caretaker in a very agitated state: she had been crying and had not slept at all the night before. When I inquired about what had happened, she said that her laptop had been remotely blocked by the police, as she was allegedly using it to exchange illegal material and porn, and that she was about to be fined. I immediately realised this was a scam and reassured her by showing her, with a Google search, how many people had been affected by the same virus. Luckily she did not pay anything, nor did she provide any personal information, and I was able to clean the laptop.

Her disproportionate reaction made me think about the nature of a digital divide: she is perfectly able to use technology for certain purposes, but not being fully proficient with the language, she could not detect the inconsistencies in the fake police page, nor did she have any experience with viruses.

The second example concerns the recent trend of introducing tablets in K-12 schools, usually providing them to students (either free of charge or at a very low price) as well as to teachers. This is not dissimilar from prior initiatives with laptops and netbooks, but it probably has greater potential thanks to the different form factor and the touch-based interface. I have discussed this both in Italy (there is a plan in the high school where my wife teaches, as well as a larger plan targeting the southern part of the country) and in Denmark (where a colleague of mine has a 10-year-old son who is entering such a program).

Interestingly, in both cases, while nobody doubts the kids' ability to use tablets productively and creatively, there does not seem to be any plan for whether textbooks or specific educational apps will be provided, nor is there any evidence of how teachers will be trained. In this case, those teachers and, as a consequence, some of their students will be stuck on the wrong side of a digital divide.

Loss of human control and oversight

Science fiction movies have taught us the dangers of information technology. From HAL 9000 in 2001: A Space Odyssey to Skynet in The Terminator, from PreCrime in Minority Report to ARIIA in Eagle Eye, very powerful computers and computer networks try to take over human decision-making, often by misinterpreting a human directive or interpreting it too literally. Of course reality is different from science fiction, but we have had a few glimpses of this with the technology-induced velocity of the global financial crisis, as well as with some of the cyberwar scenarios that get discussed from time to time.

With more people and devices connected to the internet, and with growing machine-to-machine interaction, ensemble programming, cloud-based applications and big data crunching, oversight of IT applications becomes increasingly challenging.

Today, concerns about cloud computing revolve around ensuring that personal and other sensitive data stay in jurisdictions where they are duly protected, and that vendors operate secure and auditable infrastructure. But as connectedness becomes the norm, the scale of cloud-based infrastructure will grow, and zillions of enterprise and consumer-grade devices and applications will interoperate more and more autonomously: existing norms, policies and tools for accreditation and auditing will not suffice.

Job substitution and permanent unemployment

Every major technology innovation or revolution has displaced existing jobs and created new ones. Of course, these shifts have been painful for the people who lost their jobs, but their children and grandchildren ended up enjoying a better standard of living. In some cases the technologies have had tragic consequences, as when they were used for weapon systems, but they have ultimately led to growth and prosperity, improving transportation, safety and energy production.

The information revolution may be different. Technology makes it possible to automate and replace thousands of processes and roles, while the new digital opportunities for products and services it creates can hardly compensate for the velocity at which jobs are being displaced. Blue- and white-collar jobs, intermediaries, and a lot of “human middleware”, as my colleague Mark McDonald calls it, are just gone. The cashier at the supermarket, the bank teller, the ticketing staff at the station, the mailman, administration staff in enterprises, workers on the shop floor, people working in bookshops and in the printing industry, staff working in music shops, recording companies and video rentals… the list is endless.

Even in IT, presumably one of the newest industries, roles are being replaced by programmable machines: infrastructure and operations staff replaced by cloud technology, helpdesk staff replaced by self-service portals and peer-support social networks, professional application developers replaced by citizen developers. And further innovations on our doorstep will replace traffic wardens, bus and truck drivers, police officers, and so forth.

New jobs are indeed being created, such as information aggregators, technology service providers, digital marketers and more, but they require deeply different skills from those being displaced, and the pace of change is simply unsustainable for those left behind.

It could be argued that technology will make society wealthier as a whole, and that there would be ways to implement a new form of welfare to dampen the effect of job substitution. On the other hand, many countries suffer from very large debt, and their ability to sustain current welfare models, let alone move toward even more generous ones, is debatable.

It is time to take responsibility

The IT industry and IT professionals in user organisations can no longer live just under the auspices of progress and growth, assuming that more technology equates to a better economy or a better society. Policy makers who pursue digital agendas as a way to boost economic growth need to examine more carefully the consequences of nurturing unsustainable technology growth.

Companies and governments have become more sensitive to environmental sustainability. It is common to see advertisements in which an organisation will plant trees or contribute to a park to offset the carbon emissions caused by its processes. Why shouldn't the same apply to the societal impact caused by the use of technology? For every job displaced, at least one new job should be created. For any innovation that leaves some stakeholders behind, measures should be taken to bridge that gap as a matter of priority or, better, as a precondition for that innovation to take place.

Most companies believe they do this already through their corporate social responsibility programs. But there needs to be a more direct connection between those programs and the IT innovations they deploy internally and externally.

Governments should look more closely at how to regulate and supervise the use of technology where it can have an adverse impact on citizens and consumers. As more and more “smart” technology is deployed in devices, buildings, cars and objects, and influences how services react to the context in which an individual lives and works, the risks deriving from malfunctions, data aggregation, and behaviors determined by predictive or prescriptive data analysis will become more numerous, more severe and, at the same time, more difficult to detect and cope with.

This is way too complex to be effectively regulated without stifling progress and competition. Containing these risks requires all stakeholders to take responsibility and act responsibly, in cooperation with each other.

A key step is to establish a better concept of IT ethics or, more generally, technology ethics. This means agreeing on a set of principles that should apply to the deployment of any technology that can have a direct or indirect adverse impact on an individual, be that person a client, a partner or a citizen. This should apply both to downstream projects, where technology is being deployed to change processes and transform services, and to more upstream activities, where new technologies are being applied in pilot initiatives.

Projects should identify clear success metrics that relate not only to business improvements or customer satisfaction, but also to foreseeable negative impacts (such as job substitution, misuse of public or personal data, and context-driven user profiling), and state how they will compensate for these effects. Where are new jobs being created, and how will displaced employees acquire the required new skills? How will users be given ways to control the contextual information that concerns them, or at least be notified that information is being collected so that they can opt out? What oversight mechanisms are being put in place to prevent, identify or confine the “ripple effects” caused by information velocity?

Consumers and citizens must become “smarter”

Unless end users understand that they have a duty of care when they accept or expose themselves to technology innovation, there is no way that industry and governments can effectively manage the ensuing risks by themselves. This implies that end users should exercise the right (and the duty) to be adequately informed and educated about both benefits and risks, and should join forces through established groups (such as consumer associations) as well as virtually (i.e. by leveraging social networking).

This does not mean being or becoming Luddites, averse to any sort of technology innovation. It means that a better and more transparent balance must be sought between the positive outlook associated with the use of smarter technologies and a clear understanding of their implications in terms of personal privacy, freedom, socially acceptable behaviors, and so forth.

Waiting for industry to put generic warnings on their products' and services' labels, as they do with cigarettes, or for governments to start information and awareness campaigns is not enough: understanding of and sensitivity to the risks must be developed primarily from the bottom up, with consumers and citizens posing tougher questions to administrators who plan for a smart city or to providers proposing wonderful new context-aware services.

This is an edited version of a post by Gartner's Andrea Di Maio. Mr Di Maio is a vice president and distinguished analyst in Gartner Research, where he focuses on the public sector, with particular reference to e-government strategies, Web 2.0, the business value of IT and open-source software. You can read his other posts here.
