#6

What are the main ethical issues in the development of AI?

AI is a science and a technology with applications in almost every aspect of everyday life. We use it when we swipe a credit card, search for something on the web, take a picture with our cameras, give voice commands to our phones and other devices, and interact with many apps and social media platforms. Companies of every size and business model, all over the world, are adopting AI solutions to optimize their operations, create new services and ways of working, and help their professionals make better-informed decisions.

So there is no doubt that AI is a powerful technology that has already had a positive impact on our ways of living and will continue to do so for years to come. At the same time, the transformations it brings to our personal and professional lives are significant and fast, and this raises questions and concerns about AI's impact on our society. AI systems need to be designed to be aware of, and to follow, important human values, so that the technology can help us make better, wiser decisions that remain aligned with those values.

Let us review some of the main AI ethics issues:

Data governance

AI needs a lot of data, so questions about data privacy, storage, sharing, and governance are central to this technology.

In some regions of the world, such as Europe, specific regulations establish fundamental rights for the “data subject”: the human being who releases personal data to an AI system, which can then use it to make decisions affecting their life (see, for example, the EU's General Data Protection Regulation, GDPR).

Fairness

From the huge amounts of data that surround every human activity, AI can derive insights and knowledge on which to base decisions about humans, or to recommend decisions to them. However, we need to make sure that the AI system understands and follows the human values relevant to the context in which such decisions are made. A very important human value is fairness: we don't want AI systems to make (or recommend) decisions that discriminate against or perpetuate harm across groups of people (for example, based on race, gender, class, or ability). How do we make sure that AI acts according to the most appropriate notion of fairness (or any other human value) in each scenario in which it is applied?

Software tools are important, but they are not enough: developers' education and training, team diversity, governance, and multi-stakeholder consultations are also crucial for effective detection and mitigation of AI bias.
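To make the idea of a bias check concrete, here is a minimal sketch (not any specific product's tooling) of one common fairness metric, the demographic parity gap: the difference in positive-outcome rates between two groups. The loan decisions and group labels below are hypothetical.

```python
# A minimal sketch of one bias check: the demographic parity gap,
# i.e. the difference in positive-outcome rates between two groups.
from typing import List, Tuple

def positive_rate(decisions: List[Tuple[str, int]], group: str) -> float:
    """Fraction of decisions for `group` that were positive (1)."""
    outcomes = [d for g, d in decisions if g == group]
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(decisions, group_a, group_b):
    """Absolute gap in positive rates; 0 means parity on this metric."""
    return abs(positive_rate(decisions, group_a) -
               positive_rate(decisions, group_b))

# Hypothetical loan decisions: (group label, 1 = approved / 0 = denied).
decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
             ("B", 0), ("B", 1), ("B", 0), ("B", 0)]

gap = demographic_parity_gap(decisions, "A", "B")
print(f"Demographic parity gap: {gap:.2f}")  # 0.75 - 0.25 = 0.50
```

Real bias audits compute many such metrics (equalized odds, disparate impact, and others), because no single number captures the most appropriate notion of fairness in every scenario.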

Explainability and trust

Often the most successful AI techniques, such as those based on machine and deep learning, are opaque: they do not let humans understand how they reach their conclusions from the input data. This makes it hard to build trust between humans and machines, so it is important to adequately address concerns related to transparency and explainability.

Without trust, a doctor will not follow the recommendations of a decision-support system, even one that could help them make better decisions for their patients.
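As an illustration, one widely used post-hoc explanation technique is permutation feature importance: shuffle one input feature at a time and measure how much the model's accuracy drops. The sketch below uses a synthetic dataset and scikit-learn; the model and features are stand-ins, not a recommendation for any specific clinical system.

```python
# A minimal sketch of post-hoc explainability via permutation feature
# importance: shuffling a feature the model relies on hurts accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an opaque model trained on tabular data.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record the drop in test accuracy;
# larger drops indicate features the model relies on more heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: mean importance = {importance:.3f}")
```

Feature importances do not make an opaque model transparent, but they give users and auditors a starting point for asking why the system recommends what it does.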

Accountability

Machine learning is based on statistics, so its outputs always carry some rate of error, however small. Errors can occur even when no programmer has made any mistake in developing the AI system. So, when an error occurs, who is responsible? From whom should we seek redress or compensation? These questions concern responsibility and accountability.

Profiling and manipulation

AI can interpret our actions and the data we share online to build a “profile” of us: a sort of abstract characterization of some of our traits, preferences, and values, used to personalize services (for example, to show us the posts or ads we will most likely appreciate). Without appropriate guardrails, this approach can distort the relationship between humans and online service providers: a service can be designed to make our preferences easier to characterize, and thus the personalization easier to compute. This raises issues of human agency: are we really in control of our actions, or is AI nudging us to the point of manipulating us?
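Here is a toy sketch of what profiling and personalization can look like in code (the click history and posts are hypothetical): the service tallies the topics a user engages with and ranks new content by that tally.

```python
# A toy, hypothetical sketch of profiling: build a "profile" from a
# user's click history (topic counts) and use it to rank new items so
# the ones the user will most likely appreciate come first.
from collections import Counter

clicks = ["sports", "politics", "sports", "tech", "sports"]  # hypothetical
profile = Counter(clicks)  # the abstract characterization of preferences

candidate_posts = [("match highlights", "sports"),
                   ("election recap", "politics"),
                   ("gadget review", "tech")]

# Personalization: show first the posts whose topic scores highest.
ranked = sorted(candidate_posts, key=lambda post: profile[post[1]],
                reverse=True)
print(ranked)
```

The manipulation concern arises when a service is optimized not just to rank content by such a profile, but to steer the user's behavior so that the profile becomes easier to predict.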

Impact on jobs and larger society

Since AI permeates the workplace, it obviously has an impact on jobs: it can perform some cognitive tasks that were usually done by humans. These impacts need to be better understood and addressed to make sure humans are not disadvantaged. As mentioned earlier, AI is very pervasive and its applicability is expanding rapidly, so any negative impact of this technology could be extremely detrimental to individuals and society. The pace at which AI is being applied within the workplace (and outside it) also raises concerns about whether people and institutions have enough time to understand the real consequences of its use and to avoid possible negative impacts.

Control and value alignment

Although AI has many applications, it is still very far from achieving forms of intelligence close to those of humans (or even animals). However, because this technology is mostly unknown to the general public, it raises concerns (usually unwarranted) about our ability to control it and to keep it aligned with our broad, and sometimes conflicting, societal values, should it ever achieve a higher form of intelligence.

Many organizations (companies, governments, professional societies, and multi-stakeholder initiatives) have been working for years to identify the relevant AI ethics issues, define principles and commitments, derive guidelines and best practices, and operationalize them across their divisions. IBM has been a leader in this space, with its tools, educational initiatives, internal governance structure (led by the IBM AI Ethics Board), and numerous partnerships with other companies, civil society organizations, and policy makers. Only a multi-disciplinary, multi-stakeholder approach can effectively drive the responsible development and use of AI in our society.