Week 2 – AI and Ethics

Introduction

This module explores the vexed question of ethics in AI. The topic has gained considerable momentum and awareness since the release of ChatGPT, although it has been simmering in the background for some time, particularly around the use of algorithms on social media platforms that can engage in social engineering. The significant point is that ethics has risen to prominence and now requires serious consideration by any organisation intending to implement AI systems.

This module will give consideration to ethical values, including areas such as:

  • Legal Compliance
  • Privacy and security of data
  • Transparency around usage
  • Fairness, equity and bias
  • Accountability for operation of AI
  • Responsible usage

It is worth noting that because AI is advancing so rapidly, with new capabilities and applications appearing almost daily, new challenges and questions come with that development. For this reason, organisations need to be alert to any implications and externalities that may result from the deployment of AI. The lack of legal frameworks means that ethics, and their application, become even more important with this frontier technology.

The video below raises several ethical issues around AI that are worthy of consideration.

Definitions of ethics

Problems exist in defining ethics because the language involved is subjective. The Collins Dictionary [^1] defines ethics as “the philosophical study of the moral value of human conduct and of the rules and principles that ought to govern it”. The question then becomes: whose moral values?

The Ethics Centre [^2] offers a more lengthy definition:

Ethics is the process of questioning, discovering and defending our values, principles and purpose. It’s about finding out who we are and staying true to that in the face of temptations, challenges and uncertainty. It’s not always fun, and it’s hardly ever easy, but if we commit to it, we set ourselves up to make decisions we can stand by, building a life that’s truly our own and a future we want to be a part of.

Although the definition is an attempt at being both thorough and personal, it still refers to “our values, principles and purpose”. The definition remains open and subjective because our values may differ from other people’s. There is also the cross-cultural aspect to consider.

The Ethics Centre [^2] discusses how ethics are not involved in every decision we make. For example, if you live in a small rural town and can buy something cheaper online than from the local shop, price is an obvious consideration. However, is buying online the right thing to do ethically when it may undermine a local business? Only the person deciding can make that determination, but they have been confronted with an ethical dilemma.

Six Questions When Facing Ethical Dilemmas

The Ethics Centre [^2] provides six questions to ask yourself when facing an ethical dilemma:

  1. Would I be happy for this decision to be headlining the news tomorrow?
  2. Is there an ethical non-negotiable at play?
  3. Will my action make the world a better place?
  4. What would happen if everybody did this?
  5. What will this do to my character or the character of my organisation?
  6. Is this consistent with my values and principles?

Exploring these questions can assist in reaching an ethical decision in most cases. In the new world of AI, however, there can be many unknown variables that make exploring these questions more challenging.

Ethics and AI

Ethics surrounding AI has become a growing concern in social discourse. A recent example is the call by several AI luminaries to pause further AI development for six months (Paul 2023) [^3]. Although no pause eventuated, the issue was widely debated in the media and elsewhere. The novelty of AI brings with it the fear of change that accompanies any new technology, which is perfectly understandable.

As social discourse on the ethics of AI has increased, organisations have created and published their own statements. For example, Google’s framework can be found at Google AI Principles – Google AI. The author has also published his own ethics statement on his website, which includes a section on AI, at Ethics Statement – Ric Raftis.

Several organisations have also been established to consider the importance of ethics in the development of AI.

Sacha Baron Cohen Calls Out Mark Zuckerberg

The video raises some interesting issues around ethics and social engineering. Furthermore, it strengthens a personal view that the more money there is to be made, the more elasticity is applied to ethics. One area of the video I do question, though, is the claim that regulation and democracy are the answer. Governments are already too far behind and, in my view, our democracies are corrupt, irrespective of the party’s colour. The only way to get rid of that is to have publicly funded elections. There is also the concept of antifragility and the importance of volatility in making systems succeed over time.

Principles for Ethical AI

Ethical business practices are becoming increasingly important in influencing the buying behaviour of consumers. Problems arise where organisations conduct their business in a manner not aligned with society’s expectations: an organisation may be using slave or child labour, showing little regard for the environment, or even mistreating employees. With the virtually immediate flow of information and a 24-hour news cycle, society is better informed than ever and can respond with great speed to any indiscretions of business.

Arnold & Scheutz (2018) [^4] argue in regard to ethics in AI that:

…the most effective way of ensuring that a system will abide by a set of ethical principles is to represent these principles explicitly in the system and design reasoning and decision-making algorithms to essentially use these principles

This is an interesting argument, because these inbuilt ethics are still developed by humans and can carry inappropriate bias. Moreover, the more commercially oriented the application, the more elastic, one could argue, the ethics become.

Arnold & Scheutz (2018) [^4] do, however, go on to recognise that “… it would be naive to assume that all AI systems will be designed in that way”. They further suggest that a lack of ethical frameworks would be the norm rather than the exception.

Figure 1: Values AI needs to respect.
Source: Microsoft Corporation 2018 [^5]

Fairness

AI needs to be fair. Everyone should be equal in the eyes of the AI and should be treated in a balanced manner. Computers work on the basis of logic and can make fairer decisions because they are not subject to the emotions and biases of humans. That, of course, is the ideal, but AI models are built by humans, and bias can inadvertently be built into the models (Microsoft Corporation 2018) [^5].

A well-publicised case of bias occurred when Amazon developed an AI to process job applications. The model had been built over several years on historical data in which male candidates were dominant. As a result, the AI began to favour male applicants, which was obviously unfair. Although Amazon attempted to modify the AI to be gender neutral, the program was abandoned (Dastin 2018) [^6].

The course notes provided examples of AI programs that can predict from social media posts whether a person will become depressed in the next six months, or whether a woman might become pregnant. If HR departments had access to these programs, it would permit discriminatory practices, which is clearly unfair to the people involved. The question, of course, is whether the organisation sees it this way.

One of the best ways to avoid bias is to ensure that developers have diverse workforces and train the models on large amounts of diverse data. Although governments unquestionably have a role in the regulation of AI, they struggle to keep up with developments. Despite this, there are currently several attempts around the world to produce discussion papers and implement regulation (Dalmia & Schatsky 2019) [^7]. Dalmia and Schatsky (2019) also point to initiatives by major technology companies, including IBM, Facebook and Google, in developing AI toolkits. Outside industry, the Ethics and Algorithm Toolkit was developed by a number of agencies to assist local government with AI decision making (Quaintance cited in Dalmia & Schatsky 2019) [^7]. A simple illustration of the kind of check such toolkits perform is sketched below.
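
As a minimal sketch of a fairness check, the code below computes approval rates per group from a set of decision records, a basic “demographic parity” test. The records and the 0.2 threshold are invented for illustration; a real audit would use the organisation’s own decision logs and an agreed tolerance.

```python
# Minimal sketch: comparing a model's approval rates across groups
# (demographic parity). All data below is invented for illustration.
from collections import defaultdict

decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

totals = defaultdict(int)
approvals = defaultdict(int)
for record in decisions:
    totals[record["group"]] += 1
    approvals[record["group"]] += record["approved"]  # True counts as 1

rates = {g: round(approvals[g] / totals[g], 2) for g in totals}
print(rates)  # {'A': 0.67, 'B': 0.33}

# A large gap between approval rates is a prompt for investigation,
# not proof of bias on its own.
gap = max(rates.values()) - min(rates.values())
if gap > 0.2:  # threshold chosen arbitrarily for the example
    print(f"Warning: approval-rate gap of {gap:.0%} between groups")
```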

Reliability and safety

AI is no different from any other technology in that it has to prove itself both reliable and safe. The higher the risk of an AI having unexpected consequences, the more rigorous the testing needs to be (Microsoft Corporation 2018) [^5].

In their research, shown in the Figure below, Lorica and Nathan (2019) [^8] identified that only 34% of respondents across all sectors surveyed tested for safety and reliability, although this figure increased in the health and life sciences area. Looking at the Figure, however, it is arguable that several of the other risk checks form part of safety and reliability anyway.

Figure 2: Risk checks in machine learning (ML) models by type.
Source: Lorica & Nathan, 2019, AI Adoption in the Enterprise, O’Reilly Media.

It would be reasonable to argue that a fair comparison exists between software and AI; after all, AI runs on software. The literature presents ample evidence of users finding bugs and ways to break software that developers had never tested for those particular flaws (Whittaker 2000; Whittaker 2002). Given users’ propensity to apply use cases that have not been considered in the laboratory, is it reasonable to assume that similar events will occur when AI is in the “wild”? Arnold and Scheutz (2018) [^4] provide two examples in their paper, a rogue rescue robot and a compromised firewall, both of which could be considered valid scenarios. Hunt (2016) [^9] relates the story of Tay, the Microsoft AI chatbot that users trained to be racist and that had to be taken down. A sketch of the kind of behavioural test this comparison suggests follows below.
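
In the spirit of the software-testing comparison above, here is a minimal sketch of a behavioural test: it checks that a classifier’s prediction does not change when an irrelevant name is added to the input. The classify_review function is a hypothetical stand-in for a real model call, implemented as a trivial keyword rule so the example runs end to end.

```python
# Minimal sketch of a behavioural test for a model.
# classify_review is a hypothetical stand-in for a real model call.

def classify_review(text: str) -> str:
    # Trivial keyword rule so the sketch runs without a real model.
    return "positive" if "good" in text.lower() else "negative"

def test_invariance_to_names():
    # Adding an irrelevant name should not flip the prediction.
    base = classify_review("The service was good.")
    for name in ["Alice", "Mohammed", "Wei", "Olga"]:
        varied = classify_review(f"{name} said the service was good.")
        assert varied == base, f"Prediction changed for name {name!r}"

test_invariance_to_names()
print("Invariance test passed")
```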

Privacy and security

According to Microsoft (2018) [^5], if people do not trust organisations to collect their data, secure it properly and maintain its privacy, they may stop sharing it. Security is one of the greatest concerns on the internet, and one only has to look at several recent examples of large data breaches to understand why. According to Dalmia and Schatsky (2019) [^7], 9.7 billion data records have been stolen or lost since 2013. Given the growth of the internet, this figure may well have increased considerably since that time. One small measure organisations can take is to pseudonymise identifiers before data is stored, as sketched below.
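
As a minimal illustration of privacy by design, the sketch below pseudonymises user identifiers with a keyed hash before a record is stored for analytics. The secret key and record layout are invented for the example; a real system would manage the key in a secrets store and consider broader anonymisation techniques.

```python
# Minimal sketch: pseudonymising identifiers before storage.
# The key and record layout are invented for illustration.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-securely-stored-key"  # hypothetical key

def pseudonymise(user_id: str) -> str:
    # Keyed hash: without the key, the original identifier cannot
    # practically be recovered from the stored value.
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()

record = {"user": pseudonymise("jane.doe@example.com"), "page": "/pricing"}
print(record)
```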

It seems that it is not just hackers who are illegally accessing such data. Cimpanu (2020) [^10] relates the story of Clearview AI, which was the subject of a class action. The company apparently scraped the web for people’s photos on social media profiles and built an enormous facial recognition database. The company was based in New York, but the complaint was filed in Illinois under that state’s Biometric Information Privacy Act (BIPA). The author does not know enough about US jurisdictional law to explain the process.

Inclusiveness

At the heart of this particular value is the concept that AI should empower all people, not just a select few; an alternative term is the democratisation of AI. Without doubt, AI has the potential to enrich developing nations through access to education and to have a huge impact on poverty.

Inclusiveness has a reverse side as well: AI should not be used to exploit people. Although the idea is sound, the reality is questionable when one considers the level of social engineering already conducted by social media organisations. The film “The Social Dilemma” provides an excellent example of how people can be manipulated without their knowledge.

Then there is the growing concern around humans forming relationships with AI and robots, which devalues human connection and the authenticity of the relationship. In turn, this raises considerable ethical questions around whether AI should be programmed to respond to humans in ways that evoke an emotional response.

As evidence of this concern, Zeng (2015) [^11] proposed the following as a potential AI law:

“Robots are artifacts; they should not be designed to exploit vulnerable users by evoking an emotional response or dependency. It should always be possible to tell a robot from a human.”

Even though these problems are recognised, some people appear to have a desire to interact with AI personas. Arslanagic-Wakefield (2023) [^12] relates the story of an app where people could design their own AI friend, which allowed the sending of explicit messages. Although this feature was stopped, its removal caused considerable consternation for users relying on the service to satisfy their relationship needs.

Transparency and accountability

These two principles are the foundations on which the others are built (Microsoft 2018) [^5]. Principles are one thing, but communicating them can be a challenge. There is no doubt that people must be offered the opportunity to thoroughly understand how AI works, uses data, secures privacy and makes decisions. In real life, however, it could be argued that the Terms and Conditions statements requiring agreement before using a website do the same, yet how many people actually read them? Perhaps this transparency comes after the event, when the question is raised as to why a loan application was declined, or why someone was unsuccessful with a promotion or job application.

The above issues relate to end users, yet the owners who deploy the AI models also require transparency about how they work. Microsoft (2018) [^5] argue that publishing an algorithm does not necessarily reveal how a model reaches a decision; much of what happens inside the model could still be described as a black box. One common way of peering inside such a black box is sketched below.
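
The sketch below, offered as an illustration rather than a description of Microsoft’s approach, uses permutation importance: shuffle one input feature at a time and measure how much the model’s score degrades. It relies on scikit-learn and its bundled breast-cancer dataset purely for demonstration.

```python
# Minimal sketch: permutation importance as a black-box explanation aid.
# Dataset and model are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature 10 times and record the average drop in accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)

# The features whose shuffling hurts accuracy most matter most to the model.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: -pair[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```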

When it comes to accountability, matters are probably much less complicated and can be compared with other products and services: anyone who designs and deploys a system must be accountable for how it works (Microsoft 2018) [^5]. Accountability can take two forms. It can be the regulations and legislation put in place by governments, but there is also accountability from an ethical perspective. The problem with the former is that advances in AI are so rapid that government law-making processes cannot keep pace.

Moving forward with AI

There is no doubt that AI represents considerable challenges for the world at large, not just business. For all intents and purposes, these are uncharted waters in which historical learnings can be used, but adaptation will likely be necessary. Dalmia and Schatsky (2019) [^7] provide a detailed list of suggested steps for managing AI outcomes, from rollout and the safety of algorithms to the mitigation of risk to society at large:

  • Acknowledge the need for ethics in the AI era. Create an AI ethics panel or task force by tapping into the expertise of the private sector, startups, academia, and social enterprises.
  • Create an algorithmic risk management strategy and governance structure to manage technical and cultural risks.
  • Develop governance structures that monitor the ethical deployment of AI.
  • Establish processes to test training data and outputs of algorithms and seek reviews from internal and external parties.
  • Encourage diversity and inclusion in the design of applications.
  • Emphasize creating explainable AI algorithms that can enhance transparency and increase trust in those affected by algorithm decisions.
  • Train developers, data architects, and users of data on the importance of data ethics, specifically relating to AI applications.

Applying the above steps, in conjunction with the six principles of ethical AI (Microsoft Corporation 2018) [^5], should provide the ability to address the Ethics Centre’s Six Questions When Facing Ethical Dilemmas discussed earlier.

The Figure below was generated with Whimsical, a platform that uses AI to generate ideas from a topic, in this case Ethics and AI. The mind map raises a number of interesting and relevant issues to explore.

Figure 3: Ethics and AI Mind Map
Source: Generated by author with AI assistance in Whimsical.

UNESCO published a document entitled UNESCO’s Recommendation on the Ethics of Artificial Intelligence, which was adopted by member states in November 2021. The document is dated January 2023 online, but also notes that it was updated in June 2023. It could be argued that these dates, and the currency of the document, reflect the rapidity of change in the area.

Below is an additional video from Oxford University’s Institute of Ethics on the subject of ethics and AI.

Conclusion

This module has explored ethics in considerable detail as it pertains to AI. Although several red flags have been raised around the misuse of AI, there are also many positive benefits for society. As with any new technology, there is no doubt that some people will abuse it, but if democratised across the planet, its benefits are difficult even to conceive. We can only hope that through the application of ethics, open source and democratisation, AI will deliver the greatest leap forward in evolution yet seen by humankind.
