The subject is somewhat mixed in what it intends to cover. It considers intelligence itself: what our current understanding is and what forms it can take. Looking back to the origins of AI, it considers why this path of development was pursued and where we might be headed. It also examines the relationship between AI and business strategy, to determine how humans and machines working together can achieve remarkable advances and build significant competitive advantage. Common terms and their meanings have been added to my Obsidian vault in a Glossary of Artificial Intelligence (AI) Terms.
On defining intelligence
The advances seen over the past few years have been staggering, to the point that they would have been unimaginable 10 or 20 years ago. AI is now performing many tasks in business, such as:
- analysing customer, supplier and stakeholder interactions and behaviour
- analysing production environments and highlighting process improvements
- speeding up the activities in research and development
- powering online chatbots, among many other examples
“Artificial Intelligence” as a term was first coined in 1956 by John McCarthy. He stated “for the present purpose the artificial intelligence problem is taken to be that of making a machine behave in ways that would be called intelligent if a human were so behaving”. McCarthy redefined this in 2004 as “(artificial intelligence) is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable” (Marr 2017)[^15].
Despite the word “intelligence” being used constantly in these discussions, an agreed definition of the word is difficult to find. There is apparently no consensus on its meaning, which creates difficulties in both discussion and the literature. It would appear that any discussion or paper on AI needs to state the definition of intelligence it is using as its basis.
The video demonstrates the difficulty of defining intelligence and the many attempts to do so. Then there is “collective intelligence”, a common concept in business and organisations, where the wisdom of the crowd is held to outperform the wisdom of the individual. This view is argued by James Surowiecki in his book, “The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations”. The concept is also referred to as “networked thinking” [^1]. Guszcza & Schwartz (2019)[^2] discuss collective intelligence and its definition in their article, in which they interview Thomas Malone, the Director of the MIT Center for Collective Intelligence. Interestingly, it is posited that collective intelligence is an emergent property of a group. Malone further states, however, that the concept of collective intelligence could apply to a group that includes machines, not just people. The article is a worthwhile read as it puts intelligence in context with the collective intelligence of people, and of people with machines.
The video below was recommended in the notes. It was made in 2018 and almost predicts what we are experiencing right now. The most significant message of the video is that working collectively with humans and machines is anticipated to be the way of achieving the most beneficial outcomes for humanity.
Past, present and future of AI
It would appear that conceptualising AI is nothing new to humans. Adrienne Mayor[^3] says in her book that ideas of artificial life and robots appeared in ancient myths. In the realm of science fiction, the earliest example would perhaps be Frankenstein. The video below mentions Aristotle and philosophical attempts to establish an understandable form of human reasoning, which became known as syllogistic logic. Bertrand Russell and Alfred North Whitehead then published Principia Mathematica in the early 20th century, which provided foundations for the formal representation of mathematics. This in turn led Alan Turing to apply mathematical reasoning in his code-breaking machines in the early 1940s.
John McCarthy famously said that “As soon as it works, no-one calls it AI anymore”. There is a parallel here with mental health, in that a condition is regarded as a mental illness until physical causes are discovered, at which point it moves into the domain of physical health.
In the last 10 years, there have been enormous, yet not always visible, advances in artificial intelligence. Then in late 2022, with the advent of ChatGPT and Midjourney, the generation of text and images from a prompt became a reality. The better the prompt, the better the output. This has not been without controversy, particularly in the areas of copyright and plagiarism. AI-generated art won first place at the Colorado State Fair in 2022, which generated much criticism (Roose 2022)[^4]. It is arguable that AI-generated art is still art, but in a different form; this would be similar to raw photographs printed from film compared to those that have been digitally enhanced.
In 2023, a German artist, Boris Eldagsen, won the prestigious Sony World Photography Awards with an AI-generated photograph (Grierson 2023)[^5]. He declined to accept the prize, however, claiming the purpose of the entry was to stimulate debate. Each day brings more striking examples of AI performing in ways not achieved before.
Predicting the future is impossible of course, but even experts in the field of AI disagree on timelines and potential achievements. A common thread however when following the subject is the concern over ethics and bias in the training of LLMs.
An article in the Guardian, “‘Why would we employ people?’ Experts on five ways AI will change work”, was suggested for reading and has been read and annotated [^18].
Considerable competition exists in the market at present between the major players over AI chatbots powered by LLMs. The push is to be first to market with leading AI chatbots capable of interacting with humans in areas such as text, speech and image generation. This research pushes the current limitations of Natural Language Processing (NLP) and pursues the ability to deal with more complex tasks. Improving NLP with AI has the potential to facilitate greater use of speech interaction which can be faster than text.
LLMs such as ChatGPT and Bard can generate content with amazing speed, but their accuracy still needs to be verified. The current crop of chatbots is known to “hallucinate” on occasion: they will provide compelling content backed by references to articles that do not exist. There is a recorded example of a lawyer in the US who used ChatGPT to cite non-existent cases in a submission relating to a personal injury claim (Carrick & Kesteven 2023)[^6].
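The verification step can be illustrated with a minimal sketch. The case index and citation strings below are hypothetical examples, not a real legal database or API; the point is simply that generated citations should be checked against a trusted source before being relied upon:

```python
# A minimal sketch of guarding against hallucinated citations.
# KNOWN_CASES and the citations are invented for illustration only.

KNOWN_CASES = {
    "Smith v Jones (1999)",
    "Doe v Acme Corp (2015)",
}

def verify_citations(citations):
    """Split model-generated citations into verified and unverified lists."""
    verified = [c for c in citations if c in KNOWN_CASES]
    unverified = [c for c in citations if c not in KNOWN_CASES]
    return verified, unverified

# A plausible-looking but invented case is flagged rather than trusted.
generated = ["Smith v Jones (1999)", "Brown v Fictional Airlines (2019)"]
ok, suspect = verify_citations(generated)
```

In practice the lookup would be against an authoritative index rather than a hard-coded set, but the principle of separating verified from unverified output is the same.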
Comparisons abound regarding which chatbot is best, but the rankings can change rapidly with each new iteration. It is even conceivable to ask each chatbot for a comparison of itself with the others.
Narrow and general artificial intelligence
AI can be divided into several categories, so definitions are important to distinguish one from another. Essentially, though, there are two basic types: narrow and general.
Narrow AI is a system based on machine learning in which a specific problem is addressed. Conversely, general AI applies where machines can solve a variety of problems in a similar way to humans. General AI is undoubtedly the future direction, with Artificial General Intelligence being the holy grail: a machine that can think and learn for itself.
Super Intelligence refers to the point where AI can continuously augment its knowledge and performance and surpass the knowledge of humans.
Walch (2019)[^7] has suggested “Cognitive Intelligence” as a more appropriate term for narrow AI. Although her article has some sound considerations, the departure from a common overarching term such as “AI” could potentially cause some confusion. It should also be noted her article was written in 2019, before the advent of ChatGPT and other AI tools. It is arguable that one of the problems of such rapid advances is the shifting nature of definitions. Bubeck et al. (2023)[^19] state that “Given the breadth and depth of GPT-4’s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.” GPT-4 is no doubt an extremely powerful advance on GPT-3.5, but whether it is viewed as AGI, partial AGI or even narrow AI will depend on your interpretation of its abilities and your definitions.
An exploration of use cases drew on the article “The future is now: Unlocking the promise of AI in industrials” (Borden et al. 2022)[^8]. One of the statements made is that the term artificial intelligence is being overused. The authors argue that AI needs a new definition and suggest “AI is the ability of a machine to perform cognitive functions typically associated with human minds, such as perceiving, reasoning, learning, interacting with the environment, and problem solving. Examples of AI technologies include robotics, autonomous vehicles, computer vision, language, virtual agents, and machine learning.” The use of the term “cognitive” would appear to support Walch (2019)[^7] in her definition around narrow AI.
Figure 1: The future of work with generative AI.
Being able to predict future events has always been a desire of humans. The ability to make predictions is closely linked to intelligence, as it requires processing knowledge and information in a meaningful way to generate a number of potential outcomes from which to choose (Hawkins & Blakeslee, cited in Nagar & Malone 2011, p. 2)[^9]. Although this is regarded as a requirement and indication of intelligence, humans are not good at processing large amounts of data at once to arrive at predictions. In addition, humans are prone to bias from past experiences and learned behaviour, and can be overly optimistic, biased towards what we want to happen (Ossola 2019)[^10].
Business relies on its ability to predict with as much accuracy as possible when it comes to things like inventory, cash flow and profit. This used to be done manually, but with advances in computing, the capture and processing of relevant data have made considerable leaps forward. This has allowed more sophisticated modelling, backed by the data on which it is based.
Quality of data is critical to making more reliable predictions. Internally, this data can be controlled and refined to ensure it is accurate and useful. However, many areas of business rely on third-party data, which at times could even be anecdotal. This can bring the value and accuracy of the data into question, and AI potentially offers opportunities to strengthen the veracity of such data. In an experiment published in 2013, it was found that when predicting the success of songs, a combination of humans and machines was more reliable than humans or machines individually (Seifert & Hadida 2013)[^11]. It should be noted, however, that this study is now 10 years old, and advances in machine learning may have altered the outcomes.
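As a simple illustration of the human-plus-machine idea (a sketch only, not the actual method used by Seifert & Hadida), a combined prediction can be formed as a weighted average of the two sources, where the weight is an assumed parameter:

```python
def combined_forecast(human_pred, machine_pred, w_human=0.5):
    """Weighted average of a human and a machine prediction.

    w_human is an assumed tuning parameter; in practice it would be
    calibrated against the historical accuracy of each source.
    """
    return w_human * human_pred + (1 - w_human) * machine_pred

# Illustrative only: predicted peak chart position for a song.
human = 12.0    # expert's estimate
machine = 20.0  # model's estimate
combined = combined_forecast(human, machine)  # 16.0
```

Even this trivial ensemble shows the design choice involved: the weighting determines how much the human judgement tempers the machine's output, and vice versa.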
Intelligence Augmentation (IA)
Gartner (n.d.)[^12] defines Augmented Intelligence as “… a design pattern for a human-centered partnership model of people and artificial intelligence (AI) working together to enhance cognitive performance, including learning, decision making and new experiences.” There does not appear to be any consensus on definitions in the literature. The subject notes refer to an article asking some very relevant questions about the relationship between AI and IA. Two comments from the article are considered worth repeating here. The first is the question “Is replicating human intelligence the most impactful application, and therefore the key goal, of intelligence technology for businesses?”. The second is the claim that “While the underlying technologies powering AI and IA are the same, the goals and applications are fundamentally different: AI aims to create systems that run without humans, whereas IA aims to create systems that make humans better.” (Masih 2019)[^13].
Figure 2: The Artificial Intelligence Spectrum
AI vs IA
A question facing humanity at present is whether we wish to progress down a path of replacing humans with AI or enhancing humans with IA. The question raises considerable technological, moral and ethical challenges. Furthermore, it has the potential to engender fear through misunderstanding and misinformation. Perhaps the answer is that in some cases AI is the solution and in others IA; effectively a mix of the two.
The video below also raises some very interesting and valid questions about AI vs IA. Are we going to take professionals and turn them into super professionals, or are we going to rely on AI doing the heavy lifting alongside much less qualified professionals?
AI is certainly a component to be considered in strategic management and the development of strategic objectives. It has the potential to make considerable differences to an organisation’s competitive advantage.
The video below is a refresher on Porter’s theories around generic strategies to obtain strategic competitive advantage.
Obviously, the power represented by AI makes it a valuable strategic tool for an organisation to improve its competitive position. A failure to recognise the potential of AI could see an organisation left behind by its competitors.
Bandopadhyay (2018)[^14] makes a clear distinction between AI and strategy by referring back to Porter’s definitions of strategy and then asking how AI can be incorporated into the organisational strategy. It would appear she is arguing that there is no AI strategy, but a strategy where AI is a component.
In the video below, Bandopadhyay also argues the importance of establishing a baseline around AI. Many organisations are putting money into AI, and a baseline and target are required to show where the money has gone and what was achieved for the investment. She argues that this is the only way organisations can measure the value of their investment.
If we return to Porter’s strategy concepts, then the questions to address around AI could be as follows:
- Do we have a cost leadership strategy? – how can AI be used to reduce costs and improve competitive advantage?
- Do we have a differentiation strategy? – how can we use AI to better identify customers’ preferences and potential new markets?
- Do we have a focus strategy? – how can AI be used to increase market share on best products and services?
By applying these questions to strategic management, we can see how AI fits as but one component of an overall strategic objective.
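The questions above can be captured as a simple lookup, mapping each of Porter's generic strategies to its AI question. This is purely an illustrative data structure of this note's own devising, not any formal framework or API:

```python
# Illustrative mapping of Porter's generic strategies to the AI
# questions listed above. Names and structure are this note's own.

PORTER_AI_QUESTIONS = {
    "cost leadership": "How can AI be used to reduce costs and improve "
                       "competitive advantage?",
    "differentiation": "How can we use AI to better identify customers' "
                       "preferences and potential new markets?",
    "focus": "How can AI be used to increase market share on best "
             "products and services?",
}

def ai_question_for(strategy):
    """Return the AI question for a given generic strategy, if mapped."""
    return PORTER_AI_QUESTIONS.get(strategy.lower().strip(),
                                   "No mapping defined for this strategy.")
```

Framing the mapping this way makes the point of the section concrete: the strategy comes first, and the AI question is looked up from it, not the other way around.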
This first module has focused heavily on definitions of intelligence in its several forms. We also looked at the beginnings of AI and the manner in which it is being integrated into organisations. It has been many decades since Alan Turing and his colleagues broke the ciphers of the Enigma machine, and it is here that my opinion is demonstrated. I consider him to be a giant of the computing and intelligence world, and his work contributed so much to what we have today. It is such a great shame he was treated in the manner in which he was.
[^1]: Surowiecki, J 2004, The wisdom of crowds: Why the many are smarter than the few and how collective wisdom shapes business, economies, societies, and nations, Doubleday, New York.
[^2]: Guszcza, J & Schwartz, J 2019, ‘Superminds: How humans and machines can work together’, Deloitte Review, January, viewed 17 May 2022, https://www2.deloitte.com/content/dam/insights/us/articles/4947_Superminds/DI_DR24_Superminds.pdf.
[^3]: Mayor, A 2018, Gods and robots: Myths, machines, and ancient dreams of technology, Princeton University Press, Princeton.
[^4]: Roose, K 2022, ‘An A.I.-Generated Picture Won an Art Prize. Artists Aren’t Happy.’, The New York Times, September 2, viewed 27 June 2023, https://www.nytimes.com/2022/09/02/technology/ai-artificial-intelligence-artists.html.
[^5]: Grierson, J 2023, ‘Photographer admits prize-winning image was AI-generated’, The Guardian, April 17, viewed 27 June 2023, https://www.theguardian.com/technology/2023/apr/17/photographer-admits-prize-winning-image-was-ai-generated.
[^6]: Carrick, D & Kesteven, S 2023, ‘“Use with caution”: How ChatGPT landed this US lawyer and his firm in hot water’, ABC News, June 24, viewed 28 June 2023, https://www.abc.net.au/news/2023-06-24/us-lawyer-uses-chatgpt-to-research-case-with-embarrassing-result/102490068.
[^7]: Walch, K 2019, Why Cognitive Technology May Be A Better Term Than Artificial Intelligence, Forbes, viewed 28 June 2023, https://www.forbes.com/sites/cognitiveworld/2019/12/22/why-cognitive-technology-may-be-a-better-term-than-artificial-intelligence/.
[^8]: Borden, K, Huntington, M, Kamat, M, Singla, A, Wijpkema, J & Wiseman, B 2022, ‘The future is now: Unlocking the promise of AI in industrials’, McKinsey, viewed 28 June 2023, https://www.mckinsey.com/industries/automotive-and-assembly/our-insights/the-future-is-now-unlocking-the-promise-of-ai-in-industrials.
[^9]: Nagar, Y & Malone, T 2011, ‘Making Business Predictions by Combining Human and Machine Intelligence in Prediction Markets’, in International Conference on Interaction Sciences, viewed 29 June 2023, https://www.semanticscholar.org/paper/Making-Business-Predictions-by-Combining-Human-and-Nagar-Malone/0ec76ed17ba34a4691ea07d5f36b6c990e9d97b6.
[^10]: Ossola, A 2019, Why are humans so bad at predicting the future?, Quartz, viewed 29 June 2023, https://qz.com/1752106/why-are-humans-so-bad-at-predicting-the-future.
[^11]: Seifert, M & Hadida, AL 2013, ‘3 Humans + 1 Computer = Best Prediction’, Harvard Business Review, May, vol. 91, no. 5, p. 28, Harvard Business School Publication Corp.
[^12]: Gartner n.d., Definition of Augmented Intelligence – Gartner Information Technology Glossary, Gartner, viewed 29 June 2023, https://www.gartner.com/en/information-technology/glossary/augmented-intelligence.
[^13]: Masih, A 2019, Augmented Intelligence, not Artificial Intelligence, is the Future, Medium, viewed 29 June 2023, https://medium.datadriveninvestor.com/augmented-intelligence-not-artificial-intelligence-is-the-future-f07ada7d4815.
[^14]: Bandopadhyay, T 2018, The ‘Why AI’ Framework to Start With, LinkedIn, viewed 30 June 2023, https://www.linkedin.com/pulse/whats-your-artificial-intelligence-ai-framework-tapati-bandopadhyay/.
[^15]: Marr, B 2017, The complete beginner’s guide to artificial intelligence, 25 April, viewed 17 May 2022, https://www.forbes.com/sites/bernardmarr/2017/04/25/the-complete-beginners-guide-to-artificial-intelligence/?sh=20deeac34a83.
[^18]: Sabhikhi, A & Sanchez, M n.d., 10 questions frequently asked about augmented intelligence, viewed 7 June 2021, https://www.cognitivescale.com/wp-content/uploads/2017/05/Augmented_Intelligence_eBook.pdf, p. 7
[^19]: Bubeck, S, Chandrasekaran, V, Eldan, R, Gehrke, J, Horvitz, E, Kamar, E, Lee, P, Lee, YT, Li, Y, Lundberg, S, Nori, H, Palangi, H, Ribeiro, MT, & Zhang, Y 2023, ‘Sparks of Artificial General Intelligence: Early experiments with GPT-4’, April 13, arXiv, viewed 1 July 2023, http://arxiv.org/abs/2303.12712.