The integration of AI into business necessitates effective leadership. Resistance within the organisation may arise, with individuals feeling threatened by job loss or change – a common occurrence when new technologies are introduced.
Alongside leadership, strategic management is crucial in the implementation and deployment of AI. This involves detailed planning and execution to ensure that personnel are included in the process rather than being subjected to it, adhering to the principle: ‘do with people, not to people.’ In this module, we will explore strategies for the successful deployment of artificial intelligence models. We will utilise a method known as the Strategy Choice Cascade Framework (Eggers et al. 2019) [^1]. The framework examines five key considerations regarding strategic leadership in AI deployment.
The following video examines a dialogue between two individuals, Michel van der Bel, President of Microsoft EMEA, and Heike Bruch, Professor of Leadership at the University of St. Gallen, about AI’s impact on leaders and leadership.
Notably, it explores how senior executives are adjusting to the incorporation and initiation of AI within their organisations. Part of this conversation highlights the disparity in AI implementation between slow-growing companies and rapidly expanding ones, along with potential implications.
One could argue that fast-growth firms owe part of their expansion to their adoption of AI, which puts low-growth firms at risk of losing their competitive edge.
Obstacles to developing an AI strategy
In this module, our focus on leadership diverges from conventional theories previously explored. We aim to examine leadership in the context of artificial intelligence (AI) tool implementation within organisations. The crux lies in identifying the requisite skills for a leader to facilitate a smooth transition, uphold competitive advantage and engage employees in an invigorating journey with AI to achieve real transformation within the organisation.
Fountaine, McCarthy and Saleh (2019) [^2] posit that leaders often err in their approach to AI implementation by treating it as a plug-and-play solution. This may stem from insufficient understanding of the technology, its implementation requirements, and the time frame for achieving results. Consequently, strategic planning is crucial, along with establishing suitable KPIs and timelines to monitor progress.
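To make the monitoring idea concrete, progress against KPIs and their timelines can be tracked with something as simple as a structured checklist. The sketch below is a minimal illustration only; the KPI names, targets and dates are hypothetical and not drawn from any of the sources cited.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class KPI:
    """One measurable indicator for the AI roll-out."""
    name: str
    target: float   # value agreed in the strategic plan
    actual: float   # latest measured value
    due: date       # deadline agreed in the strategic plan

    def on_track(self, today: date) -> bool:
        # A KPI is fine if it is already met, or simply not yet due.
        return self.actual >= self.target or today < self.due

# Hypothetical KPIs, for illustration only.
kpis = [
    KPI("staff trained (%)", 80.0, 65.0, date(2024, 6, 30)),
    KPI("pilot accuracy (%)", 90.0, 92.0, date(2024, 3, 31)),
]

today = date(2024, 7, 1)
for k in kpis:
    status = "on track" if k.on_track(today) else "at risk"
    print(f"{k.name}: {status}")
```

Even a lightweight structure like this makes slippage visible early, which supports the kind of progress monitoring the strategic plan calls for.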
Certainly, this should encompass staff training and management upskilling for optimal comprehension of the technology. This is pivotal to maximising its benefits. Fountaine, McCarthy and Saleh (2019) [^2] go on to argue that strategic planning and alignment of a company's culture are both crucial, particularly in supporting the adoption of AI. Established organisations may well find that their structures are set up to work in opposition to AI implementation. Many organisations have already transitioned into digital entities; for them, implementing AI simply represents the next phase in a continuous transformation process aimed at sustaining and improving competitive advantage.
Offering comprehensive training on AI's functionality and advantages within the organisation can instil confidence in staff, encouraging them to utilise the data and information the AI generates. Training and understanding can help overcome the "last mile" problem described by Manyika and Bughin (2018) [^3]. This concept parallels the traditional sales principle of 'know, like, trust, and buy': here it involves gaining knowledge about AI and appreciating its capabilities to establish trust, with 'buying' representing the implementation process.
Preparedness of leadership
As an evolving digital technology, AI has often found leaders ill-prepared. The advent of AI and the digital economy necessitates a transformation in organisational thinking and working methods to maintain competitiveness. Research in the MIT Sloan Management Review (Ready et al. 2020, p. 4) revealed that numerous leaders were not advancing at the pace the digital economy requires, showing potential resistance to re-evaluating current structures and practices.
The Sloan Management Review (cited in Ready et al. 2020, p. 4) identifies three principal challenges concerning leadership in the context of the digital economy. (Note: this quotation has not been confirmed, as the author was unable to access the article behind its paywall.)
- A deficiency in digital savviness coupled with outdated mindsets.
- A series of blind spots that prevent them from seeing a clear path forward.
- Multiple embedded tensions that undermine strategic execution.
Figure 1: Are Leaders Really Prepared?
Ready et al. (2020) [^4] identify 'blind spots' as a second key challenge in relation to AI. These are further divided into four categories: strategic, cultural, human capital and personal.
Firstly, the strategic aspect refers to an insufficient understanding of the transformational impact of AI on their organisation and its long-term implications.
Secondly, the cultural element involves resistance to new behaviours and attitudes essential for effective AI integration. The reluctance to adapt from traditional methods can hinder success unless a significant cultural shift occurs within the organisation.
Thirdly, there is often inadequate consideration given to human resources. What skills might be required, or which staff members may be needed, following a successful implementation of AI? Questions also arise regarding talent attraction and retention strategies in this new organisational model.
Lastly, it’s crucial that leaders surround themselves with trustworthy individuals possessing comprehensive knowledge and skills needed for driving forward technological advancement. This personal factor doubles as a cultural issue if misalignment exists between leadership actions and organisational direction.
Ineffective compared to effective strategies
Implementing AI strategy is not merely about technology selection, but involves a multitude of considerations. Firstly, the existing IT infrastructure must be examined. Evaluating the staff’s skill sets and potential training needs is also essential, along with an analysis of the current situation regarding data storage and the reliability of that data if it is to be used for training (Eggers et al. 2019) [^1].
Moreover, it’s crucial to assess individuals and organisational culture in relation to AI integration. Risk assessment plays a vital role as well, including any security and privacy issues. Furthermore, governance and ethics are other significant facets that cannot be overlooked.
Crucially, leaders must recognise that adopting AI within an organisation extends well beyond simply purchasing and installing AI solutions.
The following video is presented by Dr Andrew Ng, a co-founder of Coursera and a world-leading AI scientist. He speaks on the importance of not just getting excited about the technology but, more importantly, looking closely at the value that AI can deliver to your organisation.
A particularly valid point made in the video is, once again, to "start with the end in mind": look to the organisational outcomes you are seeking to achieve, then seek out a solution that delivers them.
Developing an effective AI strategy
In light of the aforementioned issues that may compromise an effective AI strategy, we now turn our attention to the development of a robust AI approach. Eggers et al. (2019) [^1] present a framework referred to as the Strategy Choice Cascade in their report Crafting an AI Strategy for Government Leaders. The framework was originally conceived by A.G. Lafley and Roger Martin and is predicated on five pivotal questions designed to shape a comprehensive business strategy.
Eggers et al. (2019) [^1], however, have refined this model, ensuring its clarity and integration across both technical and managerial aspects. The original five questions have been divided into corresponding management and technology streams, as depicted in the image below:
Figure 2: The AI Strategy Choice Cascade.
GPT-4 (Generative AI, 29 July 2023) was asked to provide suggestions of an outline for developing an AI implementation strategy. The prompt used was:
“You are a strategic planning consultant that specialises in AI implementation. Provide me with an outline of the areas that need to be addressed in a proposal, irrespective of organisation size, such as technology considerations, culture, training, etc.”. The following outline was returned. It should not be considered complete or the perfect model, but an example of potential headings for a Strategic AI Plan.
AI STRATEGIC PLAN OUTLINE AS SUGGESTED BY GPT-4
I. Executive Summary
Brief introduction of the proposal, highlighting that AI implementation is not just about technology but also about aligning it with business strategy, people, processes, and culture.
II. Business Case For AI Implementation
A detailed explanation of why the company needs to implement AI. This will include improved efficiency, cost reduction, enhanced customer service, and prediction accuracy among other factors.
III. Current State Analysis
An assessment of the current state of technology in the organisation, including data architecture, data quality, technology infrastructure, existing skill sets and company’s digital maturity.
IV. Technology Considerations
- AI Technology Selection: Insights on how to go about deciding the right AI technologies based on business needs, budget, existing IT infrastructure, etc.
- Integration with Existing Systems: Outline how the chosen AI solution can fit in with the existing technology stack.
- Data Management: Insight on how to manage, store, analyse, and protect the mass amounts of data generated by AI systems.
- Security and Privacy: Consideration of measures to ensure data credibility, security, and user privacy.
V. People & Organisation Culture
- Workforce Training: Propose suitable training programs to equip the workforce with requisite skill set to work with the new technology.
- Organizational Culture: Discussing the changes AI implementation may bring about in the company culture and the steps to manage the transformation smoothly.
VI. AI Governance & Ethics
Insights on ethical considerations when it comes to AI, covering issues related to fairness, accountability, and transparency. Also, setting standards and processes to manage and monitor AI systems.
VII. Implementation Strategy & Roadmap
Detailed steps and timelines for AI implementation, considering factors like urgency, impact, complexity, dependency, and costs.
VIII. Risk Assessment
Analysis of potential risks associated with AI technology implementation and ways to mitigate them.
IX. Measures of Success
Establishing clear measures through which success of AI implementation can be judged, such as impact on revenue, customer experience, efficiency, or cost savings.
X. Conclusion
Brief recap of the proposal, stressing the potential benefits of AI and the commitment to handle all aspects of the implementation process.
XI. Appendices
Additional supportive information as needed.
1. Define the vision
The initial step in establishing your AI implementation strategy is acknowledging that the process will not be linear. Many organisations erroneously approach such projects with a linear mindset; in practice, they are invariably non-linear.
In defining your vision, Lafley and Martin (2013) [^5] pose the question: what is our winning aspiration? Their book demonstrates how Procter & Gamble approached strategy; whilst not specifically related to AI, the concept has considerable similarities.
Microsoft provides an illustrative example of this with their four-pronged vision for AI implementation (Microsoft News Centre 2016) [^6]. This outlines the transformative goals sought by the organisation for AI implementation and democratisation.
- We’re going to harness artificial intelligence to fundamentally change how we interact with the ambient computing, the agents, in our lives.
- We’re going to infuse every application that we interact with, on any device, at any point in time, with intelligence.
- We’ll make these same intelligent capabilities that are infused in our own apps—the cognitive capabilities—available to every application developer in the world.
- We’re building the world’s most powerful AI supercomputer and making it available to anyone, via the cloud, to enable all to harness its power and tackle AI challenges, large and small.
Microsoft’s AI policy commendably aims to democratise artificial intelligence, an essential aspiration for promoting global equity. Entities like Microsoft have the capacity to offer virtual education to developing countries, thereby fostering their development and significantly improving living standards.
The first point in Charlevoix Common Vision for the Future of Artificial Intelligence (Charlevoix G7 Summit 2018) [^7] article was to prioritise a human-centric approach in AI development. It then goes on to say in point 2:
Promote investment in research and development in AI that generates public trust in new technologies, and encourage industry to invest in developing and deploying AI that supports economic growth and women’s economic empowerment while addressing issues related to accountability, assurance, liability, security, safety, gender and other biases and potential misuse.
The concepts of trust and transparency are raised several times in the document, which is congruent with the contentions of Eggers et al. (2019). Notably, the vision statement by Microsoft omits the transparency and trust factor; however, it is included in a secondary document, The Future Computed (Microsoft 2018, p. 34) [^8], which states:
We’re building AI systems that are designed to amplify natural human ingenuity. We’re deploying AI systems with the goal of making them available to everyone and aspiring to build AI systems that reflect timeless societal values so that AI earns the trust of all.
Aspirations and statements like the above engender confidence that AI development will be human-centric and conducted in an ethical manner for the benefit of humankind.
2. Determining focus
Eggers et al. (2019) suggest that the next phase is deciding where the organisation should concentrate its AI investment, with a multitude of potential methodologies to consider, particularly regarding the type of technology and the degree of human integration involved in deploying artificial intelligence.
Eggers et al. further scrutinise the necessary levels of AI integration and identify three key areas: back-office operations, customer engagement, and mission focus. These areas are universally relevant across most organisations.
The back-office primarily involves administrative tasks where AI can alleviate humans from time-consuming processes such as report generation, data production and validation. This application of AI could liberate staff from lower-order duties enabling them to engage in more valuable pursuits requiring creativity.
Customer engagement with AI is already pervasive: society is subjected to numerous algorithms across social media, news feeds, advertising and storefronts, which employ social engineering principles and confirmation bias to present content. The benefits or drawbacks of this are subjective; it may save time, but it can limit exposure to alternative viewpoints.
Mission focus demonstrates how AI can be utilised to enhance information accessibility for managers and leaders for improved decision-making processes and strategic policy formulation within an organisation. This ultimately aids in maintaining or augmenting competitive advantage. Given the vast amount of data available within an organisation, this can be supplied to an AI system for improved forecasting models based on past experience.
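As a toy illustration of forecasting from past data (not a method named by Eggers et al., and with entirely hypothetical figures), simple exponential smoothing blends each new observation with the running forecast, giving more recent data greater weight:

```python
def exp_smooth(history, alpha=0.5):
    """Simple exponential smoothing: each step blends the latest
    observation with the previous forecast, weighted by alpha."""
    forecast = history[0]
    for obs in history[1:]:
        forecast = alpha * obs + (1 - alpha) * forecast
    return forecast

# Hypothetical monthly demand figures, for illustration only.
demand = [100, 110, 105, 120]
print(exp_smooth(demand))  # next-period forecast: 112.5
```

Real organisational forecasting models are far richer, but the principle is the same: historical data feeds a model whose output supports the decisions leaders must make.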
Regarding the technical aspects of deploying your AI system, one critical factor is determining the level of human involvement: whether humans will remain in the loop alongside autonomous processes requires careful consideration. This ensures staff comfort levels are maintained while avoiding over-dependence on machines before thorough testing is completed.
Lastly, and potentially most importantly, ethical considerations surrounding deployment must also receive due attention.
3. Scoping and measuring success
In the preceding section of this module, we noted that one of the primary causes of AI implementation failure is setting unrealistic expectations (Eggers et al. 2019) [^1]. Besides high costs, the authors highlight a lack of understanding of AI, and of its implications for government, as another significant barrier to successful implementation.
This ignorance concerning AI often plagues various businesses and organisations. Their awareness can typically stem from news reports or social media content, which in turn creates an environment where the in-depth knowledge required to initiate an organisational transformation process is lacking. Consequently, linking back to step one regarding vision, it becomes crucial to elaborate on how integrating AI will benefit the organisation and align with its strategic direction.
In addition, maintaining open and honest communication is vital for fostering trust and transparency throughout all stages. The importance of ethics cannot be overstated during proposal development and execution processes.
Eggers et al. (2019) [^1] also suggest that maintaining employee trust is crucial when discussing AI's organisational value. Initially, it may be beneficial to concentrate on eliminating mundane tasks, thereby augmenting employees' skills and personal growth. This approach can foster a positive perception of AI integration within the business, as it enables workers to engage in more stimulating work. However, if employees perceive AI as jeopardising their jobs, the organisation could encounter implementation challenges requiring attention and time to resolve. An appropriate initial approach can therefore save significant time.
Another consideration in implementation processes is the associated goals. Objectives that are initially unachievable may become attainable within months due to rapid advancements in AI. It is equally important to monitor developments in AI, alongside fluctuations in cost and value: a project previously shelved due to high costs may regain viability as the technology progresses towards greater achievability and scalability.
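The point about periodically re-assessing shelved projects can be sketched as a simple cost-value check. The ratio, threshold and figures below are hypothetical, chosen purely to illustrate how a project that fails the test today can pass it once costs fall:

```python
def is_viable(expected_annual_value, annual_cost, threshold=1.5):
    """A project is deemed viable when expected value exceeds cost
    by an agreed margin (here an illustrative 1.5x)."""
    return expected_annual_value / annual_cost >= threshold

# The same hypothetical project, re-assessed as technology costs fall.
value = 300_000
print(is_viable(value, annual_cost=250_000))  # shelved: ratio 1.2
print(is_viable(value, annual_cost=150_000))  # viable:  ratio 2.0
```

A periodic review of shelved proposals against a test like this, with current prices substituted in, keeps otherwise-forgotten opportunities on the agenda.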
4. Assess capabilities and requirements
The AI Ladder, as depicted below, provides an effective roadmap for organisations to evaluate their capabilities and design a feasible path towards successful implementation. This framework considers the organisation's staff, skill sets, management structure and necessary infrastructure. Employing the AI Ladder model will furnish all the essential information needed to formulate an AI implementation policy. Such a strategy responds comprehensively to the Capabilities: Strategic Choice Questions posed by Eggers et al. in Crafting an AI Strategy for Government Leaders (2019, p. 16) [^1].
Figure 3: The AI Ladder
5. Developing AI management systems
Eggers et al. (2019) posit a fundamental question at this stage: "What systems will implement and manage AI?" Upon establishing the implementation procedures within an organisation, it is crucial to put efficient management processes in place. The deployment of AI should be evaluated not solely on performance but also in terms of its impact and cost-effectiveness. In addition, potential outcomes must be contemplated alongside ethical considerations, transparency issues, and trust-related concerns pertaining to all organisational stakeholders.
There is an emerging misconception that artificial intelligence is a technology to simply set up and leave unattended. This could not be further from the truth. Even autonomous processes require management and continuous monitoring to ensure they function as intended. AI lacks human judgement and conscience and does not comprehend societal norms or ethics in the way humans do. It therefore presents the possibility of unforeseen consequences, which necessitates detailed examination and regular supervision. This point was underscored by an incident documented by Microsoft (2018, p. 64) [^8], demonstrating why human oversight of AI is critical.
… [an AI] system designed to help make decisions about whether to hospitalize patients with pneumonia “learned” that people with asthma have a lower rate of mortality from pneumonia than the general population. This was a surprising result because people with asthma are generally considered to be at greater risk of dying from pneumonia than others. While the correlation was accurate, the system failed to detect that the primary reason for this lower mortality rate was that asthma patients receive faster and more comprehensive care than other patients because they are at greater risk. If researchers hadn’t noticed that the AI system had drawn a misleading inference, the system might have recommended against hospitalizing people with asthma…
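The misleading inference in this incident can be reproduced with a few lines of synthetic data (the numbers below are entirely fabricated, for illustration only). Because a hidden confounder, intensive care, is correlated with asthma in this toy dataset, the raw mortality rates point the wrong way, exactly the trap the quoted system fell into:

```python
import random
random.seed(0)

def simulate(n=10_000):
    """Synthetic patient records: asthma patients are higher-risk,
    but in this hypothetical data they always receive intensive care,
    which more than offsets that risk."""
    records = []
    for _ in range(n):
        asthma = random.random() < 0.2
        intensive_care = asthma            # the hidden confounder
        base_risk = 0.15 if asthma else 0.10
        risk = base_risk * (0.3 if intensive_care else 1.0)
        died = random.random() < risk
        records.append((asthma, died))
    return records

def mortality(records, asthma_flag):
    group = [died for asthma, died in records if asthma == asthma_flag]
    return sum(group) / len(group)

recs = simulate()
# The observed rate for asthma patients comes out LOWER, so a naive
# model trained on (asthma -> outcome) learns the wrong lesson.
print(f"asthma:    {mortality(recs, True):.3f}")
print(f"no asthma: {mortality(recs, False):.3f}")
```

The correlation in the data is real, but acting on it, by sending asthma patients home, would remove the very intervention that produced it. This is why human oversight and causal scrutiny of model inferences remain essential.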
Microsoft (2018) [^8] also identify the problems of malicious data being introduced into models and of attacks on systems. The supervision and governance of AI systems are therefore crucial to prevent data corruption and unintended system behaviour. Without suitable due diligence, there is a risk that subtle data corruption may occur, feeding models with incorrect information and leading to non-obvious bias. Thus, the importance of stringent monitoring and governance cannot be overstated.
Finally, the scalability of AI systems significantly impacts their management. Eggers et al. (2019, p. 19) [^1] identify three key reasons why AI projects fail due to scalability issues:
- Pilots are designed narrowly, and thus more easily achieved, but do not have much of an impact on a wider audience. Impact is what generates buy-in.
- A pilot generates limited returns, despite considerable expenditure of financial and human capital. Stakeholders become reluctant to move it to implementation.
- Scaling AI pilots requires adapting new technologies and different ways of working, which some workers inevitably resist.
Eggers et al. (2019) [^1] also warn of 'pilot purgatory', in which pilot projects stall without ever progressing to full implementation. To avoid it, they recommend prioritising initial pilots designed to yield easily measurable, immediate results with substantial impacts on returns and savings; these are favoured over more transformative but slower-moving projects in order to secure stakeholder support.
To summarise and conclude this section, the continuous supervision of AI deployment is crucial. It necessitates ongoing monitoring to ensure its performance aligns with intended objectives. Similar to an employee, it should exhibit learning and growth while progressively contributing more towards the organisation’s expansion to uphold its competitive edge.
This module has examined the issues surrounding the implementation of AI in organisations. It has highlighted the importance of a detailed and comprehensive strategy that addresses the issues of AI in a broad and inclusive manner, not just the technology. Both the Strategy Choice Cascade and the AI Ladder are valuable tools that can be used to create and implement an AI deployment.
[^1]: Eggers, W, Mendelson, T, Chew, B, & Kishnani, PK 2019, Crafting an AI strategy for government leaders, Deloitte Insights, viewed 29 July 2023, https://www2.deloitte.com/us/en/insights/industry/public-sector/ai-strategy-for-government-leaders.html.
[^2]: Fountaine, T, McCarthy, B & Saleh, T 2019, 'Building the AI-powered organization', Harvard Business Review, July-August, viewed 29 July 2023, https://hbr.org/2019/07/building-the-ai-powered-organization.
[^3]: Manyika, J & Bughin, J 2018, The promise and challenge of the age of artificial intelligence, McKinsey & Company, viewed 23 July 2023, https://www.mckinsey.com/featured-insights/artificial-intelligence/the-promise-and-challenge-of-the-age-of-artificial-intelligence.
[^4]: Ready, DA, Cohen, C, Kiron, D & Pring, B 2020, ‘The new leadership playbook for the digital age’, MIT Sloan Management Review, 21 January, viewed 25 May 2022, https://sloanreview.mit.edu/projects/the-new-leadership-playbook-for-the-digital-age/.
[^5]: Lafley, AG & Martin, R 2013, Playing to Win: How Strategy Really Works, Harvard Business Review Press, Boston.
[^6]: Microsoft News Centre 2016, Democratizing AI, viewed 30 July 2023, https://news.microsoft.com/features/democratizing-ai/.
[^7]: Charlevoix G7 Summit 2018, Charlevoix Common Vision for the Future of Artificial Intelligence, viewed 30 July 2023, http://www.g7.utoronto.ca/summit/2018charlevoix/ai-commitment.html.
[^8]: Microsoft Corporation 2018, The Future Computed: Artificial Intelligence and Its Role in Society, Microsoft, Redmond, Washington.