
Artificial Intelligence


  1. REPORT: Real leadership
  2. Will artificial intelligence force us to rethink business in the decades to come, or will it fall short of all the hype? Here, a look at what executives need to know.
  3. As artificial intelligence (AI) expands the boundaries of what computers are capable of, it also puts new demands on executives to understand these evolving technologies, to find practical applications for them in their companies, and to resolve the unique ethical dilemmas arising from machines with ever more human qualities. For many, it’s a whole new world. While artificial intelligence may come to define business transformation in the 21st century, most people are unable to describe what AI actually is, or to gauge its potential impact on their companies and society. “AI is the new IT. It means lots of different things to different people,” says Dario Gil, vice president for AI and quantum computing at IBM. AI in its most straightforward definition means the demonstration of intelligence by machines. The AI field includes machine learning, neural networks and deep learning. But AI has also become an umbrella term for many other related subjects, including big data. “AI now is what Marvin Minsky, co-founder of the Massachusetts Institute of Technology’s AI laboratory, used to call a ‘suitcase word,’ in the sense that it contains many different meanings. We need to move from this to the specifics,” says IESE professor Javier Zamora.
  4. AI and its impact on management have been the focus of a number of forums at IESE in recent months, gathering industry executives and academics to consider what it all means for the future.
Six decades of stops and starts
While companies are investing more in AI than ever before, it is not a new concept. Artificial intelligence was founded as an academic discipline in 1956, and business felt the first tremors of this seismic shift in technology in the 1950s, when early computers gave retailers the power to collate sales data on an unprecedented scale. But AI has historically struggled to find practical applications that live up to its promise. It did not work well enough to be useful for most companies until the last 10 years or so, when lower-cost computational power and more sophisticated algorithms intersected with the vast data sets that could be harvested from the internet and other sources.
Competitive advantage in an AI world
Strategy is not immune to the wave of change brought on by AI. In the short term, companies might find competitive advantage in being early adopters of AI. In the medium and long term, however, today’s cutting-edge technologies will become ubiquitous. The advantages of having access to data and algorithms will diminish. So if the heart of competitive advantage is strategic differentiation, how then will companies achieve it? The answer may lie in taking a step back from the technology and considering another resource: people. Algorithms excel at being hyper-rational: extrapolating and drawing logical conclusions to maximize efficiency. People who are rational thinkers are easy to manage and fit well into organizations. But hyper-rationality carries within it a profound problem when it comes to defining strategy. Firms that rely on the same or similar reasonable big-data analyses and algorithms as everyone else will struggle to differentiate themselves strategically. Human beings, with highly developed soft skills, can consider the emotional context and connections of strategic decisions. They can be contrary and ask difficult, unexpected questions that result in actionable insights. They have imagination and can make intuitive leaps that AI is, at best, many decades away from replicating. One of the key roles for human managers in helping companies differentiate themselves will be to go to the core of exploring what a company is – to consider its purpose and identity, to set its goals for the future. The human capacity for moral and spiritual discretion will be paramount to the sustainability of companies. Good managers also have the ability to make tradeoffs. They reconcile different points of view within a company and manage complex and unaligned stakeholder demands. They balance the need to make short-term profits with the need to take risks for longer-term growth, despite the uncertain payoffs. AI brings us to an inflection point in strategic decision-making for competitive advantage. Managing it successfully will depend on keeping a clear view of differentiation. We must harness the power of technological change without allowing ourselves to get swept away by its trends, precisely because everyone else is. Strategy differentiation will depend on how firms combine technology with their internal organizational capabilities and their ability to be more human – and sometimes unreasonable, emotional, insightful and creative – and not more machine-like. Bruno Cassiman, Professor of Strategic Management and holder of the Nissan Chair of Corporate Strategy and International Competitiveness at IESE.
Virtual assistants like Siri, Google Assistant and Alexa are some of the user-friendly forms of AI.
  5. Managing talent with data and humanity
Marta Elvira, Professor of Managing People in Organizations and Strategic Management, and holder of the Puig Chair of Global Leadership Development
Companies are increasingly using artificial intelligence to assist decision-making on HR strategies. AI tools open up new possibilities in understanding the strengths of a company’s workforce as well as tailoring its talent development practices to strategic goals. This is not a peripheral issue. Better performing companies are those that foster the adaptation abilities of their workforce, developing and compensating professionals who contribute to innovation. Firms’ long-term sustainability greatly depends on their competence in managing their high potentials and encouraging creativity through diversity. This means making sure that human capital strategies are prioritized on top management teams’ and boards’ agendas. AI can help us move past broad initiatives, so we can focus on individuals. The ability to analyze data and identify patterns will allow us to pinpoint with greater accuracy what each person is likely to be good at, based on current strengths. AI can also help managers make informed decisions on training. As data sets grow, algorithms will be able to detect what’s working across organizations and suggest how to implement best practices in ways tailored to individual needs. For employees, this should result in greater job satisfaction, learning and productivity. At the level of corporate governance, AI can help boards understand and offer better guidance on talent issues. For this to happen, board members must get informed about data and AI. And HR managers must get their data house in order. Most HR departments don’t gather nearly enough suitable information, and they frequently compound the problem with suboptimal database management. When companies do have databases that are large and well organized enough to deploy AI tools, they must then consider the objectives of those decisions. In recruitment, for example, if the focus is exclusively on reducing costs instead of identifying better hires, the long-term outcome could be worse for everyone. It is also imperative that companies guard against the lure of what Wharton’s Peter Cappelli calls “cool tools.” Today’s recruitment and people-management software tools tend to be strong on engineering and weak on psychology. They have been developed by people who, in the main, lack significant real-world management experience. The claims developers make for them may not lead to better real-world results. Fairness is another concern. Algorithms can be systematically biased, reflecting hiring managers’ biases regarding anything from gender to race to education. Biases risk being replicated at scale across a whole company. Human oversight of AI decisions will increase in importance as the power and ubiquity of the technology grows, requiring a people-first approach from managers with emotional intelligence and integrity.
  6. “What made possible the recent successes of artificial intelligence and deep learning are basically two ingredients that were missing some years ago: access at relatively low cost to very high performance computing and of course the availability of lots of data,” says Ramón López de Mántaras, director of the Artificial Intelligence Research Institute at the Spanish National Research Council. “The concepts have been around for a long time. But actually AI right now is still in its infancy.” Nonetheless, some are beginning to view AI as a so-called general purpose technology, capable of transforming broad sectors of the economy, on par with developments such as the steam engine, electricity and the internet. Beyond the large high-tech companies that have been early adopters, today AI is penetrating the automotive, telecommunications and finance sectors. It is being used in retail and the media, and is seen as having enormous potential in areas such as healthcare and education. Julian Birkinshaw, a professor at London Business School, describes the growing ubiquity of AI by using the analogy of living in a mountainous world and watching the waters gradually rising. At present, he says, AI is about making existing processes more efficient. We believe that we are on safe ground – that there are tasks only humans can do, requiring cognitive insights that AI will never reach. But he cautions against complacency. “The future,” he says, “will be very different.” Our own cultural attitudes may determine the success of this adaptation. Tomo Noda, founder of Shizenkan University in Japan, points out that Japanese media frequently depicts AI as friendly and here to help us, citing Doraemon and Astroboy as examples. He contrasts that with the dystopian visions presented by Hollywood in movies such as AI, 2001: A Space Odyssey and The Terminator. Noda urges us to reframe our engagement with the new technology by asking more positive questions: Can we do what we do better with AI? What new opportunities can we explore? How can we create a desirable society together with AI?
From narrow to general and back
Current AI technology is “narrow,” capable of working at superhuman speed on specific tasks in specific domains – such as language translation, speech transcription, object detection and face recognition – given good enough data. But we are already on the path to “broad” AI, which will be highly disruptive. To “teach” the algorithms that AI uses, humans still often need to label the first data sets themselves (for example, by identifying objects in photos). This is an expensive process and, as a consequence, most organizations don’t have enough labeled data. That is one of the reasons that many of the companies which have incorporated AI into their core businesses have been large, digital natives such as Google and Amazon. Broad AI will be able to transfer some learning across networks, or use reason to create its own labels, thus learning more from less data (see the transfer-learning sketch after the transcript). Narrow AI needs a new neural network for each task but broad AI will multitask across multiple domains.
  7. Beyond broad AI, “general” AI, with full cross-domain learning and autonomous reasoning, will be truly revolutionary. But machines capable of reasoning and understanding the way humans do are decades away, and fears of “singularity” – the moment when AI surpasses human intelligence and evolves beyond our control – are premature. Still, those concerns will one day have to be addressed.
New management competencies
And even narrow AI requires new knowledge and abilities of management. Studies have shown that AI is most successfully deployed in companies where it is prioritized by the highest-level executives. Executives must have a basic understanding of data and its potential uses, to have a sense of which tasks are best handled by AI and which are best left to humans, and of how to enable their teams with these new technologies. That is not to say that executives must be data scientists or computer programmers. But they do need to be able to ask the right questions. “The most relevant competency for managers is to know what’s possible,” says INSEAD Dean Ilian Mihov. “Algorithms will find patterns in the data but they can’t interpret them. Managers have to think about what might be happening. Then ask themselves how they can create an experiment to test if it’s true.” For IESE’s Zamora, putting AI into practice begins with a series of basic questions that executives must ask themselves: Is AI relevant to the problem that I’m dealing with? Do I have enough high-quality data to train the system to produce solutions? Are there tools such as software and algorithms available that can be applied to this problem? Can I trust the results of the algorithms? Data readiness must go beyond the management level. Creating a data-fluent corporate culture is essential. Nico Rose, vice president of employer branding and talent acquisition at Bertelsmann, recounts how his company hired brilliant data scientists from big tech companies to implement advanced AI solutions. But they had little success. Switching tack, Bertelsmann began focusing on building an organizational ecosystem that supports data readiness. Hiring is part of that, but so is continuous education and learning for all employees. Today’s university students face the prospect of forging their entire careers amid the changes AI will bring – and they need to be prepared. Over 1,000 Stanford students enrolled in an introduction to machine learning class this past academic year. Professor Jeffrey Pfeffer explains: “They don’t want to be computer scientists; they just know it’s an important tool for everything.”
The humans in the mix
There are also limits. Wharton’s Peter Cappelli calls the corporate world’s new obsession with data “the revenge of the engineers.” He says that management is in part to blame for this in areas such as human resources because it has developed so many theories that have failed. But rather than turn people divisions over to data scientists, executives should reconsider how they manage, evaluate and motivate people. For the World Economic Forum’s Anne Marie Engtoft Larsen, those skills are not all about technology. “I think there is a huge need for us to go back to some of these more social, human sciences, such as complex problem solving, critical thinking, creativity, people management, coordination with others, and emotional intelligence.” And executives should reflect on what they do best and focus on how their strengths differ from those of AI. There are some areas of strategic decision-making where humans will always have an advantage. These include injecting creativity, originality and emotion into issues, and making concessions. The most insightful strategies, LBS’s Birkinshaw asserts, are often taken from metaphors from very different spheres. “That’s a very human quality,” he says, “to see a pattern in something unrelated and link it back to the world in which we’re working. I can’t see how computers can do that.”
  8. THOUGHTS FROM AROUND THE GLOBE
“Without defining values we cannot deal effectively with problem solving” – Tomo Noda, Shizenkan University
“We need to have a discussion around what values are going to guide how we develop these technologies” – Anne Marie Engtoft Larsen, World Economic Forum
“If you look at the evidence of what creates long-term value for the firm, it is customer asset building and brand asset building, which require that holistic view. And so far, I have never seen a computer build a brand” – Dominique Hanssens, University of California, Los Angeles
  9. He believes it is worthwhile to go back to basics and ask: What is a strategy? Citing American professor and author Michael Porter, he defines strategy in two dimensions: as choices that lead to actions, and – in the case of successful strategies – as being differentiated. A good strategy, in other words, is different to that of one’s competitors. If companies rely on similar data sources and algorithms for AI-supported strategic decision-making, differentiation could be lost. Purely quantitative decisions can be too sterile; emotional conviction still adds value. “Planning, budgeting and organizing can be done by AI,” says Shizenkan’s Noda. “But establishing vision, aligning people and motivating them requires people.” Executives also must reflect on why firms exist, in order to clarify their own roles within them in a changing AI world. A key reason is to craft a sense of identity or purpose. Successful companies understand this and know who they are. A second reason firms exist is because they are good at managing complex tradeoffs, even over time and in the face of shareholder pressure. AI is not good at this and is unlikely to become so. Nor is it good at building processes for reconciling diverse points of view.
Thinking outside the black box
The next major phase in the evolution of artificial intelligence has already begun. Managers need to be aware of what’s possible, not only now but also around the corner, and the implications it will have for how we work. Today’s AI falls mainly into the category of so-called “narrow” AI, with enough computing power to deliver superhuman accuracy and speed when performing single tasks in single domains. Hence, despite its raw power, its applicability remains relatively limited. However, “broad” AI has the potential to be much more disruptive, once it is able to operate in a multi-task, multi-domain, multi-modal manner. In the near future, broad AI will impact risk modeling, prediction making and even strategic planning. In this world where AI can make faster, better-informed decisions than humans, what will the role of managers be? The answer to that can be found in some of AI’s inherent limitations, which even broad AI is unlikely to overcome. For all their power, algorithms don’t solve everything and will only ever be as good as the data fed into them. Self-reinforcing biases are a real risk, as was demonstrated by an MIT research project in 2018, which discovered that leading AI-powered face-recognition systems were “racist” – incorrectly determining the gender of dark-skinned females because the algorithms had been trained primarily with images of white males. Another system used by U.S. courts was found to be prone to misjudge the likely recidivism of black defendants because of biased data. Human managers will need to be trained to be on the alert for such potential biases in order to provide a safeguard against blind acceptance of AI-generated decisions. This “decision curation” is especially important in the face of another AI characteristic – “black-box” decision-making. This is the idea that we can understand what goes into AI systems (data) and what comes out (useful information), but not what goes on inside. As AI becomes more sophisticated, the math it uses is becoming so complex that it is sometimes beyond human comprehension – even for the people who wrote the algorithms. As a result, we can’t “show the working” and explain how AI reaches its conclusions. This has implications for transparency and accountability when the system’s output is used to make important – perhaps even life or death – decisions. Managers, therefore, must understand the context in which black-box AI decisions are to be implemented, and consider their consequences. To do this requires a (thus far) uniquely human quality: imagination. Companies often pay lip service to the idea of thinking outside the box; in the future, this will be a literal necessity. The ability to ask “what if?” – to look before leaping – is paramount to effectively and ethically harness the power of AI. Sandra Sieber, Professor and head of the Information Systems Department at IESE.
  10. Bringing ethics to the table
AI raises the spectre of machines that we create but that may one day surpass us, of computers that we program but that may take decisions we do not fully understand, of robots that assist us but that may make mistakes. All of this generates a host of new ethical quandaries, particularly around issues of fairness, accountability and transparency. The fairness debate, in part, centers around the danger of perpetuating biases. While data sets are ostensibly objective, they actually reflect ingrained biases and prejudices. If data is then fed into algorithms, how can we ensure that those biases are not embedded into the future? Accountability has to do with deciding who is ultimately responsible for systems, particularly when something goes wrong. As machines become more autonomous, there are increasing questions about where responsibility lies among those who design and deploy the systems, and the systems themselves. Who is responsible if a cognitive machine takes the wrong decision? Transparency and explainability become issues as AI-assisted decisions become less easy to understand. How can decisions be interpreted and understood if we don’t understand how they are arrived at? As AI algorithms become so complex that even their authors don’t fully understand them, how can we explain and justify the decisions they make? These are unique dilemmas, although ethicists also warn against exceptionalism in the AI debate. Previous ethical debates, such as those surrounding medicine or nuclear energy, have much to contribute to the conversation.
AI over time
First wave (mid 1950s to early 1970s): Term “artificial intelligence” used for first time in 1956, amid a surge in interest in this new field.
Second wave (mid 1980s): Government funding and AI programs with practical applications spark renewed interest in AI.
Third wave (mid 1990s to present): More powerful computers, better algorithms and more data cause AI to flourish.
  11. For most companies, the ethical questions will relate more to AI’s application than its technological development. Even narrow AI is affecting society at large, rapidly making some jobs and industries obsolete while creating new ones, which require completely new skill sets. While businesses adapt to the AI-fueled Fourth Industrial Revolution, millions of people around the world have failed to benefit from the advances of the First, the Second, or the Third. The implications of AI go well beyond the business sphere, involving policymakers and governments. Stanford’s Pfeffer believes that governments lack the skills they need to manage the employment changes AI will bring. “Advanced, industrialized countries have done a bad job of managing the move from manufacturing to services,” he says. “They will struggle here too, especially with ageing and sometimes shrinking populations, budget deficits and rising social costs for healthcare. How are they going to find the resources to reskill or upskill large numbers of people?” Reflecting on the possible political and social implications, he adds: “We have to do better this time.” Business leaders in the AI era will need to consider how their use of technology affects society. They will need to think of the human consequences and ethical dimensions of their AI-derived decision-making. And ultimately they will have to manage businesses by relying on what sets them apart from machines.
  12. Data readiness starts at the top
Anneloes Raes, Professor of Managing People in Organizations at IESE
Artificial intelligence promises to be a powerful driver of change in ways we cannot yet forecast. Even so, it’s worth considering how management should prepare their companies for some of the changes that we can already see. Data will be the most important commodity of the future. As such, a top-to-bottom company culture of readiness – in terms of data governance, architecture, traceability, accessibility and visualization – will be a prerequisite to survive and thrive in an AI world. In my research on top management teams, I have found that the ones that operate at peak performance are those that exhibit high cognitive flexibility – meaning they are open to new ideas, take a lot of different perspectives into account, and are able to change their opinions in light of incoming information. Cognitive flexibility increases the creativity with which information is interpreted and alternatives are generated. Top management teams may not need coding skills or deep technical knowledge of AI, but they do need a robust understanding of its possibilities in order to ask the right questions and consider all the angles, without prejudice or fear. Shaping an organizational culture that encourages this open, agile, multidisciplinary mindset is crucial. Usually, any cultural change should start by educating those at the top. Again, my research has shown that what goes on in the C-suite has an influential effect on the rest of the organization. So, the more the top management team is able to model the behavior it seeks, the more middle managers and other employees will start to align their own behavior and get excited about the possibilities of AI. In an attempt to be the change they seek, many leading companies are actively recruiting MBAs who are further along the AI learning curve than senior management, so their knowledge and enthusiasm can permeate the entire organization. AI will increasingly automate business processes, which implies fewer people will be involved in operations. At the same time, AI will increasingly free talent from the constraints of location. Management structures will become much more fluid. Managers from anywhere in the world can use AI to make better-informed decisions. Advances in automatic translation and virtual assistants will allow the brightest talent to manage more easily and effectively in international and multicultural settings, even if working remotely. AI will help geographically dispersed managers get the best from individual team members, wherever they are. Multidisciplinary teams in diverse locations will be linked by AI-derived best practices, tailored to their own needs. The more participative, integrative and cooperative the leadership is, the more motivational it is for employees. For all AI’s disruptive effects, it’s worth sticking to the foundations upon which successful organizational cultures are built.
  13. THE KEY WORDS
Artificial intelligence in recent years has found some of the practical business applications that had eluded it for decades. But while the successes are many and growing, most experts agree that AI is decades away from reaching human-level intelligence – if, indeed, that will ever be reached. Here, a primer of key AI concepts from Javier Zamora, senior lecturer in IESE’s Department of Information Systems.
01 GENERAL ARTIFICIAL INTELLIGENCE: The theoretical potential of artificial intelligence to simulate human-level cognition, taking complex decisions in varying contexts.
02 SINGULARITY: The possible moment when machines, powered by artificial intelligence, will be able to surpass all human intelligence.
03 NARROW ARTIFICIAL INTELLIGENCE: Artificial intelligence that is applied to a single, well-defined task such as language translation or facial recognition.
04 MACHINE LEARNING: When computers are able to detect patterns and make predictions and recommendations (prescriptive) based on systems that learn from experience (data).
  14. 05 NEURAL NETWORKS: A set of nodes connected in layers, in which each layer provides input to the next through forward propagation and back propagation.
06 DEEP (and SHALLOW) LEARNING: Deep learning involves neural networks with many layers, while shallow learning involves few layers.
07 BLACK BOX: The ability of computers to reach conclusions in ways that humans cannot necessarily explain or understand.
08 REINFORCEMENT LEARNING: When the machine is told only the goal of a given task, and through trial and error tries to maximize that goal.
09 TRANSFER LEARNING: A focus on storing knowledge gained while solving one problem, and applying it to a different but related problem.
10 SUPERVISED (and UNSUPERVISED) LEARNING: The two main types of machine learning algorithms. With supervised learning, data scientists must label data and guide outputs. With unsupervised learning, the computer identifies patterns on its own (see the short code sketch after the transcript).
11 ROBOTICS: A branch of technology dealing with devices – often robots – that can sense and react to their surroundings.
12 NATURAL LANGUAGE PROCESSING: The part of AI dedicated to making computers understand and process human language.
  15. “It is imperative for leaders to separate AI reality from hype.” Darío Gil, chief operating officer of IBM Research and vice president of AI and quantum computing at the technology giant. PhD in Electrical Engineering and Computer Science from MIT.
  16. Artificial intelligence (AI) is evolving faster by the day and its application impacts almost every industry. But it also affects the way corporations work. IBM’s Darío Gil knows firsthand how this technology is creating new challenges for leadership and management. Here, Gil talks about the present and future of AI as well as his vision of quantum computing.
What do you mean when you say that AI is the new IT?
As happened with IT, many companies are viewing the creation and adoption of AI as an essential part of their success. There are parallels in today’s efforts with the previous IT experiences: corporations are creating internal AI departments and aggressively recruiting AI experts, trying to determine how this technology might be used. Furthermore, the hype around AI is increasing every day. The term has become a buzzword which is replacing other tech terms such as IT, automation and data analysis; many times, in ways that aren’t really about AI.
There are usually two visions of AI: a utopia of machines improving our world and a dystopia where humans are superfluous. Which is the real one?
I would say that neither of these is the reality of AI. We have seen its rise being accompanied by both these perspectives: boundless enthusiasm for its ability to transform our lives, and fear that it could potentially harm and displace us. At IBM, we are guided by the use of artificial intelligence to augment human work and decision-making. Our aim is to build practical AI applications that assist people with well-defined tasks. There’s no question that the arrival of artificial intelligence will impact certain jobs, but we believe people working collaboratively with AI systems is the future of expertise.
Then how will AI change employment?
History suggests that, even in the face of technological transformation, employment continues to grow. It is the tasks that are automated and reorganized where the transformation occurs, and workers will need new skills for these transformed tasks. Some new jobs will emerge: “trainers” will be required to teach AI systems how to perform, and another category of “explainers” will be needed to help convey how these intelligent devices have arrived at a decision. In some areas, a shortage of qualified professionals may exist and the emergence of AI systems further highlights the need for individuals such as cybersecurity professionals.
Do we need to understand algorithms to be able to work with them?
I don’t believe that it is mandatory for every AI system to be understandable at a very precise level, but I expect us to reach a foundation of trust through our testing methodologies and experiences. In some cases, users of AI systems, such as doctors and clinicians, will need to justify recommendations produced by one of these systems.
How can business leaders adapt to the new AI landscape?
It is imperative for CEOs, senior leaders and managers to become knowledgeable about AI, to separate reality from hype and to start developing a strategy on how they might use AI systems within their business. The nature of leadership will likely change in many aspects of the “hard” elements, such as data-driven decision-making, and leaders will need to embrace “softer” aspects of leadership, such as helping their employees work together. They will also have to identify ways to enable the partnership between their workers and the AI systems.
  17. What organizational challenges will companies face?
One major challenge is determining whether their company will drive the AI strategy and capabilities internally, or whether they will work with external organizations to get support. Many considerations impact this decision, including the ability of the company to attract, develop and retain key AI talent. I also believe that it is critical for companies to have a senior leader focused on AI strategy and execution for their business. It could be achieved through the creation of a new role such as the chief AI officer, but it could also become part of the responsibilities of existing leaders, like the chief technology officer or the chief data officer.
How do you envision the impact of quantum technologies?
IBM is focused on advancing quantum information science and looking for practical applications for quantum computing. In September 2017, our team pioneered a new approach to simulate chemistry on our quantum computers. Now, we expect the first promising applications: the discovery of new algorithms that can run on quantum computers and untangle the complexity of molecular and chemical interactions, tackle complex optimization problems and address artificial intelligence challenges. These advances could open the door to new scientific achievements in areas such as material design, drug discovery, portfolio optimization, scenario analysis, logistics and machine learning.
What other steps have to be taken in the coming years regarding AI and quantum technologies?
Business leaders and managers must take the first steps in being informed about AI, quantum computing and the potential impacts and benefits to their companies. We have to move from a narrow to a broader form of artificial intelligence that enables the application of AI systems from one task to another. Leaders must begin to define a strategy for the development and use of it for their specific business, and then their style of leadership will evolve to embrace the value that this brings in decision-making. For quantum computing, we encourage organizational leaders to become “quantum ready”: to get educated on quantum computing, identify applicable use cases and explore the development of algorithms that may lead to demonstrations of quantum advantage.
Creating the quantum computers of tomorrow: an IBM cryostat wired for a 50 qubit system.
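A note on the transfer learning mentioned in slide 6: the following minimal Python sketch is illustrative only and is not part of the original report. It assumes TensorFlow/Keras with the bundled ImageNet weights for MobileNetV2; the input size, output layer and training data are placeholders for whatever task a company actually faces.

    import tensorflow as tf

    # Pretrained feature extractor: knowledge learned on ImageNet is reused,
    # so the new task needs far fewer labeled examples of its own.
    base = tf.keras.applications.MobileNetV2(
        input_shape=(160, 160, 3), include_top=False, weights="imagenet")
    base.trainable = False  # freeze the transferred layers

    # Small task-specific head, trained on the scarce labeled data.
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # e.g. defect / no defect
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    # model.fit(small_labeled_dataset, epochs=5)  # hypothetical, task-specific data

Freezing the pretrained layers and training only the small head is one common way of "learning more from less data"; with more labels available, some frozen layers can later be unfrozen and fine-tuned.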
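Similarly, to illustrate the supervised versus unsupervised distinction from the key-words glossary (slides 13-14), here is a minimal sketch, again not from the report, assuming Python with scikit-learn and its bundled iris data set.

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans

    X, y = load_iris(return_X_y=True)  # X: measurements, y: human-provided labels

    # Supervised learning: the labels y guide the model toward known categories.
    classifier = LogisticRegression(max_iter=1000).fit(X, y)
    print("Predicted classes:", classifier.predict(X[:5]))

    # Unsupervised learning: same data, no labels; the algorithm finds
    # groups (clusters) on its own.
    clustering = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
    print("Cluster assignments:", clustering.labels_[:5])

The cluster numbers found without labels will not necessarily match the original class labels, which is the practical point of the glossary entry: without labels, the machine finds structure but not meaning.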
