Decision Intelligence: The Ethical Dilemma
Here's how to ensure that decision intelligence will be a force for good and not a source of harm now and in the future.
January 3, 2024
Imagine a world where machines can make better decisions than humans. A world where artificial intelligence (AI) can optimize every aspect of our lives, from business to health to education. It sounds like a utopian dream, right? Well, it might not be as far away as we think.
In the last months of 2023, we witnessed what CNN called a "tectonic" shift in the AI industry, marked by unprecedented events and controversies. The main catalyst was the internal conflict at OpenAI, one of the world's leading AI research organizations and the creator of the generative AI tool ChatGPT. The conflict revealed two competing visions of AI's future: one that aims to build an ethical AI, and one that seeks simply to unleash AI's full potential. The episode sparked a wider discussion about the ethical and political implications of intelligent technologies.
In this article, we will explore the advent of decision intelligence. We will also examine the ethical dilemma that arises as machines support decision-making and become more autonomous, and the resulting need for governance, risk, and compliance (GRC) in this emerging business domain.
The Advent of Decision Intelligence
The new capabilities introduced by data management and artificial intelligence have fueled growing interest in decision intelligence, a new multidisciplinary field that combines techniques from data science, artificial intelligence, machine learning, behavioral science, and management.
The main objective of decision intelligence is to improve the quality of decision-making by using AI to augment human judgment. The positive effects on business outcomes are evident: Decision intelligence can help companies solve complex problems, optimize processes, enhance performance, and achieve desired goals. For example, decision intelligence can improve warehouse management by using AI to predict demand, optimize inventory, and coordinate logistics more effectively, as the sketch below illustrates.
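To make the warehouse example concrete, here is a minimal, hypothetical Python sketch of the pattern: a naive moving-average demand forecast feeds a reorder-point rule, and the system returns a recommendation for a human planner to approve. All names (forecast_daily_demand, recommend_reorder, ReorderDecision) and thresholds are illustrative assumptions, not any vendor's implementation.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical illustration of a decision-intelligence pattern for
# warehouse replenishment: forecast demand, compute a reorder point,
# and return a recommendation for a human planner to approve.

@dataclass
class ReorderDecision:
    sku: str
    reorder: bool
    suggested_qty: int
    rationale: str

def forecast_daily_demand(sales_history: list[int], window: int = 7) -> float:
    """Naive moving-average forecast over the most recent `window` days."""
    recent = sales_history[-window:]
    return mean(recent) if recent else 0.0

def recommend_reorder(sku: str, on_hand: int, sales_history: list[int],
                      lead_time_days: int = 5, safety_stock: int = 20) -> ReorderDecision:
    demand = forecast_daily_demand(sales_history)
    reorder_point = demand * lead_time_days + safety_stock
    if on_hand <= reorder_point:
        qty = int(demand * lead_time_days * 2)  # cover two lead times
        return ReorderDecision(sku, True, qty,
                               f"on_hand={on_hand} <= reorder_point={reorder_point:.0f}")
    return ReorderDecision(sku, False, 0,
                           f"on_hand={on_hand} > reorder_point={reorder_point:.0f}")

# The system recommends; a human planner still approves the purchase order.
decision = recommend_reorder("SKU-1042", on_hand=80,
                             sales_history=[12, 15, 11, 14, 13, 16, 12])
print(decision)
```

Note the design choice: the AI produces a recommendation with a rationale, but a human remains in the loop, which is the "augment human judgment" half of the decision intelligence promise.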
Decision intelligence can also improve talent management by using AI to assess employees' attitudes and values, match people's talents to suitable roles, and maximize employee engagement.
Every business application can be enhanced with AI and become an "intelligent" application. Decision intelligence can create a significant competitive advantage for early adopters, who will benefit in the long run over companies that embrace AI later.
The Ethical Dilemma
However, decision intelligence does not come without challenges and risks. Having technological "super brains" support and influence decision-making, to the point where machines increasingly decide on their own, introduces an ethical dilemma that science fiction has explored at length.
Even in real life, we all remember the discussions around self-driving cars: What should a self-driving car do when it can't avoid a crash? For example, should it try to save a person walking on the road, even if it hurts the people inside the car? Or should it keep the people inside the car safe, even if it hits a group of kids? These scenarios raised questions about the moral and legal implications of having machines that can act autonomously, and the potential impact on human safety, autonomy, and accountability.
The same questions emerge in business scenarios. The debate has intensified recently following the so-called OpenAI fiasco: In less than 60 hours, OpenAI's CEO, Sam Altman, was fired and then rehired, mainly because of a divergence of opinions on the company's board.
On one side are the Effective Altruists, who want to define an ethical AI; on the other are the Idealists, who believe it is inevitable that AI's full potential will be unleashed. Sources reported that the cause of the conflict at OpenAI was a new project called Q*, an AI model that could be a precursor to artificial general intelligence (AGI). Unquestionably, the more we allow machines to make decisions by themselves, the more aware of the risks we need to be: Science fiction movies could soon turn into reality.
Need for Governance, Risk & Compliance
Whatever happens in the coming months and years will shape the future of humanity and lay the foundation of a new era. We will see more and more governments and politicians trying to regulate AI usage. However, we will most likely need shared international legislation to prevent risks to humankind.
With the rise of decision intelligence, business applications and platforms need embedded tools capable of assessing AI risks, applying compliance rules, and automatically governing the feasibility of decisions made by applications and systems, a pattern sketched below. New GRC tools need to be aligned with company values and with new legislation. Companies must prepare for the future and partner with technology providers that have a clear strategy for AI governance in accordance with new laws and regulations.
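As a rough illustration of what such an embedded GRC "policy gate" might look like, here is a hypothetical Python sketch: every AI-recommended action is checked against compliance rules before execution, and risky or high-impact decisions are blocked or escalated to a human. The rule names and thresholds are invented for illustration and are not drawn from any real product or regulation.

```python
from dataclasses import dataclass, field

# Hypothetical GRC "policy gate": AI-recommended actions are checked
# against compliance rules before execution, and the outcome is
# recorded so every automated decision stays auditable.

@dataclass
class ProposedAction:
    description: str
    amount_eur: float
    risk_score: float          # 0.0 (low) to 1.0 (high), from an upstream model
    affects_personal_data: bool

@dataclass
class GateResult:
    allowed: bool
    needs_human_approval: bool
    reasons: list[str] = field(default_factory=list)

def policy_gate(action: ProposedAction) -> GateResult:
    result = GateResult(allowed=True, needs_human_approval=False)
    if action.risk_score > 0.8:
        result.allowed = False
        result.reasons.append("risk score exceeds hard limit (0.8)")
    if action.amount_eur > 50_000:
        result.needs_human_approval = True
        result.reasons.append("spend above 50k EUR requires human sign-off")
    if action.affects_personal_data:
        result.needs_human_approval = True
        result.reasons.append("personal data involved: privacy review required")
    return result

# Example: an AI-proposed contract approval above the spend threshold
# is not blocked, but it is escalated for human sign-off.
action = ProposedAction("Auto-approve supplier contract", 72_000, 0.4, False)
print(policy_gate(action))
```

The point of the pattern is that compliance checks run automatically on every machine-made decision, while the rules themselves remain under human governance and can be aligned with company values and applicable law.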
With 2024 now here, it is evident that the intersection of technology and ethics will define the narrative of our near future. Leaders and decision-makers across IT, politics, and the humanities, including CIOs, CEOs, CTOs, and futurists, share the responsibility of shaping a future where AI augments human potential without compromising our ethical foundations. Technology providers, in turn, need to adapt to the changing AI landscape by offering solutions that are transparent, compliant, and reliable. Only by doing so can we ensure that decision intelligence will be a force for good and not a source of harm for ourselves and the next generations.
Gessica Chies is Solution Consulting Manager at Infor.