Ethical Decision Making in the Age of AI

It is critical that leaders consider not only what AI can do, but what it should do.

Paul Gordon

Founding Director

Paul Gordon is a highly accomplished business leader with over fifteen years’ experience leading decision-making transformation. He has facilitated key decision points for major programmes and strategic planning initiatives across a wide range of sectors, including extensive work with the Defence and Security sectors in Australia, New Zealand, and the UK.

We are all using AI: to interpret data, generate images, summarise text, and automate workflows. But should we use it to make decisions – and not just which restaurant to book for dinner? If so, how does AI handle the morally grey areas we normally navigate with ethics? And as agentic AI becomes more prevalent, the AI we use for decisions may also take action on our behalf – perhaps fine for booking the restaurant, but maybe not (yet) for hiring our next team member.

As of January 2025, 49% of Australians report using generative artificial intelligence (AI). Public sentiment is shifting accordingly, with 52% of the population believing that AI will benefit them personally. Yet, as the technology becomes more integrated into both personal and organisational decision-making processes, it is critical that leaders consider not only what AI can do, but what it should do.

The right tool for the job?

It’s clear that artificial intelligence offers unprecedented analytical capability and speed. It can retrieve and synthesise vast amounts of information, identify patterns, generate options, and simulate consequences. In many contexts, these abilities can enhance the quality and efficiency of decision-making. However, the question is not merely whether AI can be used in decisions, but whether its use is ‘requisite’, sufficient, and appropriate for the decision being made.

Overuse of AI, particularly for simple or low-impact decisions, can carry hidden costs and impacts, such as environmental burden and resource consumption. Ethically, we need to weigh these trade-offs, such as energy use, when determining whether AI is requisite for a given decision.

If the outcome of a relatively straightforward decision could be arrived at with minimal effort through human judgement, using a resource-heavy AI model may be unjustified overkill. A side effect of overuse for this kind of decision is that we humans become less adept at our own decision-making and, critically, less able to validate the more significant decisions that AI supports.
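To make this proportionality test concrete, here is a minimal, purely illustrative Python sketch. The `DecisionContext` fields, the 1-to-5 scoring scales, and the thresholds are all our own assumptions for the sake of example, not an established framework; the point is simply that “is AI requisite?” can be asked explicitly before reaching for a model.

```python
from dataclasses import dataclass

@dataclass
class DecisionContext:
    """Hypothetical attributes of a decision, each scored 1 (low) to 5 (high)."""
    impact: int          # consequence of getting the decision wrong
    data_volume: int     # how much information must be synthesised
    ethical_stakes: int  # rights, duties, effects on vulnerable groups

def ai_is_requisite(d: DecisionContext) -> str:
    """Illustrative triage rules; the thresholds are assumptions, not policy."""
    if d.ethical_stakes >= 4:
        # High moral stakes: AI may inform, but humans must own the reasoning.
        return "human-led (AI as decision support only)"
    if d.impact <= 2 and d.data_volume <= 2:
        # Simple, low-impact decision: a resource-heavy model is overkill.
        return "human judgement (AI not requisite)"
    return "AI-assisted (with human validation)"

# Choosing a restaurant vs. synthesising a large supplier tender:
print(ai_is_requisite(DecisionContext(impact=1, data_volume=1, ethical_stakes=1)))
print(ai_is_requisite(DecisionContext(impact=4, data_volume=5, ethical_stakes=2)))
```

The specific rules matter far less than the habit: making the cost-benefit question explicit, rather than defaulting to the most powerful tool available.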

Who owns the decision if a robot made it?

Another, perhaps more insidious, ethical concern arising from the use of AI in decision-making is the potential erosion of ownership.

We know that having stakeholders actively participate in organisational decision-making significantly improves the robustness of decisions. When stakeholders co-create the decision process, frame the problem, evaluate alternatives, and reflect on consequences, they develop a sense of commitment to the outcome. This ownership strengthens not only the perceived legitimacy of the decision but also its implementation, durability, and resilience – its ‘stickiness’.

The use of AI can impact this outcome. If a decision is thought to have been made or unduly influenced by a ‘faceless’ algorithm, stakeholders can feel that their agency has been diminished. Even when the decision outcome is sound, the perceived shift in authorship, from human to machine, can undermine trust and reduce long-term adherence.

Moral reasoning, but not moral agency

In decisions involving ethical trade-offs, there’s another layer of complexity. Ethical decision-making involves deliberation about values, rights, duties, and the implications of actions on individuals and society.

Humans use ethical frameworks to provide robust ways of analysing moral dilemmas and trade-offs. AI systems, regardless of how advanced, cannot authentically engage with these frameworks. They can simulate moral reasoning by incorporating ethical principles into their prompts or training data, but they do not possess moral agency. A language model might construct a coherent utilitarian argument or simulate reasoning, but it does so by identifying statistically plausible text patterns, not through genuine deliberation, intention, or moral understanding. The appearance of ethical reasoning must not be mistaken for its substance.
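To illustrate the distinction, here is a minimal sketch of what “simulating moral reasoning” typically amounts to in practice. Everything here is an assumption for illustration: the ethical “framework” is just text prepended to the input before it is sent to whatever model you happen to use.

```python
# Illustrative only: the ethical framework travels as plain text, nothing more.
UTILITARIAN_FRAME = (
    "Assess the options below strictly by their expected consequences "
    "for overall wellbeing, and state the trade-offs explicitly."
)

def build_prompt(dilemma: str) -> str:
    # The model that eventually receives this will produce statistically
    # plausible utilitarian-sounding prose. It holds no values, forms no
    # intentions, and bears no responsibility for the outcome.
    return f"{UTILITARIAN_FRAME}\n\nDilemma:\n{dilemma}"

print(build_prompt("Automate a process that will displace twelve roles?"))
```

Whatever the model returns, the weighing of that answer, and accountability for acting on it, remains with the human who asked.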

This risk can have a more profound consequence: if people believe a decision has been made with true ethical reasoning (regardless of whether it has), they may place false confidence in the decision outcome, which may reinforce unethical beliefs and behaviours. This challenge applies across all our uses of AI in decision-making; just because an AI has given us an answer does not mean it is sound.

Furthermore, decisions grounded in ethical principles almost always depend not only on abstract reasoning but on context, empathy, and lived experience. For example, determining whether to implement a policy that disproportionately affects a vulnerable population requires sensitivity to historical inequities, cultural nuance, and human emotion. These are not values that can be effectively codified or quantified into a model.

Where do we go from here?

It would be rather naive (let alone impractical) to suggest that AI has no place in ethical decision-making. On the contrary, when used appropriately, it can serve as a powerful decision support tool: surfacing previously unseen risks, modelling potential consequences, or even identifying cognitive biases.

The challenge for us lies in deploying AI in ways that are proportionate to the task, aligned with ethical standards, and transparent in their influence. To that end, organisations and individuals must remain vigilant.

Questions must be asked not only about what outcomes are desirable, but also about the processes by which those outcomes are reached.

·      Who is responsible for the decision?

·      What values are at stake?

·      Is the method of reasoning clear and defensible?

·      Is the use of AI adding genuine value, or is it merely a convenience with unintended ethical and environmental impacts?

These are increasingly practical concerns, with real-world implications for public trust, organisational legitimacy, and social cohesion, as well as our own experience of self-worth. The integration of AI into decision-making should be deliberate, context-sensitive, and ethically informed, because ultimately: machines don’t make decisions – humans do.

So what can we do?

One of the most helpful things we can do is improve our own understanding of, and engagement with, deliberate, ethical decision-making. If we are to make the most of the opportunity AI presents to support our decision-making, we need to be confident in how decisions are made: the processes used, the factors at play, and the trade-offs involved. How?

·      Undertake a review of current organisational decision-making practices, processes, and governance.

·      Build capability across your organisation to understand decision-making processes, frameworks, and principles.

Then bring your own lens to the inputs AI gives you. Mastering our own decision-making will help us master AI, rather than living in fear of becoming its slave.

If this article resonates with your challenges, let us help you sort them out!
Start the Conversation!

Problem Spaces:

Transformation
Performance
People

Potential Solutions:

Capability Development
Business Strategy
Cultural Transformation
Executive Decision Making