Decision Design: Why High-Level Decisions Fail Without Structured Judgment


By Ryoji Morii


I used to think that better data would lead to better decisions.


In many cases, it does. But over time, I kept encountering situations where that assumption simply did not hold. The analysis was solid. The options were clear. And still, the decision either stalled or moved forward in a way that felt… off.


It took me a while to understand what was happening.


The issue was not the quality of thinking. It was the absence of a clear structure for judgment. No one had explicitly defined who was meant to decide, or what conditions would justify stepping in and changing course.


That gap tends to stay hidden when organizations are small. People talk, align informally, and decisions get made. As things scale, that implicit coordination starts to break down. Decisions stretch across teams, systems, and, increasingly, AI-driven processes.


At that point, something shifts. Decision-making is no longer just a cognitive activity. It becomes an architectural problem.


This is the starting point of what I call Decision Design.


The idea is relatively simple, but not always easy to apply. Before focusing on how a decision is made, define who should hold the authority to make it. 


Not in general terms, but in concrete, operational terms.


Where does judgment sit? When does it move? Under what conditions?


These questions often lead to uncomfortable realizations. Authority is frequently assumed rather than designed. It follows hierarchy in some cases, urgency in others, and occasionally just habit. That works until multiple actors—human or system—start interacting in ways that were never anticipated.


One way to make this visible is to look at what I refer to as a Decision Boundary.


This boundary is not a formal artifact in most organizations, even though it exists in practice. It is the point where judgment shifts. Sometimes quietly. Sometimes too late.


Between a system and a human reviewer. Between a delivery team and an executive sponsor. Between “this is fine” and “we need to stop.”


What matters is not only where that boundary is, but whether anyone has actually defined it.


When it has not been defined, familiar patterns appear. Decisions linger because accountability is unclear. Or they move too quickly, because someone assumed they were supposed to decide.


Risk behaves in a similar way.


Most organizations try to capture risk in lists. In reality, risk tends to show up at the edges—when assumptions stop holding, when ownership changes, when a process crosses into a different context. Looking at those moments is often more useful than reviewing a static register.


Execution adds another layer of complexity. Plans are usually precise at the beginning. Less attention is given to how decisions will be made when those plans start to drift. And they always drift.


Without some form of predefined triggers or decision rights, organizations end up oscillating between escalation and hesitation.
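One way to make such triggers concrete is to write them down as data rather than leave them implicit. Here is a minimal sketch in Python; the trigger names, thresholds, and decider roles are illustrative assumptions, not taken from any particular organization:

```python
# Hypothetical sketch: predefined escalation triggers for plan drift.
# Trigger names, thresholds, and deciders are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Trigger:
    name: str         # the metric being watched
    threshold: float  # tolerance before judgment must move
    decider: str      # who holds authority once the trigger fires

TRIGGERS = [
    Trigger("budget_overrun_pct", 10.0, "executive sponsor"),
    Trigger("schedule_slip_days", 14.0, "delivery lead"),
]

def who_decides(metrics: dict) -> list[str]:
    """Return the deciders whose triggers have fired, in declared order."""
    return [t.decider for t in TRIGGERS
            if metrics.get(t.name, 0.0) > t.threshold]

# A 12% budget overrun fires the first trigger; a 3-day slip fires nothing.
print(who_decides({"budget_overrun_pct": 12.0, "schedule_slip_days": 3.0}))
```

The point is not the code itself but the discipline it forces: each trigger names a condition and an owner in advance, so when plans drift the organization neither escalates everything nor hesitates on everything.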


None of this is entirely new. But it becomes more pronounced when AI systems are involved.


Once a system is making or influencing decisions, the question changes slightly. It is no longer only whether the outcome is correct. It is whether the system should have been making that decision in the first place. And if not, where that boundary should have been drawn.
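That boundary can itself be made explicit. A minimal sketch, assuming a hypothetical approval workflow where the system's authority is bounded by model confidence and transaction size (both bounds are invented for illustration):

```python
# Hypothetical sketch: a Decision Boundary between an automated system
# and a human reviewer. The rule and its limits are illustrative assumptions.

def within_system_authority(confidence: float, amount: float) -> bool:
    """The system may decide only inside its declared boundary;
    everything else crosses to a human reviewer."""
    return confidence >= 0.9 and amount <= 1_000

def route(confidence: float, amount: float) -> str:
    if within_system_authority(confidence, amount):
        return "system decides"
    return "escalate to human reviewer"

print(route(0.95, 500))    # inside the boundary
print(route(0.95, 5_000))  # crosses the boundary: amount too large
```

Writing the boundary down this way changes the review question from "was the outcome correct?" to "was this decision inside the system's declared authority?", which can be checked even before an outcome exists.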


I do not see Decision Design as a replacement for existing approaches like Choice Architecture. They operate at different layers. One focuses on how choices are structured. The other focuses on who holds the authority to choose.


The distinction sounds subtle. In practice, it is not.


Many decision failures are not failures of intelligence or intent. They are failures of structure.


And structure, unlike intuition, can be designed.

