What Algorithmic Fairness Can Learn From City Planners
A new paper argues AI fairness is a wicked problem. Learn why data science should borrow from urban planning to address power, conflict, and governance.
A new paper out of Cornell makes an argument that should land awkwardly for anyone running a responsible-AI programme in 2026: the problems you are wrestling with have already been wrestled with, for fifty years, by people who design cities.
The authors, working at the intersection of computer science and policy, describe algorithmic fairness as a "wicked problem". That is technical language, not rhetoric. The term comes from planning theory and describes issues that are tangled, value-laden, and impossible to solve cleanly, because the people affected disagree on what "solved" even means. Their proposal is that data science should stop trying to define fairness in equations alone and start borrowing from urban planning's tradition of critical pragmatism: a reflective, deliberative approach that treats power and conflict as the starting point rather than as an inconvenience.
If your responsible-AI work is still measured by bias scores on a test set, this should sit uncomfortably. A bias score tells you whether your model behaves consistently across groups on a benchmark dataset. It tells you nothing about who was in the room when the model was specified, who can challenge an output that affects their mortgage or their child's school, or what happens when conditions shift in production and the original assumptions stop holding.
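To make that limitation concrete, here is a minimal sketch of what a typical group bias score actually computes. Demographic parity difference is one common choice; the column names and the toy data are illustrative assumptions, not anything from the paper.

```python
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Difference between the highest and lowest positive-outcome rate across groups.

    This is one common "bias score": it checks consistency of outcomes on a
    fixed dataset. It says nothing about who specified the outcome, who can
    contest it, or whether the data still reflects production conditions.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical snapshot of approval decisions.
snapshot = pd.DataFrame({
    "group": ["A", "A", "B", "B", "B"],
    "approved": [1, 0, 0, 0, 1],
})
gap = demographic_parity_gap(snapshot, "group", "approved")
print(f"Demographic parity gap: {gap:.2f}")  # 0.50 vs 0.33 -> ~0.17
```

Everything the score measures lives inside that one dataframe. Everything the paper is arguing about lives outside it.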
The paper applies its framework to three case studies that travel well into a corporate context: automated mortgage lending, school choice, and feminicide counterdata collection. The first two are obvious analogues for any HR, lending, claims, or eligibility decision system. The third is more interesting. Counterdata is data collected by communities to document harm that official systems are missing. It is a useful reminder that the absence of a signal is itself a design choice, and that the people most affected by a system are often the last to be consulted about how it works.
This is where the urban planning analogy stops being clever and starts being useful. City planners learned, painfully, that you cannot deliver good outcomes by optimising for a single metric. Traffic flow improves at the cost of neighbourhood cohesion. Housing density rises at the cost of green space. The job is to surface the trade-off, name the parties who will live with it, and design a process that holds power accountable when the model meets reality.
For a Chief Data Officer or Head of Learning, the practical questions shift. Less "is our model fair on the test set", and more: who specified the problem, and whose definition of success is encoded in the loss function? Who can contest a decision, and how quickly does that challenge reach someone with the authority to change the model rather than the customer service script? When the system drifts, who notices first, and is that person on your payroll or theirs?
These are governance questions wearing a technical disguise. Most organisations I work with have not yet built the muscle for them, because the previous generation of compliance work was about static rules applied to static processes. AI changes the shape of the problem. The model keeps learning. The world keeps shifting. The people affected keep changing. A 2022 governance playbook, written for a one-off audit, will not survive contact with that. If you want a sense of where your team actually sits on the shift, the change curve assessment is a reasonable place to start the conversation.
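On the drift question specifically, here is one way to make "who notices first" concrete: a population stability index check comparing live scores against the distribution signed off at deployment. This is a standard monitoring technique, not the paper's method, and the threshold, variable names, and synthetic data are assumptions for illustration.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a reference distribution and live data.

    A common rule of thumb treats PSI > 0.25 as meaningful drift. The
    threshold, like the metric, is a governance choice: someone has to
    own it and have the authority to act when it trips.
    """
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip empty bins to avoid log(0).
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
training_scores = rng.normal(0.0, 1.0, 5_000)  # distribution at model sign-off
live_scores = rng.normal(0.4, 1.2, 5_000)      # the world has shifted
psi = population_stability_index(training_scores, live_scores)
print(f"PSI: {psi:.3f}", "-- escalate" if psi > 0.25 else "-- within tolerance")
```

The check is trivial to write. The hard part is the sentence after the alert fires: who receives it, and what are they empowered to change.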
The deeper point in the paper is one I keep returning to in client work. Fairness is not a property of a model. It is a property of the system the model sits inside, including the humans who specified it, the humans who deploy it, the humans who can override it, and the humans who have to live with what it produces. Optimising the model alone is like fixing the traffic lights and calling it urban renewal.
One thing to try this week: pull the most recent decision your largest AI system made about a real person, and trace who could have stopped it, who could have challenged it, and who would have noticed if it was wrong. If the answer to all three is the same name, you have a single point of failure dressed up as a process.
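If it helps to run that exercise systematically, the trace only needs three fields per decision. The schema below is hypothetical, a minimal shape for the audit rather than any established standard; the role names and decision ID are made up.

```python
from dataclasses import dataclass

@dataclass
class DecisionTrace:
    """Accountability trace for one automated decision (hypothetical schema)."""
    decision_id: str
    could_stop: str       # who had authority to halt the decision before release
    could_challenge: str  # who the affected person could appeal to
    would_notice: str     # who monitors outcomes and would spot an error

    def single_point_of_failure(self) -> bool:
        """True when every accountability role resolves to the same person."""
        return len({self.could_stop, self.could_challenge, self.would_notice}) == 1

trace = DecisionTrace("loan-2026-0142", "j.smith", "j.smith", "j.smith")
if trace.single_point_of_failure():
    print("Single point of failure dressed up as a process.")
```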

