Artificial intelligence now shapes decisions about jobs, loans, healthcare, and education. When those systems carry bias, the consequences are real for the people affected. One of the biggest obstacles to fixing that problem is surprisingly basic: everyone uses different language to describe it. Researchers, policymakers, auditors, and educators often mean the same thing but talk past each other, which makes serious oversight harder than it needs to be.
A working paper I have just released proposes a practical solution. The paper introduces the Algorithmic Bias Ontology (ABO), a structured vocabulary built to give the field a shared language for identifying and discussing bias in AI systems. The first version includes 60 defined concepts organized into nine categories that follow the full lifecycle of an AI system, from data collection through model design, deployment, and governance. The goal is straightforward: when people talk about bias, they should be talking about the same thing.
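To make the structure concrete, here is a minimal sketch, not taken from the paper, of how an ontology entry of this kind could be represented in code. The four category names are the lifecycle stages mentioned above (the paper defines nine), and the example concept and its definition are illustrative placeholders rather than actual ABO entries.

```python
from dataclasses import dataclass, field

# Illustrative lifecycle categories; ABO defines nine, of which
# only these four stages are named in this post.
CATEGORIES = {"data collection", "model design", "deployment", "governance"}

@dataclass
class Concept:
    """One defined concept in a bias ontology (illustrative schema)."""
    term: str
    definition: str
    category: str  # one of the lifecycle categories above
    related: list[str] = field(default_factory=list)  # related concept terms

    def __post_init__(self):
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown category: {self.category!r}")

# Hypothetical entry, not copied from ABO itself.
sampling_bias = Concept(
    term="sampling bias",
    definition="Systematic error introduced when training data "
               "under- or over-represents parts of the population.",
    category="data collection",
    related=["representation bias"],
)
print(sampling_bias.term, "->", sampling_bias.category)
```

The point of a schema like this is that every concept carries a definition, a lifecycle position, and explicit links to its neighbors, which is what lets different audiences point at the same thing.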
Natural language processing (NLP) makes it possible to go further than a static glossary. Rather than freezing the vocabulary in place, NLP methods can continuously scan new research for emerging terminology and propose additions to the ontology, letting the language around AI bias grow alongside the technology itself.
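As a toy illustration of that expansion loop, again not the paper's method, the sketch below scans a batch of abstracts for candidate bias-related terms and flags any that the ontology does not yet define. A real pipeline would use proper NLP tooling and human review before adding anything, but the shape of the process is the same.

```python
import re
from collections import Counter

# Terms already defined in the ontology (illustrative subset).
known_terms = {"sampling bias", "representation bias", "automation bias"}

def candidate_terms(text: str) -> list[str]:
    """Extract phrases of the form '<word> bias' as crude candidates."""
    return [m.lower() for m in re.findall(r"\b(\w+ bias)\b", text, re.IGNORECASE)]

def novel_candidates(abstracts: list[str], min_count: int = 2) -> list[str]:
    """Return frequent candidate terms that are not yet in the ontology."""
    counts = Counter(t for a in abstracts for t in candidate_terms(a))
    return [t for t, n in counts.items() if n >= min_count and t not in known_terms]

abstracts = [
    "We study measurement bias and sampling bias in clinical models.",
    "Measurement bias arises when proxies distort the target variable.",
]
print(novel_candidates(abstracts))  # -> ['measurement bias']
```

Each flagged term would then go to a human curator to be defined and placed in the right lifecycle category, keeping the vocabulary current without sacrificing rigor.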
This work has been in development for roughly eight years and is now publicly available as a preprint. If you are working in AI development, policy, education, or governance, I hope you will read it, cite it if it is useful, and help improve it over time. You can find it now on ResearchGate.