New AI Glossary aims to create a common terminology for researchers and practitioners
A new glossary of AI terms and definitions has been launched by the London AI and Humanity Project, an interdisciplinary collaboration based at the Institute of Philosophy.
The glossary, which can now be viewed online, is in its initial stages, and is expected to grow and develop throughout the year with greater engagement and input from the academic, research and AI community.
AI has raised, and continues to raise, complicated ethical and policy questions. The glossary aims to create a common terminology for artificial intelligence that can serve as a reference point for researchers and practitioners, lawyers, policymakers, industry, philosophers, the media and the public.
The London AI and Humanity Project, which is based at the Institute of Philosophy, brings together philosophers, neuroscientists, psychologists, researchers and industry leaders to explore the relationship of AI to humanity, and what it means to be human.
In partnership with the AI&Humanity Lab in Hong Kong, the Project has convened events and conferences, including ‘Being Human in the Age of AI’ last summer, which featured a keynote address from Matt Clifford, now adviser to the UK government on its AI strategy.
Speaking about the origins of the glossary, Alex Grzankowski, Associate Director of the Institute of Philosophy, said:
“Thoughts of a glossary first started when the Institute of Philosophy held its first large public AI conference, called ‘ChatGPT and Other Creative Rivals.’
“The event was a huge success, featuring talks from Sir Nigel Shadbolt from Oxford, Geoffrey Hinton, who was recently awarded the Nobel Prize, and Gary Marcus from NYU.
“One of our speakers at the event, Jackie Kay from Google DeepMind, brought to our attention the importance of philosophical clarity for creating good benchmarks for AI.
“On the back of that, we created a benchmarking group with various philosophers and people from industry, and started having regular meetings to discuss benchmarking and philosophical clarity. This later led to the recognition that what we really need is some kind of guidebook or glossary to help facilitate clear conversation and to avoid talking past one another. That’s how the glossary got started.”
Speaking about the Institute’s ambitions for the glossary, Alex said:
“We want the glossary to continue to grow and to become a key resource for researchers, policymakers, journalists, and beyond when they need to appeal to terminology that can easily invite confusion, such as rationality, consciousness or intelligence.
“There is rightfully a lot of excitement around AI, especially in the last two years since GPT became publicly available. Alongside this excitement there has been an awful lot of hype, but also naysaying about the risks, threats and opportunities of these systems.
“A lot of those discussions and disputes revolve around very contested issues concerning whether a chatbot like GPT is intelligent or rational or conscious. Some people have worried that we are on our way to creating a superintelligence, or that we run the risk of creating future agents that can suffer and that we might treat poorly.
“Without getting too far down the speculative rabbit hole, I think it’s crucial that discussions which bring in these contested philosophical notions are treated with care and patience. A shared vocabulary can go a long way towards advancing debate responsibly.”
Find out more about the glossary via the website.
This page was last updated on 15 January 2025