
Research Interests:

Current and former PhD students:

Dr Panos Dafas, Dr Ren Lu, Dr Rafael Borges, Dr Chris Child, Dr Michael Fairbank, Dr Alan Perotti, Dr Son Tran, Dr Srikanth Cherla, Dr Kelsey Guo, Andreas Jansson, Hazrat Ali, Avelino Forechi, Andre Luzardo, Marios Prasinos, Rahman Al-Arif, Daniel Philps, Odunayo Fadahunsi, Asmaa Mahdi, Kaiomurz Motawara, Simon Odense, Calogero Lauricella, Radha Kopparti, Charitos Charitou.

Invited Talks and Seminars:

Argumentation Neural Networks. Invited talk at Mallow'09 Argumentation Day, Educatorio della Provvidenza, Turin, Italy, September 2009.
Neurons and Symbols: A Manifesto, Dagstuhl Seminar on Learning Paradigms in Dynamic Environments. Wadern, Germany, July 2010.
Robust Logic Learning in Neural Networks, Invited talk at the Lab for Adaptive Systems, University of Luxembourg, July 2010.
Neural-Symbolic Computation, ICCL Summer School on Cognitive Science, Logic and Connectionism, TU Dresden, Germany, August 2010.
Neural-Symbolic Systems for Cognitive Reasoning, Invited talk at the School of Computing, University of Kent, UK, June 2011.
Neural-Symbolic Computation, The Future of Machine Learning Panel, Imperial College London, UK, September 2011.
Neural-Symbolic Systems for Cognitive Reasoning, Invited talk at the Informatics Colloquium, King's College London, October 2011.
Neural-Symbolic Systems for Cognitive Reasoning, Invited talk at the School of Computing, University of Leeds, UK, October 2011.
Neural-Symbolic Systems for Verification and Adaptation, Invited talk at the Dept. of Computing, Imperial College London, UK, March 2012.
Fast Relational Learning using Neural-Symbolic Systems, Invited talk at the Dept. of Computing, Imperial College London, UK, November 2013.
Neural-Symbolic Computing, Deep Logic Networks and Applications. Schloss Dagstuhl Seminar 14381, Wadern, Germany, September 2014.
Neural-Symbolic Learning and Reasoning: Contributions and Challenges. AAAI Spring Symposium, Stanford University, Palo Alto, March 2015.
Relational Knowledge Extraction from Neural Networks. Talk at NIPS Cognitive Computing workshop, Montreal, Canada, December 2015.
Neural-Symbolic Systems for Verification, Run-time Monitoring and Learning. Dept. of Computer Science, University of Oxford, UK, March 2017.
Neural-Symbolic Systems for Human-Like Computing. Dagstuhl Seminar on Human-Like Computing, Schloss Dagstuhl, Wadern, Germany, May 2017.
Avoiding Deep Horses: Finding Structure in Deep Networks. Invited talk at Horse 2017, Queen Mary University of London, September 2017.
Neurosymbolic Computation: Thinking Beyond Deep Learning. Invited talk, Department of Computing, Imperial College London, November 2017.
On the Need for Knowledge Extraction from Deep Networks. Invited talk, Data Science Institute, Imperial College London, February 2018.
Thinking Beyond Deep Learning? Neurosymbolic Computing. Invited talk, Cognitive Computation Symposium, City, University of London, February 2018.

What is neural-symbolic computation?

Humans constantly learn and reason in order to make decisions, as part of a continual cycle of knowledge acquisition. In this process, learning and reasoning are almost indistinguishable. In Artificial Intelligence, however, they are typically studied separately, leading to computational systems that emphasise one aspect of cognition or the other, but not the interplay between them.

Our research brings the modelling of learning and reasoning together through neural-symbolic integration. Neural-symbolic systems can produce better models of knowledge acquisition, robust learning and reasoning under uncertainty. In the longer term, they are expected to offer a better understanding of these fundamental phenomena of cognition.

Neural-symbolic systems integrate logical reasoning and statistical learning by offering sound translation algorithms between network and logic models. They contain three main components: (1) knowledge representation and reasoning in neural networks, (2) knowledge evolution and network learning, and (3) knowledge extraction from trained networks.
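As a concrete illustration of component (1), the translation from logic to network can be sketched for a propositional logic program. The sketch below is a simplified, hypothetical rendering (it uses step activations for clarity, rather than the semi-linear units used in actual neural-symbolic translation algorithms such as CILP): each rule becomes a hidden unit, and one forward pass of the network computes the program's immediate-consequence operator.

```python
# Hypothetical sketch of a rule-to-network translation, not the exact
# published algorithm. Each rule (head, positives, negatives) becomes a
# hidden unit: weight +1 for positive body literals, -1 for negated ones,
# and a firing threshold equal to the number of positive literals, so the
# unit fires only when every positive literal is true and no negated one is.

def step(x, theta):
    """Step activation: fire iff the weighted input reaches the threshold."""
    return 1 if x >= theta else 0

def forward(rules, state):
    """One forward pass of the network = one application of T_P.

    rules: list of (head, positive_body, negative_body)
    state: set of atoms currently taken to be true
    """
    new_state = set()
    for head, pos, neg in rules:
        activation = (sum(1 for a in pos if a in state)
                      - sum(1 for a in neg if a in state))
        if step(activation, len(pos)):
            new_state.add(head)  # output units OR over rules sharing a head
    return new_state

# Program: a <- b;  b <- (fact);  c <- a, not d
rules = [("a", ["b"], []), ("b", [], []), ("c", ["a"], ["d"])]
state = set()
for _ in range(3):            # iterate the forward pass to a fixed point
    state = forward(rules, state)
print(sorted(state))          # prints ['a', 'b', 'c']
```

Iterating the forward pass until the state no longer changes computes the least fixed point of the program, which is what a sound translation between the logic model and the network model guarantees.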

In a neural-symbolic system, neural networks provide the machinery for efficient computation and robust learning, while logic provides high-level representations, reasoning and explanation capabilities for the network models, promoting modularity, facilitating validation and maintenance, and enabling better interaction with existing systems.

Neural-symbolic systems have important applications in diverse areas such as bioinformatics, fraud prevention, assessment and training in simulators, cognitive robotics, general game playing, image, audio and video classification, software verification and the semantic web.

In a nutshell, neural-symbolic systems seek to benefit from the knowledge representation and reasoning capacities of applied logic, and the learning capacities of neural networks, enabling effective learning from noisy data and online reasoning about what has been learned.