

  • Our paper Quaternion Anti-Transfer Learning for Speech Emotion Recognition has been accepted for WASPAA 2023.
  • Our paper TREPAN Reloaded: A Knowledge-Driven Approach to Explaining Black-Box Models has won a distinguished paper award at ECAI.
  • Invited talk on Music and Big Data Analysis at the Interactive Music Technologies Network, 23 January 2020.
  • Two workshop papers at NeurIPS 2019:
  • Our paper at the Bio Imaging workshop at CVPR 2019: Laibacher, Weyde & Jalali: M2U-Net: Effective and Efficient Retinal Vessel Segmentation for Resource-Constrained Environments. New state of the art for high-resolution optical blood vessel segmentation.
  • We won the OMG empathy detection challenge with our friends at Alpha AI. The results are here.
  • New paper on arXiv with Craig Macartney. New State of the Art in Speech Denoising.
  • Two workshop papers at NeurIPS 2018:
  • We received the Best Poster Award for our ISMIR paper on Singing Voice Separation with Deep U-Nets. New state of the art in singing voice extraction.
  • Short Research Bio

    I am a Senior Lecturer in the Department of Computer Science, head of the Machine Intelligence and Media Informatics Research Group and a member of the Machine Learning Group, and Senior Tutor for Research. I work on machine learning and signal processing methods for data analysis with applications in finance, audio, NLP, music, health, security and education. My latest research focuses on creating inductive biases in neural networks for rule-learning, extrapolation, generalisation, and interpretability.

    Before I joined City I was a researcher and coordinator of the MUSITECH project at the Research Department of Music and Media Technology at the University of Osnabrück. I hold degrees in Computer Science, Music, and Mathematics and obtained my PhD in Music Technology on the topic of combining knowledge and machine learning with neuro-fuzzy methods in the automatic analysis of rhythms.

    I am an associate member of the Institute of Cognitive Science and the Research Department of Music and Media Technology of the University of Osnabrück, as well as the Intelligent Systems Research Laboratory at the University of Reading. I am co-author of the educational software Computer Courses in Music Ear Training, published by Schott Music, which received the Comenius Medal for Exemplary Educational Media in 2000, and co-editor of the Osnabrück Series on Music and Computation. I was a consultant to the NEUMES project at Harvard University and am a member of the MPEG Ad-Hoc Group on Symbolic Music Representation (SMR), working on the integration of SMR into MPEG-4. I was the principal investigator at City in the music e-learning project i-Maestro, which was supported by the European Commission (FP6).

    I currently work on methods for automatic music analysis and transcription, audio-based similarity and recommendation, Semantic Web representations for music, and general applications of audio processing and machine learning in industry and science. I have received funding from the AHRC for the Digital Transformations project Digital Music Lab - Analysing Big Music Data (DML), a joint project with the British Library, Queen Mary University of London, University College London, and I Like Music. More recently we started the AHRC Amplification project An Integrated Audio-Symbolic Model of Music Similarity, in which we apply the results from the DML. I was also engaged as a co-investigator in a project funded by Innovate UK (formerly the Technology Strategy Board) and EPSRC on Advancing Consumer Protection Through Machine Learning: Reducing Harm in Gambling, and in the Innovate UK project Raven led by Tom Chen.


    Here is a link to my standard staff homepage.

    Key Publications

    Students: for meetings, please send me an e-mail.