Date of Award
4-30-2024
Document Type
Dissertation
Degree Name
Doctor of Philosophy in Machine Learning
Department
Machine Learning
First Advisor
Dr. Bin Gu
Second Advisor
Dr. Karthik Nandakumar
Abstract
"This thesis presents an exploration of online pairwise learning, extending its traditional boundaries through novel algorithms and the application of Spiking Neural Networks (SNNs) to deep metric learning. The initial chapters introduce and refine online gradient descent (OGD) algorithms tailored for pairwise learning in non-i.i.d. environments, demonstrating their scalability, mitigating the quadratic growth in per-round computational complexity, and extending their applicability to kernelized models. The proposed OGD methods, which achieve sub-linear regret at reduced complexity, outperform existing kernel and linear models across various datasets. Building on these foundations, the exploration continues into deep metric learning with SNNs, with attention to the robustness of rate encoding in SNNs, a critical aspect that directly strengthens pairwise learning strategies. By bridging the conceptual gap between SNN robustness and pairwise learning, this work illuminates the interplay between encoding stability, metric learning effectiveness, and the overall robustness of learning systems. In a complementary direction, we propose a novel approach that exploits the temporal dynamics of binary encoding to achieve high-dimensional data representation with low latency and superior accuracy, illustrating the potential of the temporal dimension within the SNN framework for enhanced discriminative capability in the metric space. Overall, this thesis not only advances the state of the art in online pairwise learning and deep metric learning with SNNs, but also opens new avenues for research by highlighting the importance of robustness in rate encoding and its implications for pairwise learning methodologies.
Through a blend of theoretical innovation and empirical validation, we showcase the synergy between these seemingly disparate domains, paving the way for more resilient and accurate machine learning models in complex environments."
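The buffer-based online pairwise setup described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the thesis's actual algorithm: the function name `pairwise_ogd`, the reservoir-sampled buffer, the pairwise hinge-style loss, and the `eta / sqrt(t)` step size are all assumptions chosen for concreteness. The key idea it demonstrates is that pairing each new example with a small fixed-size buffer of past examples, rather than with all of them, avoids the quadratic growth in per-round cost.

```python
import numpy as np

rng = np.random.default_rng(0)

def pairwise_ogd(stream, dim, buffer_size=16, eta=0.1):
    """Minimal online gradient descent for linear pairwise learning.

    At step t, the new example (x_t, y_t) is paired only with a small
    reservoir-sampled buffer of past examples instead of all t-1 of
    them, keeping the per-round cost O(buffer_size) rather than O(t).
    """
    w = np.zeros(dim)
    buffer = []  # reservoir of past (x, y) examples
    for t, (x, y) in enumerate(stream, start=1):
        # Pairwise hinge loss over label-differing buffered pairs:
        #   max(0, 1 - (y - y') * w^T (x - x'))
        grad = np.zeros(dim)
        for xb, yb in buffer:
            if yb == y:
                continue  # only pairs with different labels contribute
            tau = y - yb          # +2 or -2 for labels in {-1, +1}
            diff = x - xb
            if 1 - tau * (w @ diff) > 0:  # hinge is active
                grad -= tau * diff
        w -= eta / np.sqrt(t) * grad  # decaying step size
        # Reservoir sampling keeps the buffer an unbiased sample of the past
        if len(buffer) < buffer_size:
            buffer.append((x, y))
        elif rng.integers(t) < buffer_size:
            buffer[rng.integers(buffer_size)] = (x, y)
    return w
```

On a stream where the label is determined by the first coordinate, the learned weight vector concentrates on that coordinate, while each round touches at most `buffer_size` past examples.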
Recommended Citation
H. AlQuabe'h, "Advancements in Memory-Efficient and Variance-Optimized Pairwise Learning, Extended to Nonlinear Modeling and Spiking Neural Networks," Apr 2024.
Comments
Thesis submitted to the Deanship of Graduate and Postdoctoral Studies
In partial fulfilment of the requirements for the PhD degree in Machine Learning
Advisors: Dr. Bin Gu, Dr. Karthik Nandakumar
Online access available
Copyright by the author