Quantum computing has shown promise for many domains because it exhibits properties, such as entanglement and superposition, that have no classical counterpart. One area expected to profit greatly from this is machine learning. At the SFI CRT in ML, we have partnered with Equal1 labs, a quantum computing start-up, to explore how to leverage the potential of quantum computing for machine learning. This research is led by Dr Simon Caton and PhD student Patrick Selig.
When thinking about machine learning on a quantum computer, many of the basics stay the same. We still have training data, we still train a model, there are parameters to optimise, and we can still derive a loss function (or similar) and evaluate it on a test set. Beyond this point, however, things quickly diverge. The instruction set for quantum programs is fundamentally different, as is the manner in which we represent a quantum program. There are challenges in transforming data between classical and quantum representations for model input and output. Unlike classical machine learning, there is a general lack of understanding and guidance concerning the “right” way(s) to design and train quantum machine learning models. And if that weren’t enough, modern-day quantum computers are still quite error prone. Our work has therefore sought to derive guiding principles for how to design and train machine learning models on noisy intermediate-scale quantum (NISQ) computers.
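To make the input-transformation challenge concrete, here is a minimal sketch of one common approach, “angle encoding”, where each continuous feature becomes the angle of a single-qubit rotation. The post does not specify any tooling, so the use of the open-source PennyLane library, the simulator device name, and the toy feature values are all illustrative assumptions.

```python
# A minimal sketch (not our actual pipeline) of encoding a classical
# feature vector into a quantum state via per-qubit rotation angles.
import pennylane as qml
import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)  # local simulator

@qml.qnode(dev)
def encode(features):
    # Each classical feature becomes the angle of an RY rotation on one qubit.
    for i in range(n_qubits):
        qml.RY(features[i], wires=i)
    # Measurement maps the quantum state back to classical numbers:
    # the Pauli-Z expectation value of each qubit.
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

x = np.array([0.1, 0.5, -0.3, 0.9])  # a toy continuous feature vector
print(encode(x))
```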
In our work, we have explored classification problems described using only continuous feature vectors. Categorical and discrete data are still hard to represent well in a quantum program, and a numerical response (e.g. for regression) adds further complexity in transforming model output from a quantum to a classical representation. We build quantum machine learning models using variational principles, aptly referred to as variational quantum machine learning, as there is some evidence that models of this type can withstand certain amounts of error in (quantum) calculations. In this setting, the model parameters are stored and optimised classically, while the computation of a prediction from a given input is performed on the quantum computer. The resulting machine learning model is a parameterised quantum program expressed as a quantum circuit over a number of quantum bits (or qubits). In training, we seek to optimise these parameters.
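The following sketch shows the general shape of this hybrid loop, again assuming PennyLane: a parameterised circuit computes predictions, and a classical optimiser updates the parameters. The circuit layout, toy dataset, loss, and optimiser settings are illustrative assumptions, not the circuits from our study.

```python
# A minimal variational quantum classifier: quantum forward pass,
# classical parameter updates.
import pennylane as qml
from pennylane import numpy as np  # autograd-aware NumPy wrapper

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(params, x):
    # Encode the input features as rotation angles.
    qml.RY(x[0], wires=0)
    qml.RY(x[1], wires=1)
    # A trainable layer with an entangling gate.
    qml.RY(params[0], wires=0)
    qml.RY(params[1], wires=1)
    qml.CNOT(wires=[0, 1])
    # The sign of <Z> on qubit 0 serves as the class prediction.
    return qml.expval(qml.PauliZ(0))

def loss(params, X, y):
    # Mean squared error against labels in {-1, +1}.
    errors = [(circuit(params, x) - t) ** 2 for x, t in zip(X, y)]
    return sum(errors) / len(errors)

# Toy data and a classical gradient-descent training loop.
X = np.array([[0.1, 0.9], [0.9, 0.1]])
y = np.array([1.0, -1.0])
params = np.array([0.01, 0.02], requires_grad=True)
opt = qml.GradientDescentOptimizer(stepsize=0.3)
for _ in range(50):
    params = opt.step(lambda p: loss(p, X, y), params)
print(loss(params, X, y))
```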
To design a variational quantum machine learning program for classification, there are two key considerations: how many qubits to use (program width) and how “long” (or deep) the program is, i.e. how many quantum operations (or gates) occur along the program’s critical path. Both considerations affect the expressivity and complexity of the function the program represents. While this may appear on the surface to be a simple two-dimensional optimisation problem, the design space grows very quickly, especially once quantum entanglement operations are leveraged. To date, we have evaluated over 6,500 unique quantum circuits (each representing a candidate machine learning model) using 7 “easy” machine learning datasets. We find that, in general, shallow (low-depth), wide (more qubits) circuit topologies tend to outperform deeper ones. We have also explored the implications and effects of different notions of noise (causes of computation error) and evaluated circuit topologies that are more or less robust to noise for classification tasks. Based on our findings, we have produced a set of guidelines for designing circuit topologies that show near-term promise for realising quantum machine learning algorithms on gate-based NISQ quantum computers. The sketch below illustrates how width and depth parameterise this design space.
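Here, width is the number of qubits and depth is the number of trainable layers stacked along the critical path; each added layer also adds entangling gates, which is why the number of candidate topologies grows so quickly. The ring-of-CNOTs layer structure below is one illustrative assumption among many possible entanglement patterns, not the specific topologies we evaluated.

```python
# A hedged sketch of enumerating a small slice of the (width, depth)
# design space for a layered variational circuit, using PennyLane.
import pennylane as qml
from pennylane import numpy as np

def make_circuit(n_qubits, n_layers):
    dev = qml.device("default.qubit", wires=n_qubits)

    @qml.qnode(dev)
    def circuit(params, x):
        # Width: one feature rotation per qubit (features reused if
        # there are more qubits than features).
        for i in range(n_qubits):
            qml.RY(x[i % len(x)], wires=i)
        # Depth: each layer adds trainable rotations plus a ring of
        # CNOTs, so entanglement choices multiply the candidate designs.
        for layer in range(n_layers):
            for i in range(n_qubits):
                qml.RY(params[layer, i], wires=i)
            for i in range(n_qubits):
                qml.CNOT(wires=[i, (i + 1) % n_qubits])
        return qml.expval(qml.PauliZ(0))

    return circuit

# Evaluate a few (width, depth) combinations on a toy input.
for n_qubits in (2, 4):
    for n_layers in (1, 3):
        circuit = make_circuit(n_qubits, n_layers)
        params = np.zeros((n_layers, n_qubits))
        x = np.array([0.3, 0.7])
        print(n_qubits, n_layers, circuit(params, x))
```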