Explainable multi-criteria optimisation algorithms for land-use change in Ireland
Combining data from agricultural, economic, meteorological, geological and demographic sources with satellite imagery to understand land-use change over time and to identify Pareto-optimal conditions for future land use that satisfy multiple criteria, including economic output, finance, resource utilisation, supply-and-demand fluctuation due to population and demographic change, and climate-change targets. A variety of methods will be explored in order to examine trade-offs between algorithm performance and interpretability. Reinforcement learning, evolutionary, classification tree (combined with Monte Carlo search methods) and Markov decision process based algorithms all present themselves as candidates, with varying degrees of success in other applications. The goal is to provide a set of Pareto-optimal trade-off solutions that would allow decision makers to compare different balances of conflicting objectives.
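As a minimal illustration of the Pareto-optimality criterion at the heart of this project, the sketch below filters a set of candidate solutions down to the non-dominated set; the objective scores and the two-objective setup are invented for illustration, not project data:

```python
def pareto_front(solutions):
    """Return the non-dominated subset of candidate solutions.

    Each solution is a tuple of objective scores; higher is better.
    A solution is dominated if some other solution is at least as good
    on every objective and strictly better on at least one.
    """
    front = []
    for s in solutions:
        dominated = any(
            all(o >= v for o, v in zip(other, s)) and
            any(o > v for o, v in zip(other, s))
            for other in solutions if other is not s
        )
        if not dominated:
            front.append(s)
    return front

# Hypothetical land-use candidates scored on (economic output, climate score)
candidates = [(3.0, 1.0), (2.0, 2.0), (1.0, 3.0), (1.5, 1.5), (2.5, 0.5)]
print(pareto_front(candidates))  # -> [(3.0, 1.0), (2.0, 2.0), (1.0, 3.0)]
```

The surviving solutions are exactly the trade-off set a decision maker would compare: each one can only improve on one objective by giving up another.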
Multi-modal user modelling to enhance productivity, memory and health
This project will explore how multi-modal context-aware user models can be used to enhance productivity by allowing computer systems to intelligently adapt, in real time, to a user's state and the context of their current mental state. Imagine sitting at your computer being bombarded with requests over instant messaging while trying to complete other tasks. On any other day you might manage this without problem, but today you are struggling because you are tired: you didn't sleep, and the three coffees you had earlier in the day are not having the desired effect. The systems and software you use daily cannot adapt appropriately to this because they are not aware that you woke too early and haven't eaten, and that in that moment you were feeling overwhelmed by the task load. Multi-modal context-aware user modelling would allow such systems to be aware of your current mental state in that moment by leveraging signals (neural, physiological, lifelog, etc.) captured up to that point in time. Furthermore, having such information would enable powerful personal information retrieval and summarization systems that would allow you to understand which areas of your work or day have been most impacted in terms of productivity, enabling you to change or identify behaviors (or how you schedule work tasks) in order to achieve optimal throughput.
By using signals produced by the body, such as EEG (electroencephalography), EOG, movement, heart rate, GSR and breathing, in concert with signals captured from our environment, such as those from a lifelog camera or computer interaction, this research aims to explore the types of multi-modal context-aware user models that can be built with machine learning and used to enhance productivity by allowing computer systems to intelligently adapt, in real time, to a user's state and the context of their current mental state.
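As a rough sketch of how such heterogeneous signals might be aligned into one feature space for a user model, the code below windows two synthetic streams sampled at different rates and concatenates per-window statistics; the sampling rates, window length and random data are assumptions for illustration, not project specifics:

```python
import numpy as np

def window_features(signal, fs, win_s=5.0):
    """Mean and standard deviation per non-overlapping window of win_s seconds."""
    n = int(fs * win_s)
    windows = signal[: len(signal) // n * n].reshape(-1, n)
    return np.column_stack([windows.mean(axis=1), windows.std(axis=1)])

def fuse(*modalities):
    """Late fusion by feature concatenation; modalities must share a window count."""
    return np.hstack(modalities)

# Hypothetical streams over one minute: heart rate at 1 Hz, GSR at 4 Hz
rng = np.random.default_rng(0)
hr = rng.normal(70, 5, 60)
gsr = rng.normal(0.4, 0.05, 240)

X = fuse(window_features(hr, fs=1), window_features(gsr, fs=4))
print(X.shape)  # -> (12, 4): 12 five-second windows, 2 features per modality
```

A classifier trained on rows of `X` (labelled, say, "focused" vs "overwhelmed") would be one simple instance of the context-aware user model described above.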
Machine Learning Solutions for Quality-Energy Balancing for Rich Media Content Delivery in Heterogeneous Network Environments
Machine Learning (ML) enables scientists to design self-learning solutions to important problems that are too complex to solve with classic methods. As the demand for mobile traffic increases day by day, 5G networking is intended to govern the infrastructure of the telecommunication industry. This project will design a set of ML solutions to address Quality-Energy balancing when delivering rich media content over heterogeneous 5G networks. The proposed solutions will balance the demands of rich media content, including multi-sensorial video and VR with their stringent timing and bitrate requirements, against energy-efficiency goals set for devices and networks. Machine Learning solutions are predicted to help make the 5G vision feasible. The project will involve network simulations and prototyping. Bringing ML solutions and algorithms into 5G infrastructure for various applications involves a number of challenges that need to be addressed before beginning any project or research:
1. Interpretability of results.
2. The computational power required by ML algorithms.
3. The long training times of some ML algorithms.
4. Maximization of the utilization of the unlicensed spectrum.
5. Opportunistic exploitation of white spaces.
6. Adaptive leasing between carriers.
7. To run new applications such as VR and multi-sensorial video above 30 GHz on a mobile phone, upcoming phones and devices need smaller, adaptive antennas to receive the higher-frequency waves.
8. The most important challenge for the delivery of rich media content is the availability of real data. Any ML algorithm needs high-quality data to work, and the type of data decides which type of learning to use. Generating datasets from computer simulators (ns-3) is not always good practice, as the ML algorithm will end up learning the rules and environment with which the simulator was programmed. The main point of using ML is to learn from real data, which will not happen when we generate datasets from simulators; the scarcity of real 5G datasets is one of the biggest challenges.
9. Another important challenge is to apply the correct kind of distribution to our specific 5G application and to determine which algorithm works well on the specific data.
The end goal of the ML algorithms is to optimize and improve the delivery of rich media content over heterogeneous 5G networks.
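As one hedged illustration of the Quality-Energy balancing idea, the sketch below scores each rung of a hypothetical bitrate ladder with a weighted utility and picks the best rung; the logarithmic quality model, the linear energy cost and all numbers are assumptions for illustration only, not the project's actual models:

```python
import math

def select_bitrate(bitrates_kbps, energy_mj_per_kb, alpha=0.7):
    """Pick the bitrate that best balances quality against energy.

    Quality is assumed logarithmic in bitrate (diminishing returns);
    energy cost is assumed linear in the data delivered. alpha weights
    quality against energy in the combined utility.
    """
    def utility(r):
        quality = math.log(1 + r)         # assumed perceptual quality proxy
        energy = energy_mj_per_kb * r     # assumed per-second energy proxy
        return alpha * quality - (1 - alpha) * energy
    return max(bitrates_kbps, key=utility)

ladder = [300, 750, 1500, 3000, 6000]  # hypothetical bitrate ladder (kbps)
print(select_bitrate(ladder, energy_mj_per_kb=0.001))  # cheap energy: high rung
print(select_bitrate(ladder, energy_mj_per_kb=0.01))   # costly energy: low rung
```

An ML version of this project would, in effect, learn the utility function (and the network/device state it depends on) from data rather than assume it.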
Using Learner Digital Footprints in Recommending Content in Learning Environments
The area of educational data mining has used some aspects of a learner's digital footprint to predict outcomes (the grade in the final examination) or to recommend content of importance to help the learner. This has included timestamped clicks of pages viewed, prior performance, and external data such as previous exam performance, timetables, earlier results, and outputs from automatic assessments. While these have each led to improved experiences for learners, who are able to gauge their own progress and can be given personal recommendations for content they should view, the models used to create these predictions and recommendations are limited: they use only a small portion of the learner's digital footprint, and that small portion is static and does not, and cannot, account for the learner's state of mind at the time of prior use of the system. To capture digital footprints, we introduce two unique and independent monitoring mechanisms: keyboard dynamics and a webcam-based attention application. Keyboard dynamics captures the patterns of a learner's typing, especially the timing associated with bigrams, i.e. pairs of adjacently typed alphanumeric characters. Similarly, the webcam-based attention application runs on the learner's laptop, monitors their facial attention while they attend an online Zoom session, read material on screen or watch an educational video, and records an attention log. These methods draw on cost-neutral sources of interaction logging to give deeper insight into a learner's state of mind, stress, and fatigue while interacting with digital content. We propose to enhance the modelling capabilities of a learning recommender system by capturing more of the learner's state of mind during interaction with the system.
Is she interested in or bored by the content? Is the learner engaged or distracted, and is that because she is not motivated or because she is tired, stressed or cognitively distracted by some other task or outside influence? The first research challenge relates to packaging keyboard-dynamics data according to the mental state of learners. Another challenge is to investigate the best models for calculating webcam attention graphs efficiently. Furthermore, comprehensive research is required into the estimation of aggregated attention graphs that can be used by recommender systems. All of this has to be done in a GDPR-compliant way, so that users feel comfortable about recording such data about themselves.
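A minimal sketch of the keyboard-dynamics signal described above: grouping inter-key latencies by bigram from a key-down event log. The event format and the timings are invented for illustration:

```python
from collections import defaultdict

def bigram_latencies(events):
    """Group inter-key latencies by adjacent character pair (bigram).

    events: list of (char, timestamp_ms) key-down events, in order.
    Returns {bigram: [latency_ms, ...]}.
    """
    latencies = defaultdict(list)
    for (c1, t1), (c2, t2) in zip(events, events[1:]):
        latencies[c1 + c2].append(t2 - t1)
    return dict(latencies)

# Hypothetical key-down log captured while a learner types "the th"
log = [("t", 0), ("h", 95), ("e", 180), (" ", 300), ("t", 420), ("h", 540)]
print(bigram_latencies(log))
# -> {'th': [95, 120], 'he': [85], 'e ': [120], ' t': [120]}
```

Drift in a learner's typical bigram latencies over a session is the kind of cost-neutral feature a fatigue- or stress-aware recommender could consume.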
Knowledge Transfer from Text Annotations towards more Effective Learning for Computer Vision
Computer Vision models have achieved human-level accuracy in certain tasks like classification and localization by leveraging large annotated datasets, leading to widespread adoption in several domains. However, in fields like medical diagnostics, adoption is still hampered by the scarcity and/or cost of annotated data. Recently, several works in few-shot learning and self-supervised learning have tried to learn from a limited amount of annotated data, but with limited success. A recent analysis (W. Chen et al., 2019) of few-shot algorithms shows that a simple baseline that finetunes a deep model is as good as current state-of-the-art few-shot learning algorithms and fares better in the realistic scenario of a non-negligible domain shift between the train and test sets. Another such analysis (Y. Asano et al., 2020) of self-supervised learning methods suggests that unlabelled images aid only in learning low-level features of the initial layers and are not sufficient to learn discriminative mid-level or high-level features. Both these analyses suggest that visual information alone is not enough to perform well on computer vision tasks in the annotation-scarce scenario. In contrast to deep learning based models, humans can learn to recognize new objects or point them out in images from just a handful of labeled examples. One possible reason humans can understand objects and concepts from a few examples is the existence of an external representation of information about the world, built from prior experiences. Inspired by this, this research project aims to explore how prior knowledge can be modeled and how it can be used to improve the performance of vision models in a limited-annotation scenario. The objectives of this research project are:
1. Develop a knowledge model of the world from a text corpus and already-annotated images. Natural language text is a rich source of knowledge; semantic relationships between objects can be modeled from language to produce a knowledge representation (G. Miller, 1995). Here we intend to explore how annotated images can be jointly modeled with natural language to produce a knowledge prior.
2. Explore how information can flow from this knowledge model to the vision model to improve performance in few-shot learning.
3. Explore how information from this knowledge model can aid in learning more discriminative feature representations in self-supervised learning.
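As a toy illustration of a language-derived knowledge prior, the sketch below computes pairwise cosine similarities between class-name embeddings; the three-dimensional vectors are invented stand-ins for embeddings that would actually be learned from a corpus:

```python
import numpy as np

def semantic_prior(class_vecs):
    """Pairwise cosine similarity between class-name embeddings.

    Semantically related classes (e.g. 'cat' and 'dog') score high, so a
    few-shot learner could use this matrix to share visual features
    between related classes it has few examples of.
    """
    v = class_vecs / np.linalg.norm(class_vecs, axis=1, keepdims=True)
    return v @ v.T

# Toy 3-d "word embeddings" standing in for corpus-learned ones
classes = ["cat", "dog", "truck"]
vecs = np.array([[0.9, 0.1, 0.0],   # cat
                 [0.8, 0.2, 0.1],   # dog
                 [0.0, 0.1, 0.9]])  # truck
P = semantic_prior(vecs)
print(P.round(2))  # cat-dog similarity far exceeds cat-truck
```

In the few-shot setting of objective 2, such a matrix could, for instance, regularise classifier weights of novel classes toward those of semantically similar base classes.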
Federated Learning on IoT edge devices using serverless ML
The Internet of Things (IoT) has become extensively involved in different aspects of modern life. Nowadays, we see sensors deployed in our surroundings, becoming an integral part of our day-to-day life. With overall improved software architecture, rapid increases in computing power, and embedded decision-making abilities in machines, users now interact with more intelligent systems, and many intelligent IoT services and applications are emerging. The typical processing pipeline for IoT applications is that all sensor data is collected and stored on the cloud, where it is used to train various machine learning algorithms; once trained, these algorithms are deployed locally at the edge devices. However, the heterogeneity of IoT devices and sensor networks is a major challenge in building these intelligent IoT applications. The ML algorithms designed for IoT devices and edge analytics have to be re-designed, re-trained, and then re-deployed for each type of IoT device joining the IoT infrastructure. The aim of this work is to devise a better architecture using serverless programming. Serverless ML could prove a major step forward in enabling seamless integration of edge analytics for a variety of IoT devices without the need to build a customized ML algorithm for each type of device, allowing data scientists to focus on the domain problem rather than on the configuration and deployment of ML algorithms over IoT devices. Moreover, serverless architecture inherently brings scalability and could prove a doorway to many intelligent applications. At the later stages of my project, I will use the serverless architecture for edge analytics to deploy distributed and federated learning algorithms on top of the large-scale IoT infrastructure.
The ultimate goal will be to automatically train and deploy distributed and federated learning on IoT devices, supporting the building of distributed intelligent IoT applications without worrying about the heterogeneity of the underlying IoT infrastructure.
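A minimal sketch of the federated learning aggregation step referred to above, assuming the standard federated-averaging rule (a dataset-size-weighted mean of client parameters); the devices, parameter vectors and dataset sizes are invented for illustration:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Federated averaging: dataset-size-weighted mean of client models.

    client_weights: list of parameter vectors, one per edge device.
    client_sizes: number of local training samples on each device.
    Devices with more data contribute proportionally more to the
    global model; raw data never leaves the device.
    """
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)

# Three hypothetical IoT devices with differently sized local datasets
w = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
n = [10, 10, 20]
print(fed_avg(w, n))  # -> [3.5 4.5]
```

In a serverless deployment, each aggregation round could be one stateless function invocation, which is what makes the pairing of the two ideas attractive.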
An Automated Diabetic Retinopathy Screening and Classification System
Diabetic retinopathy is a chronic eye disease that is a principal cause of permanent vision loss. An automated diabetic retinopathy screening and classification system can not only aid ophthalmologists with efficient, accurate and timely diagnosis of diabetic retinopathy, but can also classify diabetic retinopathy according to severity level. Depending on the severity, appropriate treatment of the patient can be initiated without delay. The research questions that will primarily be investigated are diabetic retinopathy screening, grading of diabetic retinopathy into a specific level, and identification of different retinal pathological structures. The main challenge will be to tackle the intensity similarities between pathological structures (such as exudates) and retinal features (such as the optic disc). Furthermore, optic disc detection is highly reliant on photographic illumination: poor illumination will result in a very dark optic disc region. Accuracy will be highly degraded when pathological structures are wrongly identified as retinal structures and removed, and vice versa.
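As a small illustration of the illumination problem noted above, the sketch below applies global histogram equalization (a simple stand-in, not the project's actual preprocessing) to a synthetic, poorly illuminated grayscale image, expanding its dynamic range before structures like the optic disc are segmented:

```python
import numpy as np

def equalize(img):
    """Global histogram equalization for an 8-bit grayscale image.

    Spreads crowded intensities across the full 0-255 range so that a
    dimly lit optic disc region is easier to separate from bright
    pathological structures such as exudates.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()                      # first occupied bin
    lut = (cdf - cdf_min) / (cdf[-1] - cdf_min) * 255
    lut = np.clip(np.round(lut), 0, 255).astype(np.uint8)
    return lut[img]

# Synthetic poorly illuminated image: intensities crowded in a dark band
rng = np.random.default_rng(1)
dark = rng.integers(20, 60, size=(64, 64)).astype(np.uint8)
out = equalize(dark)
print(dark.max(), out.max())  # dynamic range expands up to 255
```

Real fundus pipelines typically prefer locally adaptive methods (e.g. CLAHE), since illumination varies across the retina; the global version is just the simplest runnable instance of the idea.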
As IoT devices become more widespread, creating more and more data, it is no longer credible that cloud computing can absorb and process all of the data, analysis and decision-making involved. Whether in terms of bandwidth, computing power or algorithmic adaptability, new architectures and new machine learning (ML) techniques need to emerge to meet these new IoT needs. Edge computing is one recent technique that allows some of the processing to be performed locally before results are sent to the cloud for analysis. Using an edge processor, it is possible to move part of the intelligence and adaptability from the cloud directly to the local IoT mesh network. However, the computing power and energy requirements of such edge devices remain a limitation for ML frameworks, meaning that ML techniques must be optimized or custom designed.
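As one example of the kind of optimization these edge constraints force, the sketch below applies symmetric 8-bit weight quantization, a common model-compression technique shown here purely for illustration, to shrink a float model roughly four-fold for deployment on a constrained device:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric 8-bit linear quantization of a float weight tensor.

    Returns the int8 weights plus the scale needed to dequantize them;
    storage drops from 32 bits to 8 bits per weight.
    """
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return q.astype(np.float32) * scale

# Hypothetical layer weights for an edge-deployed model
rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, size=1000).astype(np.float32)

q, s = quantize_int8(w)
err = np.abs(dequantize(q, s) - w).max()
print(q.dtype, err)  # int8; error stays below one quantization step
```

Production edge toolchains add refinements such as per-channel scales and quantization-aware training, but the storage/accuracy trade-off is the same one sketched here.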