
Jiwei Zhang

Student

Project Title

Explainable Natural Language Processing for Legal Text Analysis

Project Description

As a result of advances in machine learning, particularly neural networks, a growing number of state-of-the-art systems employ deep learning to solve real-world problems. Because of the complexity of its real-world data, Natural Language Processing (NLP) is a domain in which deep learning techniques have become dominant, particularly for tasks involving long text documents.

Document-level classification is an important task in the NLP research community because it has a wide range of practical applications, including legal text analysis, sentiment analysis and topic labelling of news articles. A key difficulty in document-level classification is modelling the relations between sentences, which is not easily achievable with traditional approaches such as regression models. To achieve document-level understanding, current approaches typically rely heavily on transformer-based neural network modules, such as BERT and its variants (e.g. DocBERT and RoBERTa), XLNet and GPT-3.
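
To make this concrete, the following is a minimal sketch of how a transformer model such as BERT can be applied to document classification using the Hugging Face transformers library. The model name, the two-label setup and the example document are illustrative assumptions only; they are not the models or data used in this project.

# Minimal sketch (assumes the `transformers` and `torch` packages are installed;
# "bert-base-uncased" and the two-label setup are illustrative choices only).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

document = "The appellant submits that the tribunal erred in law ..."  # hypothetical legal text
# Standard BERT accepts at most 512 tokens, so the document is truncated here;
# handling full-length documents is one of the difficulties noted above.
inputs = tokenizer(document, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
predicted_class = logits.argmax(dim=-1).item()
print(predicted_class)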

However, as deep learning neural networks become more widespread, additional obstacles emerge. In most cases, when a neural network is used for a downstream task, users see only the predicted results, not the reasons behind those predictions. Neural networks are often referred to as “black boxes” because the meaning of their weight matrices cannot be interpreted directly; in other words, people have difficulty understanding the relationship between the inputs and the outputs. Even though prediction accuracy may be the most important criterion in some disciplines, the prediction process must still be transparent, understandable and interpretable.

In legal text classification, documents are typically long and use domain-specific vocabulary. For example, in the legal AI community there is much research on tasks such as categorising legal cases based on their legal opinions and classifying legal regulations. Classification models generally have complex architectures consisting of several embedding and neural network modules, which limits their interpretability. As a result, industry and legal departments rarely use these results directly in the real world, because there is little understanding of why the predictions were made. Legal and business leaders are typically reluctant to rely on opaque models of this type in their decision-making processes.

The most vibrant area of research addressing this problem is eXplainable Artificial Intelligence (XAI). In this project, we will investigate XAI approaches for long-text document classification in the legal domain. Our research will include, but not be limited to, current reasoning approaches, state-of-the-art models for long-text classification, the interpretability of neural networks, machine learning on weakly-labelled data, and the application of these technologies within the legal domain. With this project, we aim to retain the advantages that deep learning approaches have brought while achieving transparency and interpretability in long-text classification for the legal domain, thereby providing a more reliable basis upon which decisions can be made.
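
As one illustration of the kind of post-hoc explanation technique XAI offers, the sketch below applies LIME to a toy text classifier to show which words drove a prediction. The TF-IDF/logistic-regression pipeline, the toy training sentences and the class names are assumptions made purely for illustration; they do not represent the models or data that will be used in this project.

# Minimal sketch (assumes the `lime` and `scikit-learn` packages are installed;
# the training texts, labels and class names below are toy values for illustration).
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from lime.lime_text import LimeTextExplainer

train_texts = ["the court dismissed the appeal", "the parties settled the claim"]
train_labels = [0, 1]

# A simple stand-in classifier exposing predict_proba, as LIME requires.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(train_texts, train_labels)

explainer = LimeTextExplainer(class_names=["dismissed", "settled"])
explanation = explainer.explain_instance(
    "the appeal was dismissed by the court",
    classifier.predict_proba,
    num_features=5,
)
# Each (word, weight) pair indicates how strongly that word pushed the
# prediction towards or away from the predicted class.
print(explanation.as_list())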