
Madhuri Sawant

Project Title

Advancing Digital Citizenship with XAI: Multilingual & Multimodal Approaches for Detecting Problematic Speech in Social Media

Project Description

Monitoring and moderating problematic speech on social media platforms has become increasingly complex due to the exponential growth of user-generated data. Traditional approaches have proven inadequate in effectively detecting and addressing problematic speech, given social media content’s volume, diversity, and linguistic variations. Furthermore, the cross-cultural nature of social media introduces additional challenges, as what may be acceptable in one cultural context can be deeply offensive in another.

This research proposal aims to leverage multilingual and multimodal approaches to develop innovative strategies for detecting and mitigating problematic speech in social media. By integrating advanced natural language processing (NLP) techniques, computer vision algorithms, and Explainable Artificial Intelligence (XAI) methods, the accuracy, granularity, and transparency of problematic speech detection systems can be enhanced.

The consequences of problematic speech extend beyond the digital realm, impacting mental health, perpetuating societal divisions, and exacerbating offline conflicts. Addressing this issue requires a holistic approach that combines technological advancements with societal awareness and responsible digital citizenship. Thus, this research proposal emphasises the importance of fostering digital literacy and promoting ethical online behaviour, empowering users to critically assess and respond to problematic speech.

Aligned with growing concerns surrounding problematic speech, this research draws inspiration from comprehensive digital policy frameworks, such as the EU’s efforts to tackle online disinformation. By aligning with these initiatives, the research aims to contribute to the broader discourse on digital governance, providing actionable insights and recommendations for policymakers, social media platforms, and civil society organisations.
Through the proposed research objectives, methodology, and expected outcomes, this study seeks to make a meaningful contribution to detecting and mitigating problematic speech in social media. By embracing multilingual and multimodal approaches and promoting digital literacy, the aim is to create a more inclusive and respectful online environment where freedom of expression coexists with responsible and accountable communication practices.

Aim: This research aims to develop effective and robust multilingual and multimodal approaches for detecting and mitigating problematic speech in social media. By leveraging advanced natural language processing techniques, computer vision algorithms, and Explainable AI methods, this research seeks to enhance the accuracy, granularity, and transparency of problematic speech detection systems, ultimately fostering a safer and more inclusive online environment.

The research objectives that define the scope of the project are:

  • Investigate and analyse the characteristics and patterns of problematic speech across different languages and cultural contexts, considering linguistic variations, cultural nuances, and contextual factors.
  • Develop advanced multilingual models capable of accurately detecting and categorising problematic speech by integrating state-of-the-art natural language processing (NLP) techniques.
  • Explore the integration of multimodal approaches by leveraging both textual and visual information from social media posts, enhancing the detection capabilities and addressing the limitations of text-based analysis.
  • Incorporate Explainable AI (XAI) techniques to provide transparent and interpretable explanations for the detection and moderation of problematic speech, promoting accountability, trust, and responsible online behaviour.
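The last objective can be illustrated with a minimal sketch. The snippet below is a deliberately simplified, transparent keyword-weight scorer that mimics only the *shape* of an explainable detector: it returns a label together with the per-token evidence behind it. It is not the proposed system, which envisions multilingual transformer models and dedicated XAI methods (e.g. feature-attribution techniques); the lexicon entries, weights, and threshold here are all invented for illustration.

```python
# Toy illustration only: a transparent keyword-weight scorer showing how
# a prediction can be returned together with per-token explanations, in
# the spirit of the XAI objective above. All values are hypothetical.

# Hypothetical weighted lexicon; entries and weights are invented.
LEXICON = {
    "idiot": 0.9,
    "hate": 0.7,
    "stupid": 0.6,
}

THRESHOLD = 0.5  # illustrative decision boundary


def classify_with_explanation(text: str):
    """Return a label plus the per-token contributions that produced it."""
    tokens = text.lower().split()
    contributions = {t: LEXICON[t] for t in tokens if t in LEXICON}
    score = max(contributions.values(), default=0.0)
    label = "problematic" if score >= THRESHOLD else "acceptable"
    return label, contributions


label, evidence = classify_with_explanation("You are such an idiot")
print(label, evidence)  # the evidence dict is the "explanation"
```

In a realistic system, the lexicon lookup would be replaced by a multilingual classifier and the contributions by model-derived attributions, but the interface idea is the same: every moderation decision is accompanied by interpretable evidence that users and moderators can inspect.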