Interdisciplinary graduate student team aims for holistic approach to cybersecurity


A team of graduate students in information sciences, psychology, and special education is working toward a comprehensive approach to cybersecurity. Their project, “Language of Deceivers: Understanding Linguistic Characteristics of Scams and Bias on Social Media,” funded by the National Science Foundation (NSF), aims to create a more integrated understanding of how different populations, such as individuals with autism spectrum disorder (ASD) and attention deficit hyperactivity disorder (ADHD), are affected by cybercrime and how they can be better protected.

"We're taking a holistic approach, trying to prevent scams, trying to understand how people evaluate scams. Then, we're trying to inform people about the scams and see how all these things kind of come together to make the world a little bit safer," said Anuridhi Gupta, a doctoral student in information sciences and technology.  

The researchers are working on several fronts, drawing on their backgrounds in information sciences and technology, psychology, special education, and linguistics. They are focusing on enhancing machine learning models for scam detection, understanding human cognitive responses to scams, and recommending best practices for individuals with ASD or ADHD online.

Machine-learning scam detection 

During the initial phase of the research, Gupta analyzed a machine learning model designed to detect scams. Running the model on a dataset of 31,000 tweets, she found its accuracy in detecting scams was only 71 percent. Moreover, the false positive rate was notably high at 32 percent, meaning 32 out of every 100 scam messages were incorrectly identified as legitimate, highlighting a significant bias in the model's ability to classify scam messages accurately.
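The error rates quoted above can be illustrated with a minimal sketch of how a binary scam classifier is scored. The labels below are toy values, not the study's data, and conventions for which class counts as "positive" vary between tools; this version reports both directions of error explicitly.

```python
def detection_metrics(y_true, y_pred):
    """Score a binary scam classifier (1 = scam, 0 = legitimate).

    Returns accuracy, the false-alarm rate (legitimate messages
    flagged as scams), and the miss rate (scam messages passed
    through as legitimate).
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    false_alarm_rate = fp / (fp + tn) if (fp + tn) else 0.0
    miss_rate = fn / (fn + tp) if (fn + tp) else 0.0
    return accuracy, false_alarm_rate, miss_rate


# Hypothetical labels for ten messages, for illustration only:
truth = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
preds = [1, 1, 1, 0, 0, 0, 0, 0, 1, 1]
acc, fa, miss = detection_metrics(truth, preds)
print(acc, fa, miss)
```

In practice these counts come straight from a confusion matrix; reporting accuracy alone hides exactly the kind of class imbalance the researchers flagged.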

Individuals’ scam detection 

Alongside Gupta’s research, psychology doctoral student Tomas Lapnas assessed how people’s eye movements varied as they read 100 different tweets and judged whether each was a scam. He was intrigued to find a negative correlation between the amount of time people spent evaluating tweets and their accuracy in detecting scams.

"What we find--kind of strangely--is that tracking metrics such as how much time people are spending looking at the tweets and how much time they spend reevaluating the tweets actually has a negative correlation with their performance," he said. In other words, the longer a subject looked at a scam tweet, the more likely the subject was to fall victim to the scam.
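A negative correlation of this kind is typically quantified with a Pearson coefficient between per-participant dwell time and detection performance. The sketch below uses hypothetical numbers, not the study's eye-tracking data, purely to show the calculation.

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


# Hypothetical per-participant data: mean dwell time on tweets (seconds)
# and proportion of scams correctly identified.
dwell = [2.1, 3.4, 4.8, 6.0, 7.5]
score = [0.82, 0.74, 0.70, 0.61, 0.55]
print(pearson_r(dwell, score))  # negative: longer looking, worse detection
```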

Lapnas found individuals often rely on quick heuristics, such as the presence of financial information, to judge whether a tweet is a scam; when such heuristics are absent or misleading, their performance deteriorates.

"Their sensitivity was much higher when there was financial information present, meaning they had better detection probabilities," he said. 
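In detection studies, "sensitivity" is commonly measured with the signal-detection statistic d', which separates true discrimination ability from response bias. The rates below are hypothetical, not Lapnas's results; the sketch only shows how a financial cue raising the hit rate translates into higher sensitivity.

```python
from statistics import NormalDist


def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)


# Hypothetical rates: the same false-alarm rate, but more scams caught
# when a financial cue is present in the tweet.
with_cue = d_prime(0.85, 0.20)
without_cue = d_prime(0.60, 0.20)
print(with_cue > without_cue)  # True: the cue raises sensitivity
```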

This reliance on simple heuristics can be problematic. Scammers are becoming increasingly sophisticated, crafting messages that do not fit easily recognizable patterns, thus evading detection.  

Overall, Gupta and Lapnas’ findings indicate that current scam detection strategies, both human- and machine-based, need to evolve to consider more subtle cues and patterns. 

Educational and accessibility implications  

Ensuring accessibility in IT training modules is crucial, said Hannah Choi, who is earning a Master of Education in special education. Features such as speech-to-text, image enlargement, and adjustable reading speeds can significantly aid individuals with disabilities in scam detection.  

"If the accessibility features are not always available, it gets harder," she said. Moreover, the study highlighted the necessity of updating training materials regularly to include the latest scam detection techniques and awareness programs. 

Future directions 

The team’s research is ongoing, with efforts to include more individuals with ASD and ADHD to refine the analysis of eye-tracking data and detection performance. Their ultimate goal is to develop training modules and tools that improve scam detection across different populations, as well as to enhance the accuracy of machine learning models. This involves creating models that can better identify less obvious scams by analyzing a broader range of indicators beyond financial information alone. 

As cyber threats continue to evolve, so must our strategies for combating them. This team’s research underscores the importance of a nuanced approach, combining advanced machine learning techniques with an understanding of human cognitive processes. By addressing the biases in current models and improving educational and accessibility features, the researchers hope to make significant strides toward a safer digital environment for all.