Have you ever shared a fake news story? If so, you have something in common with the twenty-three percent of Americans who admitted doing so in a 2016 survey. And in a separate study from 2021, ninety-one percent of respondents blamed social media companies for the spread of misinformation. Michael Feffer, ARCS Pittsburgh Chapter Scholar and PhD candidate in social computing at Carnegie Mellon University, explains how artificial intelligence fuels the problem.
Since taking undergraduate courses on artificial intelligence (AI) and machine learning at the Massachusetts Institute of Technology (MIT), Feffer has been enamored with the technology. “Back then, it was learning what could be done,” explains Feffer. “Now, I’m doing a PhD program, where I’m studying these techniques and how they affect society.”
Before delving into the pros and cons of this trendy and often sensationalized technology, Feffer notes that AI is a moving target, since it covers a wide range of capabilities. “Generative art and text systems like ChatGPT are examples,” shares Feffer, “but there are also examples like text processing, image processing, and facial recognition and tagging.”
Feffer points to several significant benefits of AI, including automation, raw processing power, and the ability to make sense of large amounts of data by finding patterns. Everyday examples include crunching numbers and records, copying text from an image into a Word document, and depositing a check by scanning it with a phone.
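For readers curious what the copy-text-from-an-image step might look like under the hood, here is a minimal sketch in Python. It assumes the open-source Tesseract OCR engine plus the pytesseract and Pillow packages, and the file name is a placeholder; it illustrates optical character recognition in general, not any particular product.

```python
# Minimal OCR sketch: pull text out of a scanned image, as in the
# document and check-scanning examples above.
# Assumes Tesseract is installed along with pytesseract and Pillow.
from PIL import Image
import pytesseract

def extract_text(image_path: str) -> str:
    """Return whatever text the OCR engine recognizes in the image."""
    image = Image.open(image_path)              # load the scan or photo
    return pytesseract.image_to_string(image)   # run character recognition

if __name__ == "__main__":
    # "scanned_document.png" is a hypothetical file name for illustration.
    print(extract_text("scanned_document.png"))
```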
However, there are several downsides to incorporating this technology into our society. The first, and the one most people raise, is job security. Because AI can automate tasks far faster than a person, there is an ongoing fear that it will replace jobs such as driving and data analysis. Feffer frames the dilemma: “If you’re putting these people out of work, is there a safety net? Or what will those people be able to fall back on?” He suggests that people working in the AI space need to grapple with these questions.
One of the biggest problems with AI is that it creates and reinforces discrimination and bias. The cause? Our own human flaws. “Unfortunately, there are various biases that currently exist in society, ranging from who gets employed to who gets arrested and sent to prison,” says Feffer, “and because humans are flawed, AI doesn’t know any better; it excels at pattern recognition and simply replicates those patterns of discrimination and bias.”
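To make that replication concrete, here is a small, entirely synthetic illustration (not drawn from Feffer’s research): a standard classifier is trained on hiring decisions that were historically skewed against one group, and it learns to reproduce the skew, because spotting and repeating patterns is all it does.

```python
# Toy illustration of bias replication, using synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)        # made-up protected attribute (0 or 1)
skill = rng.normal(0.0, 1.0, n)      # skill is distributed identically in both groups
# Historical labels were biased: group 1 was hired less often at the same skill level.
hired = (skill - 0.8 * group + rng.normal(0.0, 0.5, n)) > 0

model = LogisticRegression().fit(np.column_stack([group, skill]), hired)

# Ask the model about identical candidates who differ only in group membership.
for g in (0, 1):
    preds = model.predict(np.column_stack([np.full(n, g), skill]))
    print(f"Predicted hiring rate for group {g}: {preds.mean():.2f}")
# The model predicts a lower hiring rate for group 1 at identical skill,
# faithfully replicating the historical pattern rather than correcting it.
```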
The second major problem is social media polarization. Because AI is so good at identifying patterns, many social media platforms use it to maximize engagement. It’s why Netflix can recommend shows you’re likely to watch, and why social media platforms surface news, sometimes fake or polarizing, that keeps users on the platform and engaging with content.
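A stripped-down sketch of that engagement-driven ranking might look like the following. The posts and scores are invented, and real recommendation systems are far more elaborate, but the core objective shown here, sorting purely by predicted engagement, is the mechanism described above.

```python
# Simplified engagement-first feed ranking with invented example posts.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_engagement: float  # e.g., output of a click or watch-time model

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order the feed purely by predicted engagement, with no other check."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

feed = rank_feed([
    Post("Local charity meets its fundraising goal", 0.12),
    Post("Outrageous claim about a political rival", 0.87),  # polarizing but engaging
    Post("City council publishes annual budget report", 0.05),
])
for post in feed:
    print(post.title)
# The polarizing item rises to the top because engagement is the only objective.
```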
Feffer reassures us that it’s not all doom and gloom; there are solutions. The first is talking with individuals who have experienced bias or discrimination caused by AI and learning what can be built to make these systems better. The second is applying mathematical techniques that adjust a model’s outputs so they are less discriminatory. He confirms, “Various approaches are employed and explored right now to remedy the harms caused by these systems.”
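As one hedged example of what such a mathematical adjustment could look like (a simple post-processing idea, not necessarily the specific techniques Feffer works with), a decision threshold can be chosen separately for each group so that approval rates come out roughly equal. The scores below are synthetic.

```python
# Sketch of per-group threshold adjustment on synthetic model scores.
import numpy as np

def threshold_for_rate(scores: np.ndarray, target_rate: float) -> float:
    """Pick the score cutoff that approves roughly target_rate of this group."""
    return float(np.quantile(scores, 1.0 - target_rate))

rng = np.random.default_rng(1)
scores_a = rng.beta(5, 2, 1000)   # group A tends to receive higher model scores
scores_b = rng.beta(2, 5, 1000)   # group B tends to receive lower model scores

target = 0.30                     # aim to approve about 30% of each group
for name, scores in (("A", scores_a), ("B", scores_b)):
    cutoff = threshold_for_rate(scores, target)
    rate = (scores >= cutoff).mean()
    print(f"Group {name}: cutoff {cutoff:.2f}, approval rate {rate:.2f}")
# A single shared cutoff would approve far more of group A; per-group cutoffs
# equalize approval rates, one of several remedies researchers explore.
```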
You can be part of the solution too! Feffer shares, “We need to have healthy amounts of skepticism and do some due diligence and user-based fact-checking” to counteract untrustworthy news and social media polarization.
Feffer is grateful for the scientific community he has gained through the ARCS Scholar Award. “It’s connected me to interesting individuals and people doing interesting research,” he shares, “and I’m able to keep up with research and areas beyond my expertise.”