Anyone who has used social media is likely familiar with the CAPTCHA, an automated test that verifies a user is a human rather than a robot, or “bot” for short. Though the CAPTCHA is meant to protect against spam bots, automated accounts still exist in numbers significant enough to influence the public on a variety of topics. U.S. News reports that bots may now be able to sway public opinion not only on political elections, but also on people’s personal health.
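To make the idea concrete, the sketch below implements a bare-bones text challenge in Python. It is only an illustration of the challenge-and-verify pattern, not how any real CAPTCHA service works: a real CAPTCHA renders the code as a distorted image or audio clip precisely so software cannot read the answer the way a human can, and the function names here are invented.

```python
import random
import string

def generate_challenge(length=6):
    """Create a random code the user must type back. A real CAPTCHA
    would render this as a distorted image so that automated software
    cannot simply read the answer."""
    return "".join(random.choices(string.ascii_uppercase + string.digits, k=length))

def verify_response(challenge, response):
    """Pass the test only if the typed response matches the challenge."""
    return response.strip().upper() == challenge

challenge = generate_challenge()
print(f"To prove you are human, type this code: {challenge}")
# In an interactive session the check would look like:
# print("Verified." if verify_response(challenge, input("> ")) else "Blocked.")
```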

According to Jon-Patrick Allem, a research scientist at the University of Southern California’s Keck School of Medicine, bots are being used “to talk about health-related behaviors.” Among the more controversial topics bots discuss are gun control, reproductive rights, HIV/AIDS medication, and the alleged association between vaccinations and autism.

When bots obscure truths about public health, they jeopardize every Internet user’s efforts to make well-informed decisions about their own health and the health of their loved ones.

Image Source: JGI/Jamie Grill

Allem’s and other experts’ concerns about malicious bots are grounded in real research like his own. In one study, his results revealed that bots were promoting vaping as a safe alternative to conventional cigarettes, even though e-cigarettes’ long-term health effects remain unknown. In another example, APHA communications specialist Megan Lowry noted that after APHA posted a flu shot meme on Facebook, a flood of anti-vaccination comments appeared within moments. Allem supported her suspicion that the commenters were actually bots, arguing that only bots, which can be programmed to reply to Facebook posts automatically, could have commented so quickly.
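Allem’s point about speed is worth unpacking: an auto-reply bot is essentially a loop that polls for new posts and fires a canned comment within seconds, far faster than any human could type. The sketch below illustrates that pattern; the client class and its method names are hypothetical stand-ins for a real platform API such as Facebook’s Graph API.

```python
import time

class HypotheticalClient:
    """Stand-in for a real social-media API client (e.g., Facebook's
    Graph API). The method names here are invented for illustration."""

    def get_new_posts(self, page):
        """Return posts on the page that the bot has not yet answered."""
        raise NotImplementedError

    def post_comment(self, post, message):
        """Leave a comment under the given post."""
        raise NotImplementedError

# A scripted, canned message; an anti-vaccination bot would fire
# prewritten text like this regardless of the post's content.
CANNED_REPLY = "(prewritten anti-vaccination talking point)"

def run_bot(client, page, poll_seconds=5):
    """Poll the page every few seconds and reply to every new post.
    The loop never tires, which is why a burst of comments arriving
    within moments of a post going up is a red flag for automation."""
    while True:
        for post in client.get_new_posts(page):
            client.post_comment(post, CANNED_REPLY)
        time.sleep(poll_seconds)
```

Because such a loop runs continuously, even a handful of bots can flood a post within moments of publication, matching the burst of near-instant comments Lowry described.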

Even if experts are raising false alarms, fake accounts can still lead real users to real misjudgments, and legal action against automated accounts is tricky to enforce. For example, Dr. Ryan Calo of the University of Washington School of Law states that enforcement may require someone to come forward and attest that an account is not run by a bot, which would violate that person’s right to remain anonymous online. Beyond these specific legal workings stands the more general observation that technology evolves faster than the law meant to contain it, as seen with self-driving vehicles, which calls into question whether law enforcement could contain the rise of automated accounts at all.

If bots can be created for misdeeds, they can fortunately be created for good too. Allem is currently working on an anti-tobacco Twitter bot named Notobot, which uses machine learning to intervene upon detecting a human user who posts pro-tobacco tweets. In this way, Notobot can counter the negative effects of tobacco-normalizing bots.
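Allem has not published Notobot’s internals, but the general detect-then-intervene recipe can be sketched with an ordinary text classifier. The example below uses scikit-learn; the training tweets, labels, and reply message are all invented for illustration, and a real system would be trained on a large hand-labeled corpus.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A tiny invented training set; a real system would need thousands of
# hand-labeled tweets before its predictions could be trusted.
tweets = [
    "just bought a fresh pack, time for a smoke break",   # pro-tobacco
    "nothing beats a cigarette with my morning coffee",   # pro-tobacco
    "three months smoke-free and feeling great",          # not pro-tobacco
    "quitting cigarettes was the best decision I made",   # not pro-tobacco
]
labels = [1, 1, 0, 0]  # 1 = pro-tobacco, 0 = not pro-tobacco

# TF-IDF features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(tweets, labels)

def maybe_intervene(tweet_text):
    """Return a public-health reply if the tweet is flagged pro-tobacco,
    otherwise stay silent."""
    if model.predict([tweet_text])[0] == 1:
        return "Thinking about quitting? Free help is available: 1-800-QUIT-NOW"
    return None

print(maybe_intervene("really craving a cigarette right now"))
```

Replying only on a positive prediction keeps the bot quiet by default, which matters for an intervention account: too many false positives would make it as spammy as the bots it is meant to counter.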

Until we, the human users, receive clearer guidelines for dealing with ill-intentioned bots and fighting misinformation, the “Block” and “Report” mechanisms still exist for us to deploy against accounts suspected of spam or other deceptive content, human or bot.

Feature Image Source: Code Coding Programming CSS by StockSnap 

Cath Ashley

Cath is a UC Berkeley alumna with a Molecular and Cell Biology degree and a Music minor. She is interested in healthcare, public health, health equity, youth/student empowerment, and cats. Her hobbies include chess, social dancing, and soundtrack analysis.
