AI is helping to combat online abuse from Wimbledon to Main Street

The immaculate lawns of Wimbledon are now home to more than just world-class tennis.

This year, the All England Lawn Tennis Club has gained a new ally in the fight against online harassment: artificial intelligence. The move reflects a broader trend across industries as organizations grapple with the growing challenge of digital abuse.

Wimbledon’s AI system now scans players’ social media accounts, identifying and flagging abusive content in 35 languages, The Guardian reports. The move responds to the experiences of players such as Emma Raducanu and Naomi Osaka, who have previously withdrawn from social media because of online harassment.

According to experts, online businesses are increasingly adopting AI-based reputation monitoring systems. These tools scan social media, review sites, and online forums, providing real-time alerts on mentions of a company or product. By identifying negative sentiment or emerging issues early, businesses can respond promptly to customer concerns. AI-driven analysis of review trends also offers insights for improving products and customer service.
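The monitoring loop described above can be sketched in a few lines. This is a deliberately minimal illustration, assuming a tiny keyword lexicon as a stand-in for a real sentiment model; the word lists, `score`, and `monitor` names and the alert threshold are all hypothetical, not any vendor's API.

```python
import re

# Minimal sketch of AI-style reputation monitoring: scan incoming brand
# mentions, score sentiment with a small keyword lexicon (a stand-in for
# a real NLP model), and surface the ones that warrant a response.
NEGATIVE = {"broken", "scam", "terrible", "refund", "wrong", "late"}
POSITIVE = {"great", "love", "fast", "perfect", "recommend"}

def score(mention: str) -> int:
    """Crude sentiment score: +1 per positive word, -1 per negative word."""
    words = re.findall(r"[a-z]+", mention.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def monitor(mentions, alert_threshold=-1):
    """Return the mentions whose sentiment falls at or below the threshold."""
    return [m for m in mentions if score(m) <= alert_threshold]

alerts = monitor([
    "Love this store, fast shipping",
    "Item arrived broken, want a refund",
    "Totally fine experience",
])
print(alerts)  # only the negative mention is flagged
```

A production system would replace the keyword lexicon with a trained classifier and stream mentions from social and review APIs, but the alert-on-threshold structure is the same.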

“Internet abuse is not just a nuisance, it is a fundamental problem,” Peak Indicators CEO Nick Louie told PYMNTS.

For example, an eCommerce clothing retailer might spot a pattern of size discrepancy complaints, allowing it to address the issue before it impacts its reputation. This approach aims to maintain brand image, build customer trust, and potentially prevent lost sales.

Digital Abuse

Online abuse has become a widespread problem, affecting both individuals and businesses. Industry estimates suggest that corporate reputation can account for as much as 63% of a company’s market value, underscoring the financial stakes of unchecked online harassment.

To put the issue in context, the Internet Crime Complaint Center received 9,587 complaints of harassment and bullying in 2023.

The problem extends beyond high-profile athletes and large corporations. Local businesses have also become vulnerable to digital attacks, particularly fake reviews. In 2023, Google reported blocking more than 170 million fake reviews, a 45% increase over the previous year. This jump underscores the escalating scale of online disinformation and its potential to damage reputations.

As online abuse grows more common and sophisticated, AI has emerged as a key tool in the fight against digital harassment. These systems use algorithms to analyze patterns and quickly flag suspicious activity.

Google’s approach illustrates the potential of AI in this area. The company’s new algorithm examines review patterns over time, spotting red flags like identical reviews across multiple business pages or sudden spikes in extreme ratings.
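The two red flags named above, identical reviews appearing across multiple business pages and sudden spikes in extreme ratings, can be sketched as simple pattern checks. This is an illustrative simplification under assumed data shapes, not Google's actual algorithm; the function names and the spike threshold are hypothetical.

```python
from collections import defaultdict

def duplicate_reviews(reviews):
    """reviews: list of (business_id, text) pairs.
    Flag any review text posted to two or more different business pages."""
    pages = defaultdict(set)
    for business, text in reviews:
        pages[text].add(business)
    return {text for text, businesses in pages.items() if len(businesses) >= 2}

def rating_spikes(daily_ratings, factor=2):
    """daily_ratings: dict mapping day -> list of star ratings (1-5).
    Flag days where the count of extreme ratings (1 or 5 stars) exceeds
    `factor` times the average daily count of extremes."""
    extremes = {day: sum(r in (1, 5) for r in ratings)
                for day, ratings in daily_ratings.items()}
    avg = sum(extremes.values()) / len(extremes)
    return [day for day, n in extremes.items() if n > factor * avg]
```

Real systems layer many more signals (reviewer account age, posting velocity, text similarity rather than exact matches), but the core idea is the same: compare each new review against baseline patterns and escalate outliers.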

However, AI systems are not infallible. False positives can occur, and context-dependent harassment can still slip through filters.

“Artificial intelligence can analyze information chaos faster than ever, but it is not foolproof,” Louie said.

He added that human oversight remains crucial when interpreting content flagged by AI.

Proactive Measures

Some organizations are extending AI capabilities beyond detection to proactive response strategies. These systems can suggest appropriate responses to negative feedback, giving companies a 24/7 crisis management team.

This proactive approach allows organizations to identify emerging issues before they escalate and develop strategies to mitigate potential crises. That capability is especially valuable in an era when online discourse can quickly shape public opinion and influence business outcomes.

The fight against online abuse is not limited to businesses and athletes. Executives and other public figures are increasingly finding themselves in the digital crosshairs. In January, ReputationDefender, part of digital privacy brand Gen, introduced Total Radius, an AI-based service that aims to protect high-profile individuals from threats that could compromise their physical safety, in response to growing dangers online.

“Today’s executives, professionals and others who interact with the public are faced with difficult situations in the real world, which are often amplified and analyzed in the online world,” Gen President Ondrej Vlcek said at the time of the launch. “This could lead to physical and digital threats against them and those close to them, including family members.”

Artificial intelligence has been used for some time to protect against reputational attacks by identifying fake websites and taking them down before they can harm a targeted company, SlashNext CEO Patrick Harr told PYMNTS.

“AI can also be used, as in this case, to scour the dark web and the general web for unfounded reputation attacks and protect executives, VIPs and employees from these attacks by developing countermeasures or devaluing/hiding false search results,” he said.

Reality Defender, for example, uses artificial intelligence to help companies detect deepfakes, Zendata CEO Narayana Pappu told PYMNTS.

“This technology can be used to protect a company’s or employee’s reputation by verifying media content, detecting impersonation attempts, and protecting employee privacy by preventing the spread of fake content that could violate an employee’s privacy or be used for harassment,” he said.

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.