Sayart.net - K-pop Idols Face Persistent Harassment on Weverse, Raising Concerns Over Platform’s Moderation

Courtesy of Weverse

Despite Weverse’s claims of maintaining a safe and respectful community, K-pop idols using the Hybe-operated global fandom platform continue to be targeted with unfiltered hate comments and cyber harassment. Fans have repeatedly expressed concern that such messages violate artists’ rights, yet existing moderation measures appear insufficient to prevent these incidents.

On January 8, aespa’s Winter encountered malicious comments while communicating with fans on Weverse. A user repeatedly posted abusive remarks, including "Why aren’t you dead yet?" alongside cigarette emojis. Winter responded calmly, saying, "Smoking is bad for your health," with a heart emoji. When the user insulted her appearance, calling her "dog-faced," Winter replied with "puppy-faced," subtly deflecting the attack. While she handled the situation gracefully, fans voiced concern about the potential emotional impact and criticized Weverse for allowing such comments to persist. With the platform failing to intervene, SM Entertainment, aespa’s agency, announced that it would collect evidence of malicious comments and take legal action.

This is not the first time Weverse has faced criticism for failing to prevent cyberbullying. In January 2023, during a live broadcast by fromis_9’s Lee Chae Young, a malicious user targeted fellow member Baek Ji Heon and her parents with offensive remarks. The situation caused visible discomfort for Chae Young, who abruptly ended the broadcast, citing the late hour. Fromis_9’s fanbase, Flover, later issued a statement condemning the platform’s inaction, highlighting that similar incidents had been occurring for over a year.

In September 2022, Oh My Girl’s YooA also became a victim of hate comments on Weverse. Unlike other idols who chose to ignore such messages, YooA responded sarcastically, saying, "Writing hate comments like that really takes effort. You deserve applause. Let’s give them a round of applause—stay strong!" While her reaction amused some fans, it underscored Weverse’s persistent inability to moderate its platform effectively.

On January 24, Weverse outlined its community guidelines, stating that harmful messages are flagged and that users who violate the rules may have their posts restricted, their community access limited, or, in repeated cases, face permanent suspension. The platform also highlighted its AI moderation tool, Cleanbot, developed in collaboration with Naver, which is designed to detect and remove malicious comments. Additionally, messages sent to artists through Weverse DM are monitored by operators and automated detection technologies.

Despite these measures, many fans argue that the problem remains unresolved. A K-pop fan active on Weverse, speaking anonymously, stated, "Hate comments attacking idols have always existed. There's no effective tool to prevent cyberbullying targeting artists." The fan also revealed that some artists have taken to posting screenshots of malicious messages themselves, openly criticizing the platform’s failure to take action.

As a countermeasure, some fan communities have begun responding to hate comments in real time. When malicious remarks appear in chat rooms or live broadcasts, fans flood the space with positive messages to push the negative ones out of view, ensuring that artists do not have to see them. "This is by far the most effective countermeasure," one fan explained, highlighting the lack of sufficient platform intervention.

With repeated incidents involving multiple artists, fans are demanding stricter enforcement of community guidelines and a more proactive approach to moderation. While Weverse has positioned itself as a major platform connecting idols with fans, its failure to fully address online harassment raises concerns about the mental well-being of artists who use the service.


Sayart / ReaA JUNG, queen7203@gmail.com
