July 2, 2024

Koo launches new safety features for proactive content moderation


India’s microblogging platform Koo has announced the launch of new proactive content moderation features designed to give users a safer and more secure social media experience. The new features, developed in-house, can proactively detect and block any form of nudity or child sexual abuse material in less than 5 seconds, label misinformation, and hide toxic comments and hate speech on the platform.

According to the company, it has identified a few areas with a high impact on user safety, namely child sexual abuse material, toxic comments and hate speech, and misinformation and disinformation, and it is working to actively remove their occurrence on the platform. The new content moderation features are an important step towards this goal.

Safety Features:

Nudity: Koo’s in-house ‘No Nudity Algorithm’ proactively detects and blocks any attempt by a user to upload a picture or video containing child sexual abuse material, nudity, or sexual content. Detection and blocking take less than 5 seconds.

Toxic Comments and Hate Speech:

Koo actively detects and hides or removes toxic comments and hate speech in less than 10 seconds, so they are not available for public viewing.

Violence:

Content containing excessive blood, gore, or acts of violence is overlaid with a warning for users.

Impersonation:

Koo’s in-house ‘MisRep Algorithm’ constantly scans the platform for profiles that use the content, photos, videos, or descriptions of well-known personalities, in order to detect and block impersonated profiles. On detection, the pictures and videos of well-known personalities are immediately removed from the profile, and such accounts are flagged for monitoring of bad behavior in the future, the company claims.
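Koo has not published any of its moderation code, but the features above amount to a simple policy: each detected category of content maps to a different enforcement action (block outright, hide from public view, or show behind a warning). A minimal sketch of that mapping, with entirely hypothetical names and thresholds, might look like this:

```python
# Illustrative sketch only -- Koo's actual systems are proprietary.
# All category names, thresholds, and action labels here are
# hypothetical, showing how detections could map to the actions
# described in the article: block, hide, or overlay a warning.

from dataclasses import dataclass

BLOCK, HIDE, WARN, ALLOW = "block", "hide", "warn", "allow"

# Category -> action, mirroring the feature list above.
POLICY = {
    "csam_or_nudity": BLOCK,    # blocked outright
    "toxic_or_hate": HIDE,      # hidden from public view
    "graphic_violence": WARN,   # shown behind a warning overlay
}

@dataclass
class Detection:
    category: str
    confidence: float

def moderate(detections: list[Detection], threshold: float = 0.9) -> str:
    """Return the most severe action warranted by the detections."""
    severity = {BLOCK: 3, HIDE: 2, WARN: 1, ALLOW: 0}
    action = ALLOW
    for d in detections:
        if d.confidence < threshold:
            continue  # ignore low-confidence detections
        candidate = POLICY.get(d.category, ALLOW)
        if severity[candidate] > severity[action]:
            action = candidate
    return action

print(moderate([Detection("graphic_violence", 0.95)]))   # warn
print(moderate([Detection("csam_or_nudity", 0.99),
                Detection("toxic_or_hate", 0.92)]))      # block
```

The key design point such a pipeline implies is that enforcement is tiered by severity: when multiple categories fire on the same upload, the strictest applicable action wins.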

Mayank Bidawatka, co-founder of Koo, said, “At Koo, our mission is to unite the world and create a friendly social media space for healthy discussions. We are committed to providing the safest public social platform for our users. While moderation is an ongoing journey, we will always be ahead of the curve in this area with our focus on it. Our endeavor is to keep developing new systems and processes to proactively detect and remove harmful content from the platform and restrict the spread of viral misinformation. Our proactive content moderation processes are probably the best in the world!”

The post Koo launches new safety features for proactive content moderation appeared first on Techlusive.
