December 24, 2024

OpenAI is working on a new tool to help you spot AI-generated images and protect you from deep fakes


You’ve probably noticed a few AI-generated images sprinkled throughout your social media feeds – and there are likely a few others you’ve scrolled right past that slipped by your keen eyes.

For those of us who have been immersed in the world of generative AI, spotting AI images is a little easier, as we develop a mental checklist of what to look out for.

However, as the technology gets better and better, it is going to get a lot harder to tell the difference. To address this, OpenAI is developing new methods to track AI-generated images and prove what has and has not been artificially generated.

According to a blog post, OpenAI’s proposed methods will add a tamper-resistant ‘watermark’ that tags content with invisible ‘stickers’, paired with a detection classifier. So, if an image was generated with OpenAI’s DALL-E generator, the classifier should flag it even if the image has been warped or saturated.
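OpenAI hasn’t released the classifier publicly, but the general idea is that the detector has to keep recognizing the hidden signal even after the image is edited. The Python snippet below is a minimal sketch of that robustness check, using Pillow for the edits; detect_dalle_image is a hypothetical stand-in for OpenAI’s classifier, not a real API.

```python
from PIL import Image, ImageEnhance

def detect_dalle_image(image: Image.Image) -> float:
    """Hypothetical stand-in for OpenAI's detection classifier.

    The real classifier is not publicly available; this stub only
    shows where it would slot in, returning a dummy confidence score.
    """
    return 0.0  # placeholder value, not a real prediction

def perturb(image: Image.Image) -> list[Image.Image]:
    """Apply the kinds of edits the detector is claimed to survive."""
    saturated = ImageEnhance.Color(image).enhance(2.0)       # boost saturation
    warped = image.resize((image.width // 2, image.height))  # crude squash/warp
    cropped = image.crop((0, 0, image.width // 2, image.height // 2))
    return [saturated, warped, cropped]

if __name__ == "__main__":
    original = Image.open("example.png")  # any local test image
    for variant in [original, *perturb(original)]:
        score = detect_dalle_image(variant)
        print(f"AI-generated confidence: {score:.2f}")
```

The point of the sketch is simply that a useful detector needs to report roughly the same verdict for the original image and all of its edited variants.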

The blog post claims the tool will be around 98% accurate at spotting images made with DALL-E. However, it will only flag 5-10% of pictures from other generators like Midjourney or Adobe Firefly.

So, it’s great for in-house images, but not so great for anything produced outside of OpenAI. While it may not be as impressive as one would hope in some respects, it’s a positive sign that OpenAI is starting to address the flood of AI images that are getting harder and harder to distinguish from the real thing.

This may not seem like a big deal to some, as a lot of AI-generated images are either memes or high-concept art that are pretty harmless. That said, there’s also a surge of people creating hyper-realistic fake photos of politicians, celebrities, people in their lives, and more besides, which could lead to misinformation being spread at an incredibly fast pace.

Hopefully, as these kinds of countermeasures mature, their accuracy will improve, and we’ll have a much more accessible way to double-check the authenticity of the images we come across in our day-to-day lives.

You might also like

The 5 subtle AI announcements Apple made at its big iPad 2024 launch event
OpenAI’s Sora just made another brain-melting music video and we’re seeing a theme
Could generative AI work without online data theft? Nvidia’s ChatRTX aims to prove it can