Microsoft, OpenAI, Meta and Google, among other US technology companies, have committed to the US Government to start marking content generated by their artificial intelligence (AI) systems, such as the popular ChatGPT and Bard chatbots. The White House announced the agreement in a statement shared this Friday, stressing that the companies have adopted the measure on a fully voluntary basis.

The group of companies, which also includes Amazon, Anthropic and Inflection, has also committed to the administration of President Joe Biden to carry out internal and external security testing of its AI systems before launch. This is something OpenAI failed to do with ChatGPT before releasing it late last year, as many AI experts have lamented in the months since.

In addition, the companies have stated that they will share information on how to reduce risks, increase investment in cybersecurity to protect their systems and user data, and prioritize research on the social risks that AI systems may pose, in order to avoid bias and discrimination, an area where these technologies have had problems for some time.

According to the information shared by the White House, the companies will work to develop a kind of watermark that makes it possible to recognize content generated by an artificial intelligence system, whether text, images or video. Thanks to this measure, the Government hopes that “creativity with AI will flourish and the dangers of fraud and deception will be reduced.”

For now, it is not clear exactly how the companies will mark the content, or whether any safeguard will prevent users from removing the mark. Nor has any specific deadline been set for its rollout.
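
To illustrate the general idea, below is a minimal sketch in Python of one approach to text watermarking proposed in academic research (a statistical “green list” scheme, in the spirit of Kirchenbauer et al., 2023). Everything here is a toy assumption for illustration, not any of these companies' actual method: the vocabulary, the SECRET_KEY, and the green_list, generate and detect helpers are all hypothetical.

```python
import hashlib
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # toy vocabulary (hypothetical)
GREEN_FRACTION = 0.5  # half the vocabulary is "green" at each step
SECRET_KEY = "watermark-demo-key"  # hypothetical shared secret

def green_list(prev_token: str) -> set:
    """Derive this step's green list from the previous token and the key."""
    digest = hashlib.sha256((SECRET_KEY + prev_token).encode()).digest()
    rng = random.Random(int.from_bytes(digest[:8], "big"))
    return set(rng.sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))

def generate(length: int) -> list:
    """Toy 'model': sample tokens, but always prefer green ones."""
    out = ["tok0"]
    for _ in range(length):
        out.append(random.choice(sorted(green_list(out[-1]))))
    return out

def detect(tokens: list) -> float:
    """Fraction of tokens that fall in their step's green list."""
    hits = sum(1 for prev, cur in zip(tokens, tokens[1:])
               if cur in green_list(prev))
    return hits / max(len(tokens) - 1, 1)

text = generate(200)
# ~1.0 for watermarked text; ~0.5 for text written without the key
print(f"green-token rate: {detect(text):.2f}")
```

Because anyone holding the key can check whether a suspiciously high share of tokens fall in the green lists, detection needs no access to the model itself; the open question raised above is how robust such a mark would be once a user paraphrases or edits the output.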

It should be remembered that cybersecurity experts have been warning for some time about the enormous potential of tools such as ChatGPT and Bard, or the Midjourney and DALL-E image generators, for large-scale disinformation, as well as their potential use for creating cyber-scams and even malicious code.

Over the past few months, several companies, including OpenAI, the creator of ChatGPT, have released tools aimed at recognizing AI-generated content. However, these solutions are far from reliable; their margin of error remains too large.
