To combat the proliferation of fake news, European Union officials have called for additional measures to increase transparency in artificial intelligence (AI) tools, including OpenAI’s ChatGPT. Vera Jourova, the European Commission’s vice president for values and transparency, emphasized the importance of labeling content generated by AI tools that have the potential to disseminate disinformation.
Jourova stated, “Signatories who have services with a potential to generate AI disinformation should put in place technology to recognize and clearly label such content to users.” This directive aims to ensure that users are aware of the origin of information they encounter online, reducing the risk of unwittingly engaging with misleading or false content.
Furthermore, Jourova highlighted the need for companies integrating generative AI into their services, such as Microsoft’s Bing Chat and Google’s Bard, to establish safeguards that prevent malicious actors from exploiting these technologies for disinformation purposes. The EU’s Code of Practice on Disinformation, established in 2018, serves as a voluntary, self-regulatory framework through which the tech industry commits to combating the spread of disinformation.
Prominent technology companies, including Google, Microsoft, and Meta Platforms (formerly Facebook), have already signed on to the strengthened 2022 version of the EU’s Code of Practice on Disinformation. Jourova stressed that these companies, along with others, should report on the implementation of new safeguards for AI by July. Twitter’s withdrawal from the code, however, has drawn criticism, and Jourova warned that the company can expect heightened regulatory scrutiny of its compliance with EU law.
These developments coincide with the forthcoming EU AI Act, a comprehensive set of guidelines that will regulate the public use of AI and the companies deploying it. While the official laws are expected to take effect in the next two to three years, European officials have urged generative AI developers to adopt a voluntary code of conduct in the interim.
The EU’s proactive stance reflects its commitment to ensuring that AI technologies are used responsibly, with adequate safeguards to combat the spread of disinformation. By placing the onus on companies to label AI-generated content, the European Union aims to empower users to make informed decisions while navigating online platforms.
The implementation of transparency measures for AI tools has broader implications for the global tech industry. As Europe takes the lead in regulating AI, other jurisdictions may be influenced to adopt similar guidelines. This shift could foster a more responsible and transparent approach to AI development worldwide, promoting trust and accountability in the digital age.