
US Securities and Exchange Commission (SEC) chair Gary Gensler has issued a stark warning about the potential dangers of artificial intelligence (AI) in the financial markets. His research points to a worrisome gap: existing financial laws grant AI developers broad latitude over how their models behave in markets, a freedom he warns could destabilize the financial system and even help trigger a future crisis.
Gensler’s primary concern is the opacity of AI decision-making, particularly in deep learning algorithms, which could fuel market instability and volatility as firms vie to deliver the highest client returns.
“Gensler’s concerns regarding AI’s impact on financial markets are not unfounded,” says San Francisco-based attorney Mark Perlow. While acknowledging AI’s transformative potential, Gensler emphasizes the pressing need to close the accountability gap in financial decisions made by AI systems.
The SEC chair’s interest in AI dates back to 1997, when he watched IBM’s Deep Blue defeat Russian chess champion Garry Kasparov. The topic resurfaced in 2019, during his tenure teaching at the Massachusetts Institute of Technology, and in a 2020 paper titled ‘Deep Learning and Financial Stability,’ co-authored with MIT researcher Lily Bailey, Gensler underscored the dangers of unchecked deep learning in finance, where developers are free to build seemingly ‘objective’ models whose behavior could nonetheless undermine market integrity.
The spotlight on AI’s impact intensified after consulting firm McKinsey released a report projecting the technology’s transformative potential in the job market. According to McKinsey, generative AI and related technologies could automate tasks that currently absorb 60-70% of employees’ working time, with half of today’s work activities potentially automated by around 2045, roughly a decade earlier than the firm’s prior estimate. The revised forecast underscores the rapid advance of AI and its potential impact across industries.
In other spheres, the US military is actively testing AI for military tasks, with promising early results. Large language models (LLMs) such as OpenAI’s ChatGPT and Google’s Bard have reportedly handled sensitive information efficiently in these trials, raising confidence in AI’s potential.
Yet security breaches involving AI have emerged as a significant concern. As businesses increasingly rely on AI chatbots like ChatGPT to streamline operations, inadvertent exposure of sensitive company data has become a growing problem. A recent report from cybersecurity firm Group-IB found more than 100,000 compromised ChatGPT credentials for sale on dark web marketplaces, underscoring the need for robust security measures to protect sensitive information in an AI-driven landscape.
Gensler’s warnings about AI’s potential to destabilize financial markets underscore the urgency of a well-regulated AI landscape. As AI’s transformative possibilities unfold, the challenge is to balance harnessing its capabilities with safeguards against misuse and security breaches. Responsible AI development, paired with proactive protection of sensitive data, will be pivotal in unlocking AI’s true potential while mitigating its risks.