
In an era marked by an unprecedented surge in generative artificial intelligence (AI) tools such as ChatGPT, Bard, and Midjourney, a recent report reveals a growing chasm of distrust between consumers and the companies using this technology. With consumers increasingly concerned about data security risks, unethical practices, and bias, the spotlight is now on the ethics, transparency, and safety of AI development.
The “State of the Connected Customer” report, recently released by Salesforce, stands as a stark testament to the prevailing skepticism among consumers. Drawing on responses from more than 14,300 business buyers and consumers, the survey finds that a mere 13% of consumers completely trust companies to use AI ethically. On the flip side, 10% completely distrust corporations' deployment of generative AI.

Source: Salesforce survey
As the generative AI landscape evolves at breakneck speed, the Salesforce study highlights palpable concerns surrounding the technology, chief among them data security, the ethical application of AI, and potential bias. Notably, 89% of consumers say it is important to know whether they are interacting with a human or with AI, and 80% want a human to validate AI-generated outputs, underscoring the critical role people must play in the loop.
In a bid to assuage these concerns, major players in the AI domain, including tech behemoths Google and OpenAI, have pledged to pursue safe and transparent AI development. These commitments, which extend to data protection, reinforce the industry's collective acknowledgment that consumer trust is paramount.
Paula Goldman, Chief Ethical and Humane Use Officer at Salesforce, offered a cogent perspective on bolstering consumer trust. She emphasized that the focus should extend beyond data collection to safeguarding customer data, and that robust practices must be put in place to protect the integrity of that data and, by extension, customer confidence.
Nor is this renewed emphasis on ethics and transparency confined to the private sector. In June, US Senators proposed bipartisan AI bills advocating human oversight of consequential decision-making and clear disclosure whenever AI is used for public-facing interactions. The UK, for its part, has published an AI regulation white paper and plans to host a global summit on AI safety later this year.
Yet, amid these steps toward reassurance, a concerning trend has emerged. Businesses have rapidly adopted generative AI tools such as ChatGPT to enhance productivity, but in the process a significant number of employees have inadvertently divulged sensitive company data to these AI chatbots.
Research from cybersecurity firm Group-IB accentuates the risk, revealing that more than 100,000 compromised ChatGPT credentials, harvested by info-stealing malware, have appeared for sale on the dark web. The figures peaked at a record 26,802 compromised accounts in May 2023 alone, underscoring the need for stringent data protection measures.
While the generative AI landscape promises innovation and efficiency, ethics, transparency, and data security demand equally immediate attention. As the generative AI boom surges forward, bridging the trust gap remains a pivotal challenge, one that calls for ethical commitments as much as technological advances.
Amid the ongoing discourse, the International Labour Organization (ILO) recently released a study examining the potential impact of generative AI on jobs. While the report suggests that the workforce might not experience a dramatic upheaval in the short term, certain sectors, such as administrative and customer service roles, could witness substantial transformations.
The ILO’s study highlights how susceptible various job tasks are to automation: 24% of clerical tasks are highly exposed, and a further 58% face medium-level exposure. Roles such as typists, travel consultants, and bank tellers are at the forefront of automation risk. Notably, the report finds that women, who are disproportionately represented in administrative positions, could bear the brunt of job displacement driven by the adoption of generative AI.
As this new era of AI advancement unfolds, society finds itself grappling with multifaceted concerns. Balancing innovation with ethics, and harnessing the potential of AI while preserving consumer trust, remains an intricate dance, one that will shape the trajectory of technology and humanity’s relationship with it.