Public trust in artificial intelligence (AI) is on the decline, despite its potential to revolutionize sectors from healthcare to environmental sustainability. Data from the recently published 2024 Edelman Trust Barometer highlights a worrying trend: public trust in companies developing and deploying AI has fallen from 61% to 53% over the past five years. In the US, the decline is even steeper, plummeting from 50% to just 35%.
This erosion of trust presents a major challenge for the AI industry. Companies like Google, OpenAI, and Microsoft are pouring resources into AI research and development, with promises to solve complex problems and improve our lives. However, the public perception of AI seems to be diverging from these optimistic narratives.
The PR Risks of AI
Despite its transformative potential, AI grapples with a persistent public relations (PR) problem. As businesses increasingly integrate AI into their operations, they must strike a delicate balance between harnessing its capabilities and mitigating the risks it poses to public perception.
From misinformation to copyright infringement, AI’s PR journey is fraught with pitfalls. Barely a day goes by without another major brand having to manage negative stories stemming from its experimentation with, or deployment of, AI. From Under Armour to Lego, the list of international companies with problematic corporate AI experiences continues to grow.
Navigating the AI PR landscape requires ethical guidelines, proactive communication, and a keen understanding of AI’s strengths and pitfalls.
Why the Disconnect?
Several factors contribute to the public’s waning trust in AI:
Misinformation and misperceptions
Generative AI tools, such as OpenAI’s DALL-E, have the remarkable ability to create novel content, including images and text. However, this power comes with potential harm. The ease of spreading misinformation through AI-generated content poses a significant risk to businesses and individuals alike.
It has never been easier for businesses to generate imagery. However, companies need to ensure that staff understand the misinformation and misperceptions that GenAI content can create. Where customers feel they have been misled by brands and companies using imagery that fails to match reality, trust can erode rapidly.
Job displacement anxieties
Automation powered by AI is a double-edged sword. While it can increase efficiency and productivity, it also raises fears of widespread job losses.
Where brands are seen to overuse AI at the expense of their human stakeholders, this too can damage reputations.
From film & television, media, and sales & marketing to manufacturing and agriculture, there is scarcely an industry where the impact of AI is not beginning to be felt. With that impact comes significant concern about job displacement.
The problem that businesses at times fail to understand is that these stakeholders are ultimately their customers. With consumer trust in AI falling, mismanagement of these solutions has the potential to damage the business down the line.
The need to nurture AI-human coexistence in the workplace has never been more important.
Timeliness and accuracy
In the fast-paced world of PR, timeliness and accuracy are paramount. AI tools like ChatGPT can provide real-time information, but not all AI models are equally up-to-date. Businesses must exercise caution when relying on AI for critical data. Imagine a crisis communication scenario where an AI-powered chatbot disseminates outdated or incorrect information. The consequences could be severe, leading to confusion, customer mistrust, and potential reputational damage. PR professionals must strike a balance between leveraging AI’s efficiency and ensuring the accuracy of the information they share.
Copyright infringement
AI-generated content can inadvertently violate copyright laws. Businesses often use AI to create marketing materials, advertisements, and social media posts. However, if an AI model produces content that closely resembles existing copyrighted material, it can lead to legal battles. For example, an AI-generated logo resembling a well-known brand’s logo could result in a lawsuit. Companies must ensure that their AI systems are trained not to produce plagiarized material and respect intellectual property rights.
Bias and discrimination
AI algorithms learn from existing data, which can perpetuate biases present in that data. Deploying biased AI models can have serious consequences. Imagine a company using an AI chatbot that inadvertently exhibits gender or racial bias. Customers who experience discriminatory interactions may publicly voice their dissatisfaction, leading to negative publicity and loss of trust. To mitigate this, businesses must rigorously test AI systems for bias and actively work to reduce discriminatory outcomes.
Lack of transparency
When businesses fail to disclose that AI powers certain processes (e.g., chatbots), customers may feel deceived. Transparency is essential to maintain credibility. For instance, if a customer interacts with an AI-driven virtual assistant without knowing it, they may question the authenticity of the communication. Clear communication about AI’s role and limitations is crucial to building and preserving trust with stakeholders.
Ethical missteps
Several high-profile cases of AI bias and algorithmic discrimination have shaken public confidence. For instance, Amazon’s Rekognition facial recognition tool was found to exhibit racial bias, leading to calls for stricter regulation.
The Role of Responsible AI in Mitigating PR Risks
The good news is that there is a growing movement advocating for Responsible AI development and deployment. This approach emphasizes fairness, accountability, and transparency in AI systems. Organizations that adopt a Responsible AI framework are better placed to avoid the PR risks associated with mismanaged AI deployments.
By adopting organization-wide Responsible AI policies, businesses can limit their exposure to AI-generated risks while cultivating a symbiotic relationship between corporate automation and human factors:
Embedding Fairness in Design
AI algorithms should be trained on diverse datasets to minimize bias. Efforts should be made to identify and mitigate potential biases throughout the development lifecycle.
- Data Diversity: The data used to train AI models is the foundation upon which decisions are made. It’s crucial to ensure these datasets are diverse and representative of the real world. This may involve actively seeking out data that reflects different demographics, ethnicities, and genders. Techniques like data augmentation can also be used to artificially expand dataset diversity.
- Algorithmic Auditing: Regularly auditing AI models for bias is essential. Techniques like fairness metrics and bias detection tools can help identify potential issues before deployment (a minimal sketch follows this list).
- Human Oversight: While AI can be powerful, human oversight remains crucial. Implementing safeguards like human review loops for critical decisions can help mitigate bias and ensure fair outcomes; the sketch below includes a simple confidence-based review gate.
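To make these ideas concrete, here is a minimal Python sketch of a pre-deployment bias audit and a human-review gate. Everything in it is illustrative: the function names, the 0.8 confidence threshold, and the toy data are assumptions made for the example, and dedicated toolkits such as Fairlearn or AIF360 provide more rigorous implementations of the same checks.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap in positive-outcome rates between two groups.

    A value near 0 suggests similar treatment on this metric;
    larger gaps warrant investigation before deployment.
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def route_for_review(probabilities, threshold=0.8):
    """Flag low-confidence predictions for human review.

    Returns a boolean mask; True means a person should check
    the decision before it is acted on.
    """
    return np.max(probabilities, axis=1) < threshold

# Toy audit on held-out predictions (hypothetical data).
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # model decisions
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # protected attribute
print(demographic_parity_difference(y_pred, group))  # 0.5 -> investigate

# Confidence-based human review gate.
proba = np.array([[0.95, 0.05], [0.55, 0.45]])
print(route_for_review(proba))  # [False  True] -> second case goes to a human
```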
Increased Scrutiny and Explainability
Regulatory frameworks and clear guidelines are needed to ensure that AI systems are auditable and that their decision-making processes can be explained. Moving towards greater explainability requires:
- Explainable AI (XAI) Techniques: XAI methods such as Local Interpretable Model-Agnostic Explanations (LIME) can help explain how an AI model arrives at a particular decision. Making these explanations accessible, even to non-technical users, fosters trust and understanding (see the example after this list).
- Standardized Reporting: Developing standardized reporting mechanisms for AI systems can provide valuable insights into their performance and potential biases. This information can then be used to improve the models and demonstrate responsible development practices to the public.
- Stakeholder Engagement: Open dialogue with stakeholders, including regulatory bodies, industry experts, and the public, is crucial. By fostering open communication and actively addressing concerns, the AI industry can build trust and ensure responsible development practices are at the forefront.
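As one concrete example of the XAI techniques mentioned above, the sketch below uses the open-source lime package with a scikit-learn classifier. The dataset and model are stand-ins for whatever system is actually being explained; the point is the shape of the workflow, not this particular model.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train a simple classifier as a stand-in for a production model.
data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# LIME perturbs one instance and fits a local linear surrogate model,
# producing per-feature weights that approximate the model's behavior
# in the neighborhood of that instance.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=5
)

# Human-readable feature contributions for this one prediction.
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

In practice, publishing plain-language summaries of such explanations, rather than raw weights, is usually what makes them genuinely accessible to non-technical audiences.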
Transparency and Communication
Open communication about the capabilities and limitations of AI can help manage expectations and address public concerns.
A study of 2,000 U.K. consumers, conducted for PRWeek by YouGov, found that consumers are concerned about AI and want brands to disclose when it is being used. Extending that transparency to organization leaders, clients, employees, and other audiences also demonstrates a commitment to innovation.
PR professionals should advocate for transparency in AI adoption, whether for content development, chatbots, or other uses.
Building a Future with Trust
AI has the potential to be a powerful tool for positive change. By prioritizing Responsible AI practices, companies can rebuild public trust and ensure that AI development is aligned with ethical principles. This collaborative effort between industry, policymakers, and the public is crucial to usher in an era of responsible AI that benefits everyone.
This content was generated with the assistance of AI tools. However, it has undergone thorough human review, editing, and approval to ensure its accuracy, coherence, and quality. While AI technology played a role in its creation, the final version reflects the expertise and judgment of our human editors.
ethicAil – Building Trust in AI