New research has found that lack of trust and doubts about reliability are two of the most significant challenges facing AI adoption among workers, highlighting significant concerns that need to be addressed for organisations to reap the benefits of AI.
The YouGov research surveyed more than 2,100 working adults in the U.S. and UK in November 2024 who use digital devices for their jobs, in a bid to discover attitudes toward AI adoption within the workplace. Of those questioned, 30% did not trust the accuracy of AI-generated responses that they received.
The attitudes of the workers surveyed highlight broader concerns about the trust required for AI solutions to truly deliver value. These challenges not only impact companies providing AI solutions but also extend to businesses aiming to realise the benefits of these technologies. As organisations increasingly adopt or plan to integrate AI-driven tools—such as large language models (LLMs) and AI agents—into their operations and services, building trust and ensuring reliability will be essential for achieving long-term success.
Interestingly, more than half of those surveyed (58%) were already using AI agents for various tasks, demonstrating the rapid adoption occurring in workplaces across all industries.
Workers Highlight Trust and Reliability as Key Concerns Relating to AI Adoption
The results of the study conducted for Pegasystems, a software development, marketing, and licensing company, highlight the persistent concerns that workers using AI have when it comes to adopting the technology:
- 33% of those surveyed were worried about the quality of work that is produced by AI.
- 47% said there is a lack of human intuition and emotional intelligence.
- 30% did not trust the accuracy of AI-generated responses that they received.
- 34% worried that AI-produced work isn’t as good as their own, which is tied to concerns about the technology’s accuracy and reliability.
“This research shows that many still have reservations [about the technology], and it’s up to enterprise leaders to strategically and thoughtfully incorporate the technology to help ensure adoption,” said Don Schuerman, chief technology officer of Pegasystems, in a statement provided by the company’s public relations representative.
Why Trust Matters in AI Adoption
Trust in AI systems encompasses multiple dimensions, including accuracy, fairness, transparency, and accountability. Workers and companies alike need to feel confident that AI solutions will deliver consistent and unbiased results without introducing new risks. Reliability is equally essential because businesses depend on AI tools to perform critical functions, from data analysis to customer service, and any failure can disrupt operations and damage corporate credibility.
In comments for a Forbes article, attorney Jonathan Feniak suggested that, “Until AI can guarantee ethical decision-making and 100% compliance, every business should adopt it carefully, audit it rigorously, and override it as needed. We can’t just let AI run our businesses for us, but we can certainly use it—with strict monitoring—to augment our efforts.”
Examples of Trust Issues in AI Outputs
The past couple of years have seen many incidents where AI outputs have led to significant trust issues for businesses attempting to roll out AI-driven services. In multiple cases, recruitment AI systems have been found to perpetuate biases by favouring candidates based on gender or race due to biased training data. Similarly, AI tools used for performance monitoring have incorrectly flagged employees as underperforming, creating unnecessary tension and mistrust among staff.
In other cases, the adoption of AI-powered chatbots has presented an array of instances where the technology has provided inaccurate or inappropriate responses to customers, leading to reputational damage for businesses.
These examples demonstrate the importance of ensuring that AI systems are transparent, unbiased, and rigorously tested to avoid eroding trust within organisations.
Developing Corporate Responsible AI Policies
A well-designed AI policy and process should define organisational expectations and designate individuals or teams responsible for enforcement. It must address wide-ranging concerns, including copyright infringement—whether the company is using copyrighted materials or inadvertently violating others’ rights—as well as potential harms or losses to stakeholders. To be effective, the policy should incorporate detailed methodologies for handling these issues. Additionally, ongoing communication and training programs are essential to ensure that all employees, from executives to staff, are informed about the policies and equipped to implement them effectively.
Steps Companies Can Take to Build Trust and Ensure Reliability
To address the challenges of trust and reliability, companies can take proactive measures to ensure that their AI solutions meet necessary standards:
- Transparency and Explainability: Companies should ensure that AI systems are transparent and provide clear explanations for their decisions and actions. This involves documenting the underlying data, models, and algorithms to allow for auditability and greater understanding by non-technical stakeholders.
- Rigorous Testing and Validation: Before deploying AI tools, businesses should perform rigorous testing to ensure that the models are reliable and unbiased. This includes testing on diverse datasets to ensure fairness across different demographics and scenarios.
- Ethical Guidelines: Develop ethical AI guidelines to ensure that the technology aligns with the company’s values and avoids causing harm. Ethical AI frameworks can guide the design, deployment, and use of AI tools.
- Regular Audits and Monitoring: Implement regular audits to monitor AI systems for accuracy, bias, and performance. Continuous monitoring ensures that the tools remain reliable and can adapt to changing data environments or operational needs.
- Training and Education: Provide employees with training on how AI works, its limitations, and best practices for integrating it into their workflows. This can help address misconceptions and foster greater AI trust and adoption among workers.
- Human Oversight and Accountability: Maintain a system of human oversight where employees can intervene or override AI decisions when necessary. This ensures that final accountability rests with human professionals rather than automated systems.
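One way the human-oversight step above is often put into practice is confidence-based routing: AI decisions the model is less sure about are escalated to a human reviewer rather than acted on automatically. The sketch below illustrates that pattern; the threshold value, field names, and `AIDecision` structure are illustrative assumptions, not any specific product's API.

```python
from dataclasses import dataclass

@dataclass
class AIDecision:
    """A single AI-generated decision with a model-reported confidence score."""
    item_id: str
    recommendation: str
    confidence: float  # 0.0 to 1.0

# Hypothetical policy threshold: decisions below it go to a human reviewer.
CONFIDENCE_THRESHOLD = 0.85

def route_decision(decision: AIDecision) -> str:
    """Return 'auto' for high-confidence decisions, 'human_review' otherwise."""
    if decision.confidence >= CONFIDENCE_THRESHOLD:
        return "auto"
    return "human_review"

decisions = [
    AIDecision("A-1", "approve", 0.97),
    AIDecision("A-2", "reject", 0.62),
]

for d in decisions:
    # Low-confidence items are flagged so accountability stays with a person.
    print(d.item_id, route_decision(d))
```

Where the threshold sits is a policy choice, not a technical one: lowering it automates more work, while raising it keeps more decisions with human professionals.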
Implementing and Monitoring Trust Strategies
Successfully implementing these strategies requires organisations to take a structured approach:
- Define Clear Goals: Set specific objectives for what the AI system should achieve and ensure these align with broader organisational priorities.
- Assign Responsibility: Designate a Responsible AI Officer or team to oversee implementation, compliance, and updates to AI-related policies.
- Use Metrics and KPIs: Establish key performance indicators (KPIs) to measure trust and reliability, such as accuracy rates, error margins, and user satisfaction scores.
- Encourage Feedback: Create channels for employees to report concerns or provide feedback on AI systems, fostering a culture of continuous improvement.
- Iterate and Adapt: AI systems and policies should be reviewed and refined periodically to address new challenges, incorporate emerging best practices, and adapt to evolving regulatory landscapes.
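To make the KPI step above concrete, a minimal sketch of computing trust metrics from logged AI interactions might look like the following. The record fields (`correct`, `satisfaction`) are illustrative assumptions; real logging schemas will vary by organisation.

```python
def trust_kpis(records):
    """Compute simple trust/reliability KPIs from logged AI interactions.

    Each record is a dict with (hypothetical fields):
      - 'correct': bool, whether a human reviewer judged the output accurate
      - 'satisfaction': int, user rating from 1 to 5
    """
    total = len(records)
    if total == 0:
        # No data yet: return empty metrics rather than dividing by zero.
        return {"accuracy_rate": None, "error_rate": None, "avg_satisfaction": None}
    correct = sum(1 for r in records if r["correct"])
    accuracy = correct / total
    avg_sat = sum(r["satisfaction"] for r in records) / total
    return {
        "accuracy_rate": round(accuracy, 3),
        "error_rate": round(1 - accuracy, 3),
        "avg_satisfaction": round(avg_sat, 2),
    }

log = [
    {"correct": True, "satisfaction": 5},
    {"correct": True, "satisfaction": 4},
    {"correct": False, "satisfaction": 2},
    {"correct": True, "satisfaction": 4},
]
print(trust_kpis(log))
# {'accuracy_rate': 0.75, 'error_rate': 0.25, 'avg_satisfaction': 3.75}
```

Tracking metrics like these over time gives the designated Responsible AI Officer or team an objective basis for the periodic reviews described above.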
By adopting these steps, companies can foster an environment of trust and reliability, ensuring that AI technologies deliver their full potential while addressing worker concerns. In doing so, they can position themselves as leaders in the AI-driven future.
This content was generated with the assistance of AI tools. However, it has undergone thorough human review, editing, and approval to ensure its accuracy, coherence, and quality. While AI technology played a role in its creation, the final version reflects the expertise and judgment of our human editors.