Responsible AI

Why OpenAI’s Teddy Bear Ban Signals a Critical Juncture for AI Toys

The recent decision by OpenAI to suspend toymaker FoloToy from using its ChatGPT models, after FoloToy's AI-powered teddy bear was found dispensing highly inappropriate and dangerous advice to children, has sent a stark warning to an industry, worth $445 billion annually, that relies on consumer trust. The OpenAI decision follows a string of […]

The Real Question for AI in Regulated Industries: How to Make It Responsible, Reliable, and Compliant

In high-stakes, regulated industries, the promise of artificial intelligence (AI) is undeniable: faster decisions, deeper insights, and greater efficiency. But when lives, compliance, and critical operations are on the line, the question isn’t whether to use AI, but how to use it responsibly. Cutting-edge algorithms and powerful language models

UK DWP Halts AI Projects Over Transparency Concerns

The UK’s Department for Work and Pensions (DWP) has recently discontinued several AI projects amid growing concerns regarding transparency, data privacy, and responsible AI implementation. This decision comes at a time when the UK government is actively promoting AI advancements through its AI Opportunities Action Plan, highlighting a significant tension between innovation and ethical governance.

Why Trust and Reliability are Critical to AI Adoption in the Workplace

New research has found that lack of trust and doubts about reliability are two of the most significant challenges facing AI adoption among workers, highlighting concerns that need to be addressed if organisations are to reap the benefits of AI. The YouGov research surveyed more than 2,100 working adults in the U.S. and UK in

The Urgent Need for Responsible AI: Why Fewer Than 2% of Companies Are Fully Prepared

As artificial intelligence rapidly transforms industries, many organisations are exploring its potential to reshape business processes and boost efficiency. In a recent report, Accenture found that 90% of companies are actively investigating AI or generative AI capabilities. However, the findings also indicate a critical gap in AI readiness, with fewer than 2% of companies investing

Study Reveals AI Models’ Growing Tendency to Guess Rather than Admit Ignorance

A recent study published in Nature has revealed that as large language models (LLMs) become more sophisticated, they seem increasingly prone to overconfidence, with newer, larger models less likely to admit their ignorance. Researchers from the Universitat Politècnica de València subjected the latest versions of BigScience’s BLOOM, Meta’s Llama, and

European Commission’s New AI Watchdog Appoints Expert Panel

The European Commission has announced the appointment of 13 experts who will be responsible for drafting a Code of Practice for General Purpose Artificial Intelligence (GPAI), under the AI Act. The purpose of the code is to serve as a guideline for companies developing and deploying generative AI models like ChatGPT and Google Gemini. Implementing

Brazil Suspends Meta’s AI Privacy Policy Amid Concerns Over Data Use

Brazil’s National Data Protection Authority (ANPD) has taken a decisive step in the ongoing global debate over data privacy and artificial intelligence. As reported by Reuters, the ANPD has suspended Meta’s new privacy policy, which allowed the use of personal data to train generative AI systems. This move highlights the increasing scrutiny technology companies face

AI has a PR Problem: Building Trust in an Age of Automation

Public trust in artificial intelligence (AI) is on the decline, despite its potential to revolutionize sectors from healthcare to environmental sustainability. Data from the recently published 2024 Edelman Trust Barometer highlights a worrying trend, with public trust in companies developing and deploying AI falling significantly over the past five years, from 61% to

British Businesses Told EU AI Law Likely Covers You, So Innovate Responsibly

The UK recently released its “guidance” for governing AI development and deployment, emphasizing a light-touch, outcome-based approach. However, legal experts warn that the upcoming EU AI Act will likely cover most British businesses, and that compliance with it will likely satisfy UK requirements as well. The EU AI Act, which proposes a tiered system of bans, restrictions, and safeguards for AI, gained significant backing
