The Truth about AI

Jottings on AI

Jo Vertigan, MD of Obidos Consulting, addresses some of the burning questions around morality and accuracy when it comes to AI...

Jo Vertigan
MD | Obidos Consulting

Artificial Intelligence has been on a journey for many years

At Obidos Consulting, our work in the AI space has focused on digital harms. For example, we have helped the police understand how computer vision can support criminal investigations, we have reviewed the effectiveness of AI-powered content moderation for a UK regulator, and we have explored whether AI can be used to detect harm in non-verbal communication such as memes and emojis.
At this stage, we don’t have any definitive answers about the use of AI for attention insights. But we do have some important questions that we feel are worth considering.


How accurate is it? The Cronosian Paradox

AI depends on quality data to “train” its model, but when that data is incomplete (as many data sets are), the patterns it finds may be misleading. Worse, some AI systems “hallucinate”, confidently inventing an answer when the data does not contain one.
A recent academic study highlighted a self-defeating danger: as an AI model is exposed to more AI-generated data, it degrades over time. The more machine-made content it digests, the more errors it produces, and the diverse, rich tapestry of human viewpoints that lay at the heart of the initial training is gradually lost. This creates what I call the Cronosian Paradox, after the mythical consumption habits of the Titan Cronus, who devoured his own children.
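To make the mechanism concrete, here is a toy simulation (a sketch only, with invented numbers, not a real training pipeline). The “model” simply learns word frequencies by counting, and each new generation is trained on text sampled from the generation before. Rare words steadily disappear:

```python
import random
from collections import Counter

def train(corpus):
    """'Train' a toy model: estimate word probabilities by counting."""
    counts = Counter(corpus)
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

def generate(model, n):
    """'Generate' a synthetic corpus by sampling from the model."""
    words = list(model)
    weights = [model[w] for w in words]
    return random.choices(words, weights=weights, k=n)

random.seed(1)

# Generation 0 trains on "human" text with a long tail of rare words.
vocabulary = [f"word{i}" for i in range(200)]
zipf_weights = [1 / (rank + 1) for rank in range(200)]
human_corpus = random.choices(vocabulary, weights=zipf_weights, k=2000)
model = train(human_corpus)

# Each later generation trains only on the previous generation's output.
for generation in range(1, 11):
    synthetic_corpus = generate(model, 2000)
    model = train(synthetic_corpus)
    print(f"generation {generation}: {len(model)} distinct words survive")

# A word that is never sampled vanishes from the next model for good, so the
# vocabulary can only shrink: the rich tapestry of the original training data
# is consumed, generation by generation.
```

Real systems are vastly more complex, but the one-way door is the same: whatever the model fails to reproduce, its successors can never learn.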


How ethical is it?

In the mad rush to use platforms like ChatGPT and Midjourney, we need to think carefully about ethics. An organisation operating AI-powered tools needs to consider whether the outputs of the system are fair, or whether certain groups might be disadvantaged (for example, by predictive policing) because of a bias in the system.
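One basic screening check is demographic parity: compare the rate of favourable outcomes across groups and flag large gaps (the “four-fifths rule” used in US employment law is a common rule of thumb). The sketch below uses invented data; a real audit would use the system’s actual decisions and appropriate protected-group labels.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Rate of favourable outcomes per group, from (group, outcome) pairs."""
    totals = defaultdict(int)
    favourable = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favourable[group] += outcome
    return {group: favourable[group] / totals[group] for group in totals}

# Invented decisions: 1 = favourable outcome, 0 = unfavourable.
decisions = ([("group_a", 1)] * 80 + [("group_a", 0)] * 20
             + [("group_b", 1)] * 50 + [("group_b", 0)] * 50)

rates = selection_rates(decisions)
ratio = min(rates.values()) / max(rates.values())
print(rates)                                   # {'group_a': 0.8, 'group_b': 0.5}
print(f"disparate impact ratio: {ratio:.2f}")  # 0.62, below the four-fifths threshold
```

A check like this cannot prove a system is fair, but a failing ratio is a clear signal that the outputs deserve closer human scrutiny.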
There are other ethical considerations too: is a senior executive accountable for outputs; is the system being used transparently; can stakeholders understand how decisions are made; and for people negatively affected by an AI’s decision, do they have a clear route to disagree or gain redress? The UK government is proposing an ethical framework for AI that covers these and other issues.


Putting in the Guardrails

Aleks Krotoski, who presents the fantastic Digital Human podcast, observed many years ago that we need far more input from a greater range of voices across society. She suggested that physically locating computer science teams at one end of the campus and the humanities at the other is an outdated way of thinking. Perhaps it is time for philosophers and ethicists to contribute to this journey, though as a philosophy graduate I am perhaps biased.

We have helped and supported a very wide range of clients over more than a decade, from The FA, BAE Systems and The Jockey Club to GCHQ, Marshalls of Cambridge and the National Crime Agency. Our strength is in helping clients understand the intersection between emerging technologies and their organisations, looking at the opportunities but also the threats such technological change can bring.

“So, yes, let’s embrace this brave new world. But it’s early days: we need to tread carefully and consider not only how we reap the rewards but also how we sidestep the inevitable, and significant, harms that AI may bring.”

The Truth About New AI Series