
Hello

Welcome to VIA.

In order to move forward we have to take many steps. I see life as a series of via points.

So that’s why I created VIA: a collection of content designed to share with you what I have discovered about personal development. To focus on our own growth we need to consciously plan those steps, and I hope that this content can help you do exactly that.

Interrogate, Interpret, Innovate: The Human approach to AI


AI is growing. In the last 12 months it has become an increasingly popular topic, with people exploring what they can do with AI. But more recently I’ve started to wonder if this is the right question. Whilst AI’s capabilities are likely to become increasingly advanced and widespread, what are we doing to ready ourselves to work with the AI of the future?

AI cannot think. It may appear clever, but it lacks intelligence. It works by identifying and replicating patterns but is incapable of truly original thought. If someone came for a job interview with these credentials you wouldn’t hire them. Except that’s exactly what we do when we deploy AI in the workplace. Only this time we have the luxury of not having to pay a salary or find them a desk. But much like any human employee we should ensure their role is clearly defined. And we should work on our own abilities to form a healthy relationship with AI.

Does this tip over into robot-world? No. I’m not suggesting we think of our relationship in the same way we connect with other humans. But we should utilise some of the same skills. And a vital skillset to consider is trust.

David Wilkinson, Editor-in-Chief of the Oxford Review, has explored this area, comparing levels of trust in AI with human-to-human trust. The “Trust Index” sees us look at trust on a scale of 0–1. When interacting with another human, a typical trust index score is around 0.65. This can increase or decrease depending on many factors – bias, likeability and tenure, for example – but on the whole, this is our starting point. So how does this compare to AI? The trust index score for a human using AI is 0.9. We’re hardwired to trust AI technology over and above human interactions, despite our more rational awareness of its flaws. I found this pretty shocking.

Perhaps even more staggering is that AI’s trust in its own ability is as high as 0.99.

Placing such high levels of trust in a tool is not wise. Suggestions that it can shortcut thinking may be true, but it cannot replace it. An over-reliance on AI solutions could lead to us building houses on foundations of sand. But to suggest that we will not use AI as a prompt to further thinking would be naïve. AI tools are accessible, cheap and rapid approaches to our (current and) future ways of working. This is why we need not ask what work AI can do for us, but what work we can do with AI.

In the “Human Advantage” episode of “Brainy podcasts”, David suggests we need to apply more metacognition when using AI. In short, we need to think about our thinking. Relying on AI to find solutions without applying the appropriate interrogation, interpretation and innovation is, in my view, not using AI to its full advantage. AI can provide an array of ideas, but to create solutions AI needs to be channelled through the human brain. Alongside initiatives to upskill in the use of AI, we need to teach critical thinking.

Critical thinking is a term I used to associate only with academic work. It is of course highly encouraged in universities, where good-quality work requires robust data sources and the ability to assess approaches from multiple angles. Being able to weigh up pros and cons, look at all sides of the coin and then use this critical thinking to reach a solution enhances the work of academics. But outside of studies, it is a healthy approach in all areas of life. Applied to AI, it allows us to challenge our pre-built trust in technology and use our more advanced cognition to truly decide the right approach. We need to question the data behind AI. We need to decide for ourselves what it truly means and then take our thinking forward to meaningful, robust solutions. Interrogate, interpret and innovate.

When working human to human, enhancing our emotional intelligence abilities can directly lead to better relationships. Can applying the same principles also pay dividends with AI?

Self-awareness of our own strengths and vulnerabilities can help us identify where the risks of over-reliance on AI may lie. Before we use AI, we should consider that the greater our understanding of a topic, the greater our ability to interpret and innovate. And this should determine the role of AI as an initial spark of inspiration. But where we are weak in a topic, we must increase our ability to interrogate, to avoid taking AI at face value. This may include collaboration with, dare I say, more humans, not just more technology.

Much like emotional intelligence principles, we should also seek to understand the AI tools we use. The more we know about how AI works the better we’ll be able to understand how to get the most out of it. This doesn’t mean we have to drown ourselves in complex technology speak, but it also doesn’t mean we stay in the shallow end either. If you hired a person you’d get to know how they worked. We need to be brave enough, and responsible enough, to do the same with AI.

AI will continue to create new ways of working, and great successes, but it won’t be without its pitfalls. Whilst many fear that AI will take roles away from humans, I believe that now, more than ever, is the time to invest in the human abilities that AI simply cannot replicate.   

VIA View: Awakened Leaders


Why Gene Kranz's speech may still be the best example of leadership decades after Apollo
