Episode 4

Expert-led podcast discussing trending articles and news in the AI and attention spaces.

 

 


AI smarter than humans?


AI has taken the entire world by storm: whether you live under a rock or sit at the top of the tech world, you know about AI. How much you know is the real question, which is what the AI Policy Institute set out to discover when it commissioned YouGov to survey 1,118 Americans.

They found that 63% of respondents say that regulation should actively prevent AI superintelligence. It’s a big conversation, one that requires some top-level insights.  

Chief Scientist Dr. Hamit Soyel joined Jen Clinehens, consumer behavior expert and MD/Founder of the agency Choice Hacking, to share their thoughts on the topic.

Let’s jump in! 

Q: From a behavioral and neurological perspective, why do you think there is such a significant disconnect between public sentiment, as revealed by this article, and the goals of major AI companies in pursuing super-intelligent AI? What cognitive and emotional factors might be contributing to this?

Jen: In people's minds, when you ask them about super-intelligent AI, almost everyone is going to pull up examples like Skynet, like Terminator, these very vivid, emotional examples they know from pop culture. And that speaks to a cognitive bias called availability bias. Availability bias basically says that when you've got these really vivid or recent memories, that's what you base your behavior or your opinions on. An example might be, you see on the news that somebody has been killed in a shark attack in Australia and you're in America. All of a sudden, you're really hesitant to go in the ocean. It's not any more or less likely that you're gonna get attacked by a shark at any point, but it's very recent in your memory. It's very vivid, it's very emotional, and therefore you base your behavior on it. While it is a little glib to say Skynet and AI are the same thing, I think for a lot of people that's the best example they can think of when you ask them about something like a super-intelligent AI. They think, oh, it must be bad, because we just don't have very many vivid or emotional examples of AI being positive in people's lives. I think a big part of it is related to that.

Then you have things like loss aversion. A lot of the conversation around this general-intelligence AI is that it's going to take everybody's job. Whether you like your job or not, you don't want someone to take it from you. You don't want anyone to take anything from you.

The last one I would say is a principle called zero-risk bias, which basically says we don't like things that are uncertain. We only really want to take action based on a very certain outcome. And when you talk about AI, I think the uncertainty is there not just for the general public, but also for the people who work in the AI field. They're not really sure where things are going to go.

Hamit: I can add confirmation bias; that could also contribute to the divide, because people seek information that confirms their existing beliefs and then want reinforcement from it. Some negative views can appear, which could be due to the limited technical knowledge of some people. There may be some unrealistic fears and skepticism around, and they might feel, okay, my peers are feeling this way, should I bring my beliefs in line with theirs and be on their side? And that triggers more negative views. Maybe some ethical concerns could kick in as well. In the US, for example, there have been problems where people believe that policies act differently on different ethnicities. They may feel, okay, if the AI is going to make a decision, is it going to have such biases as well? That could trigger problems too. So yes, I think cognitive biases and confirmation bias are the main problems.


Q: The development of AI is often depicted as a technological inevitability by some, while others argue it should be a deeply political and democratic decision. How can our understanding of human behavior and neuroscience help us navigate the complex debate about the role of public input in shaping AI's future? And are there any biases at play here?

Hamit: I believe fear comes first, because people feel that AI could be risking their job, their benefits, or their support. And social dynamics, like the peer pressure from my previous example, can influence those issues as well. Balancing tech progress against oversight, with public engagement done in a transparent way, is extremely necessary, because we have to listen to diverse perspectives. If we're going to make progress here, the ethics should be there, everyone should be treated equally, and there should be more involvement by experts in the public discussions. There should be a governance structure regulating all of this. I'm not worried about the current state of AI, but if there should be any fear in me, it would be of an AI created by an AI. I know we are not there yet, but that is why all these dynamics need to link back together if you're talking about public input into AI's future.

Jen: I think Hamit makes some really good points. There is this psychological tendency you see in people called reactance. For a certain percentage of the population, if they feel like something is being forced upon them, like seatbelt laws or helmet laws or COVID vaccines or mask mandates, their initial response is to say no, not because it's not a good thing, but because that's their initial instinct. Whether or not there's further thought about whether it's the right thing to do, there is gonna be a certain percentage of people who just say, I'm not gonna do what you asked me to. I certainly think having a voice and a choice will become increasingly important as we start to see what AI can actually do, because it's still very, very early days. And to Hamit's point, a lot of lay people just don't understand the implications of letting something out of the box.


Q: In the context of AI development, the concept of permissionless invention has been prevalent in the tech industry. How can behavioral science and neuroscience shed light on the ethical and psychological aspects of seeking societal buy-in for transformative technologies like AI? And what insights can we gain about the balance between technological progress and democratic oversight in this specific context?

Jen: I think when you talk about buy-in for anything, it goes back to the reactance point, right? People want to feel like they've been heard, like they have an opportunity to give some input. I would say co-creation is a big part of any kind of buy-in, whether you're talking about pitching an idea to someone who needs to buy it, or about your IKEA furniture and wanting people to feel more bonded with your brand. Look at behavioral science principles like the IKEA effect. IKEA is not named after the IKEA effect; the IKEA effect is named after IKEA, the store. But it's this idea that if you co-create something, you become more bonded to it. You're willing to pay more money for it. You feel like you've invested a bit of yourself in the end product, so you have a lot more affection towards it.

If we want to talk about generalized AI intelligence, we can back up a little and think about some of the examples we've already had of artificial intelligence engines that were built in a biased way, not consciously, but because the folks who put together these models have existing biases, and there are a million different ways that can go. It gets accidentally baked into the artificial intelligence. A good example would be the issue Amazon had a few years ago, where a team of mostly white male engineers built an HR AI tool that went through all of these different resumes. It started to favor people from specific schools, of a specific gender, and of a specific cultural background. It just happened that it wasn't a very diverse group of people that built this artificial intelligence engine, and that was reflected, or rather magnified, in the outputs from that particular engine. So I definitely think that when it comes to developing these tools and the future of where things go, yes, we need people's input.

Hamit: What I could add is this new term, neuroethical research; maybe not neuroscience as such, but we can combine them together. Neuroethical research could explain how our brains actually process ethical dilemmas, and I believe that should be fed back into the next generation of AI. We always worry about producing something cleverer than us, so we have to look at how humans solve these ethical issues and dilemmas, and then those findings need to be fed back into the new AIs that will be with us in the future. And I'll come back to public engagement, because that's something I have to work on: there should be more public engagement around this, and there should be more transparency. As Jen said, with diverse perspectives and co-creation we're going to build our future, so we have to come back and listen to everyone out there and how they feel about it. And coming back to ethics, I believe neuroscience studies could come in as well, by looking at our emotional responses when we make decisions, especially moral decisions, by looking at how we understand fairness, our fairness perception, and by seeing whether we can model trust as well.


Q: We heard the example of the bias with the Amazon engineers. Dragonfly AI is unbiased. Can you explain a little about how it is unbiased?

Hamit: As you know, we make decisions using two different approaches to attention. There is bottom-up attention, where we simply respond to what is in front of us, and there is top-down attention, which changes with our cultural background, our previous knowledge, and everything else, so it varies from person to person. When you look at Dragonfly, it solely uses bottom-up attention.

Bottom-up attention is universal to most human beings. Obviously, if you have problems with your eyes, it will be slightly different, but if you assume that everyone has normal vision, not necessarily 20/20, but normal vision, things will be universal for everyone.

The beautiful thing about Dragonfly AI is that it's a biological model that analyzes how we humans actually see the world. It comes back to what I explained about the next generation: I think we have to understand how the real world works and how we trust each other, because these ethics and biases already exist in normal society. If we can't solve them with good governance, the AI will not be able to solve them either. As humans, we need to solve it first. I'm pretty sure that the AIs that follow us will be more robust and safer, because they're gonna do the calculations, and then human error will disappear. They're gonna remove that uncertainty, make conservative decisions, and ensure our safety.
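(Editor's note, for the technically curious: the sketch below is a minimal, generic illustration of what "bottom-up" means computationally. It is not Dragonfly AI's actual model; the function names, radii, and the simple centre-surround contrast measure are all illustrative assumptions, in the spirit of classic stimulus-driven saliency models. The point is only that nothing task-specific or culture-specific goes in: the map is computed purely from local contrast in the input image.)

```python
import numpy as np

def local_mean(image, radius):
    """Mean of each pixel's (2*radius+1)^2 neighbourhood (edge-padded)."""
    padded = np.pad(image, radius, mode="edge")
    acc = np.zeros_like(image, dtype=float)
    count = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            acc += padded[radius + dy: radius + dy + image.shape[0],
                          radius + dx: radius + dx + image.shape[1]]
            count += 1
    return acc / count

def bottom_up_saliency(image, radii=(2, 4, 8)):
    """Toy bottom-up saliency map: centre-surround intensity contrast.

    Purely stimulus-driven: no prior knowledge, no labels, no task context,
    only how strongly each pixel differs from its local surround at a few
    scales. `image` is a 2-D array of grey-level intensities in [0, 1].
    """
    image = image.astype(float)
    saliency = np.zeros_like(image)
    for r in radii:
        surround = local_mean(image, r)
        saliency += np.abs(image - surround)  # how much a pixel "pops out"
    saliency -= saliency.min()
    if saliency.max() > 0:
        saliency /= saliency.max()            # normalise to [0, 1]
    return saliency

# Example: a bright blob on a dark background attracts the most "attention".
img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0
sal = bottom_up_saliency(img)
print(np.unravel_index(sal.argmax(), sal.shape))  # peak lands on/around the blob
```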


Q: If you were in front of the just over 1,100 Americans who've shown that they are a little bit terrified of super-intelligent AI, what would your reassurance be to them, given your experience in the industry and what you've seen?

Jen: I would say, first of all, it's probably not going to be Skynet. Hopefully, it's not going to be a killer robot that destroys humanity. I do think that having concerns about AI, where it's headed, how powerful it could become, and the hold it might have on everyday life, those fears are rational to a certain point. I would also say that we're in the very early days of artificial intelligence. Being conscious of things like input from the public, and the bias that might be baked into some of these tools and engines, is a good, positive first step. Having these concerns around ethics and diversity in the models, making sure that we're aware of the issues we have as humans and attempting to solve them before we run off and create an AI that's gonna take our human foibles and magnify them, is a very positive first step.

Hamit: Imagine that you're expecting a decision to be made about something you're involved in. Who do you trust? Do you trust humans doing it on their own, where mistakes could be made? Or do you trust an AI that provides support to the human who's going to make the decision, one that can access and calculate every possible outcome, look at previously made decisions, and provide feedback that humans can then use to make the decision? I believe people are gonna pick the second option rather than the first, because we as humans have trust issues with humans as well. You might have some trust issues; then you have to be vocal. You have to let it be known, and you should ask for evidence that those ethics and policies are out there and are being controlled by governance that assures the public things will be all right, so they don't need to worry about anything. I might be worried if an AI creates an AI, that's the alarming point for me, but we are far, far away from that.

 

What a fantastic episode with Hamit Soyel and Jen Clinehens: so many interesting insights and key learning points from both of them!

Don’t forget to subscribe on LinkedIn or follow on Spotify!

 

Next month we are hosting a retail innovation and insights expert with over 25 years of experience. She is also the co-author of two bestselling books and an industry knowledge leader.