Smarter Strategies for Better Iteration

Expert-led podcast discussing trending articles and news in the AI and attention spaces.


 

Webcasts led by experts


Effectively, the introduction of AI equals speed.

But to iterate in ways that are valuable, focused, and meaningful, you need a little more, particularly when it comes to the DTC model.

Product Leader Ryan Smith and Chief Product & Technology Officer James Harvey share their experience with us on Game of Attention.


Q: How has AI transformed the approach to experimentation and iteration within the DTC model and CPG industries? 

James: This is a big part of our journey with Dragonfly. AI is transforming the way we experiment and the way we test. We're quite focused on testing content, creative assets, and creative effectiveness. If we look at the traditional methods that came before, there was a big reliance on panel-based testing and eye tracking, which are all really powerful ways to gather data.

Where those may be limited, for the kinds of customers we work with, is that it can take quite a long time to gather that data, pull it together, and get the insight. It's quite difficult to iterate quickly because of that, and it is quite time-consuming and costly. What AI is starting to do is augment traditional methods to really help accelerate testing and help customers and businesses test more content faster.

It's not always the case that AI is going to be a silver bullet and have all the answers, but I think its ability to augment traditional methods and increase velocity alongside them is a big trend that we see.

During the pandemic, you saw a big shift towards DTC, and we work with quite big CPG clients. I think suddenly the need to produce a lot of content across a lot of different markets put a lot of pressure on teams internally. Bringing new, modern methods to the creative process in a new channel has been something that we've seen gain a lot of traction recently.

Ryan: Many people are focused on the end-of-funnel costs of acquisition and their Facebook ads. But when they're adopting AI methods or building all their content in 3D, they're actually able to use that content in manufacturing and prototyping for design reviews, all the way through to having a universal source of truth for all their media. Then they can render out scenes, lifestyle imagery, banner images, product images, videos, et cetera. There's quite a significant multiplier effect from having new tooling up and down the digital supply chain. It starts with that return on investment: I want to be able to convert my sales as efficiently as possible, especially with e-commerce. But slowly, over time, teams start to see that by doing something once they could be getting a five- or ten-times multiplier up and down the company, particularly in the DTC use case.

James: There's a real compound effect to those short, quick testing and iteration cycles. The more rapidly you can test, the more you can compound the learnings and build on them. If we look at market research, AI is also transforming the way we think about testing populations and demographics. We've got the rise of ideas like synthetic participants and trying to scale representative information about audiences and what's going to fit. It's quite interesting to see the dynamic of what AI is bringing to the table and how that sits alongside methods that are tried and tested over the years and that teams rely on.

Ryan: At Shopify, I had the privilege of having user experience researchers and data scientists wrapped around the product team, which was typically led by a product manager. In doing that work, we used a number of tools like Dovetail to set up interviews, talk to people using Shopify, and understand their issues. Then, in aggregate, we could see patterns across multiple people over time. And like many AI tools, it doesn't forget. Every time you add something, you just keep accumulating knowledge. While I forget what I did last week, or a new colleague comes in, they log into Dovetail and, sure enough, all those insights, sentiments, and little sentences that capture the heart of the customer are there: what really thrilled them, or their really significant pain points.

When we slowly accumulate that data set over time, literally hours and hours of recordings, with people tagging things in the recordings, plus the additional analytics that can come from those tags, we get a significant multiplier for every team to make better decisions.

When we talk about experimentation and iteration, it's often quite an analytical process. I have a hypothesis. I want to see what's going on with my customers. I want to understand how to better run my direct-to-consumer model for efficiency: how do I get better conversion rates for the dollars I'm spending? It's quite an economics mindset. However, I've been finding that with all that complexity, once you get tighter into the user story and you have one person who is representative, you start to see how resonant that emotional experience is across multiple people's user journeys. And that, for many people deep in the data, starts to cultivate more empathy.


Q: How do you utilize consumer feedback and data simultaneously to drive these cycles of product development and marketing strategies? 

Ryan: Ten years ago, I took a very different approach. I had far more of an MBA mindset. I was looking at the tools. I was seeing what was coming out in a dashboard. I was running data-driven experiments and making more data-driven decisions and thinking, damn, am I ever smart? Right? Using Tableau or Microsoft Power BI. It felt like having a superpower, having the ability to crush through so much data so quickly.

Five years ago, I was deeper into AI tooling. We had built a model to understand people and biometrics. We had mathematically come to an understanding of biometrics, human body shape, and fit preferences that previously took a PhD.

I think there's been a shift in the last 12 months where we are now highly confident that most of the time the AI can out-predict us. It's going to hallucinate sometimes, but it keeps improving. That's beginning to be pretty consistently true. Maybe 80% or 90% of the time it's right. Sometimes it's really funny, but it's sure improving fast. And now it's shifting our role to that of the manager of the AI agents.

I find that for my friends and colleagues, their attention to bias, confirmation bias, recency bias, other forms of bias that shape decisions, the why, is getting more and more sophisticated. We as humans, seeing these AI agents out-hustle us, have to figure out an alternate way to compete. How do we compete and utilize consumer feedback to drive these cycles? It's the ability to understand bias and what questions we're asking to improve our judgment.

The judgment matched to the prediction ability of AI is where we really get the superpower. 

James: There's something in the idea that almost brings the human role back to the creativity side of deciding what problems to solve, what things to go after. You've got this incredibly powerful AI capability that can help you get there and help you get there faster. I think it almost becomes a problem of choice. What problems are you going to chase? What are you going to focus on and why? Ultimately, that needs to be informed by that deep understanding of the customer.  

That deep understanding of the customer informs what problems you think are important. But then you've got this very powerful machine that can really do a lot for you, both in terms of synthesizing complex information and helping you work through ideas quickly. We do a lot of prototyping internally, where large language models are really good at helping you distill your own thinking and organize your own thoughts.


Q: Can you discuss any specific examples where iterative testing led to unexpected results or breakthroughs?

Ryan: Going back to FTSY, we were quite early to running a full AI stack. We were trying to figure out how people would interact with the AI, on this thread we've been pulling on. They had a really hard time; there wasn't really a mental model for interacting with an AI.

After they saw the initial results, and this happens as people use ChatGPT for the first time, they go, wow. Now I have a mental model, and I can repeat it and dig into it. Getting over that initial hump of using the AI tool, as a user and as a business, is quite significant.

We've seen this in other use cases where your fan base is extraordinary, so you really double down on referral programs, like free trials from referrals. You double down on that energy rather than on paid acquisition. All the organic acquisition has extra energy.

I see many teams taking advantage of that with AI applications now. We enabled custom fits, which were not accessible to many people. If you were used to buying $1,000 or $5,000 shoes, you would get your shoes made. For people who had never bought a custom pair of shoes before, the idea that a shoe would be made for them, at a reasonable price like $200 to $500, was quite foreign.

Most of the people who had done something like that were diabetic or had a prescription from their orthopedist, and they waited eight to twelve weeks to get their shoes. We changed that whole user journey.

At Shopify, we ran a number of media experiments. We used imagery as a baseline: if you had interacted with images on a website, that was zero. If you interacted with video on a website, we saw a 60% increase in the conversion rate. No surprise there: when you interact with a brand that's got a great landing-page video, and maybe a product flyover, it really enhances the whole customer journey and the conversion rate is higher. When we started to see the 3D and augmented reality data come through, it showed an average of a 94% conversion rate lift.
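To make the lift figures concrete, here is a minimal sketch of how a relative conversion-rate lift over a baseline is computed. The visitor and conversion counts below are hypothetical illustrations, not Shopify's actual data; only the resulting lift percentages match the ones quoted above.

```python
# Hypothetical session counts per media type (illustrative only).
def conversion_rate(conversions: int, visitors: int) -> float:
    """Fraction of visitors who converted."""
    return conversions / visitors

def lift(rate: float, baseline: float) -> float:
    """Relative lift of a conversion rate over a baseline, as a percentage."""
    return (rate - baseline) / baseline * 100

baseline = conversion_rate(200, 10_000)  # image-only sessions: 2.00%
video    = conversion_rate(320, 10_000)  # sessions with video: 3.20%
ar_3d    = conversion_rate(388, 10_000)  # sessions with 3D/AR: 3.88%

print(f"video lift: {lift(video, baseline):.0f}%")   # 60%
print(f"3D/AR lift: {lift(ar_3d, baseline):.0f}%")   # 94%
```

A 94% lift means the 3D/AR conversion rate is nearly double the image-only baseline, which is why the multiplier effect Ryan describes compounds so quickly.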

James: The AI doesn't care what the actual answer is; just following what really matters, it might sometimes surprise you, or it might not be what you expect. We see similar stories when we look at large-scale data sets around creative, where we're looking to help customers understand what characteristics in content have correlated with outcomes.

It's always a really enjoyable process each time we look at a new data set, because you can find trends that are fairly consistent across the board for certain use cases, which seem to be underlying truths. But equally, every brand has a unique set of guidelines, a unique story, and a unique way of positioning, and sometimes a different set of characteristics actually drives the most success for them.


 

An incredible opportunity to learn more about AI, product development, and the importance of consumer feedback!

Make sure to watch the whole episode above or listen on Spotify.

You can subscribe here; the next episode lands in two weeks!