Frequently Asked Questions
Creative Effectiveness with Dragonfly AI
Table of Contents
- About Dragonfly AI
- Accuracy & Validation
- Pricing & Commercial
- Creative Testing & Methods
- Heat Maps & Visual Analytics
- Consumer Insights: Attention, Memory & Emotion
- Use Cases & Applications
- Integration & Technical
- Getting Started
About Dragonfly AI
Dragonfly AI is a creative testing platform that predicts how real people will notice, remember, and feel about your creative content before you launch.
Unlike traditional attention-only tools, Dragonfly AI analyses three critical drivers of creative success: Attention (will people notice it?), Memory (will they remember it?), and Emotion (how will they feel about it?). This comprehensive approach helps businesses across ecommerce, marketing, shopper marketing, packaging, websites, apps, and retail media make data-driven creative decisions that improve commercial outcomes. Whether you're a global CPG brand, agency, or in-house team, Dragonfly AI provides fast, actionable insights that take creative from good to great—delivering results like +70% ecommerce conversions, +263% email ROI, and +60% in-store promotional sales.
The platform analyses static images, video frames, layouts, packaging, digital shelf displays, in-store environments, retail media, landing pages, out-of-home advertising, print, and social content. By understanding how creative performs before it goes live, teams reduce wasted spend, accelerate decision-making, and consistently deliver more effective campaigns.
Dragonfly AI is built for global CPG brands, agencies, and in-house marketing teams who need to maximise creative effectiveness at speed and scale.
We support professionals across multiple disciplines: ecommerce managers optimising product pages and digital shelf presence, marketing teams improving campaign performance, shopper marketing teams enhancing in-store and retail media execution, packaging designers ensuring shelf standout, UX and CRO specialists refining websites and apps, and agencies managing creative quality across omnichannel campaigns for multiple clients.
If you're responsible for creative that needs to capture attention, be remembered, and drive the right emotional response, whether that's on a digital shelf, in a social feed, on a billboard, or in a retail environment, Dragonfly AI provides the predictive insights you need to make confident decisions faster.
Dragonfly AI's model is inspired by biological processes in the human brain, specifically how the visual system processes and prioritises information.
The platform's attention prediction is based on a biological model of how the brain works, not trained on specific examples that could introduce bias. This universal approach means predictions apply across any audience, market, or creative format. Memory and Emotion predictions use AI models trained on universal drivers and validated through extensive global primary research.
Dragonfly AI can analyse virtually any visual format: static images, individual video frames, page layouts, packaging designs (including front-of-pack and variants), digital shelf environments, in-store displays and planograms, retail media placements, landing pages, website designs, out-of-home advertising, print materials, and social media content. The analysis works across contexts—whether your creative appears in a crowded digital feed, on a retail shelf, or as a standalone advertisement.
Dragonfly AI provides comprehensive creative effectiveness analysis through three validated drivers: Attention, Memory, and Emotion, with Persuasion and Comprehension coming soon to the platform.
Studio 3.0 delivers breakthrough speed and intelligence: analyses complete in approximately 45 seconds, Copilot provides AI-powered recommendations for optimisation, unlimited parallel testing lets you evaluate multiple variants simultaneously, and clear, actionable guidance helps you move from insights to improvements quickly.
Core Capabilities by Driver:
- Attention analysis includes Visibility (will people notice it?), Clarity (is the visual hierarchy effective?), and Digestibility (can viewers process the information efficiently?)
- Memory analysis evaluates Image memorability, Brand recall strength, and Copy retention across your creative
- Emotion analysis measures both Image and Copy emotional intensity and sentiment scale (positive to negative)
Dragonfly Mobile extends these capabilities into the real world, measuring attention, emotion, and memory in actual in-store and on-trade environments. This bridges the gap between predicted and real-world performance, helping retail and shopper marketing teams validate and optimise shelf execution.
Dragonfly AI uses different approaches for different metrics, each optimised for accuracy and reliability.
For Attention prediction, we don't use traditional machine learning. Instead, the AI is a biological model of how the brain works—specifically how the visual cortex processes and prioritises information. The major advantage of this approach is that the algorithm isn't biased to specific examples used to train a model. It applies universally across audiences and contexts because it's based on fundamental biological processes.
For Memory and Emotion, we use AI models trained on universal drivers of what makes content memorable and emotionally impactful. These models have been validated through extensive global primary research involving over 120,000 responses and 600,000 diagnostics. This ensures predictions are grounded in real human responses, not theoretical constructs.
The combination of biological modelling and validated AI training means Dragonfly AI provides accurate, unbiased predictions that work across diverse audiences, markets, and creative formats.
The algorithm models brain processes that are universal to everyone, which means predictions apply to any audience you may be targeting.
Rather than relying on demographic segmentation, Dragonfly AI's biological attention model is based on how the human visual system works—processes that function consistently across age groups, cultures, and geographies. This universality is a strength: the fundamental mechanisms of visual attention, what makes content memorable, and core emotional cues (like facial expressions) operate similarly across populations.
That said, we acknowledge that personal relevance matters. An image that resonates emotionally with one audience may not land the same way with another due to cultural context or individual preferences. The platform captures universal visual and emotional cues while recognising that message relevance and cultural nuances play a role in final effectiveness.
Language Support: Memory and Emotion analysis supports all alphabet-based languages. Current limitations include logographic scripts (such as Chinese or Japanese characters), very low-volume languages, and some regional dialects.
Accuracy & Validation
Dragonfly AI's predictions are independently validated and consistently outperform traditional research methods in both accuracy and efficiency.
Attention prediction achieves 89% accuracy versus professional-grade eye-tracking, equivalent to conducting an eye-tracking study with 39 participants under controlled lab conditions. Memory prediction achieves 80% accuracy versus human testing, equivalent to surveying 34 people. Emotion prediction achieves 83% accuracy versus human testing, equivalent to gathering responses from 36 participants.
For example, if you were to run a traditional eye-tracking study with 39 participants in a lab setting, Dragonfly AI can predict those results to 89% accuracy—delivered in approximately 45 seconds instead of weeks, at a fraction of the cost.
These accuracy rates are validated through rigorous independent testing, including verification against MIT's saliency benchmarks, and ongoing primary research involving hundreds of thousands of real human responses. Our validation is transparent, peer-reviewed, and continuously updated to ensure reliability.
Over a decade of research in collaboration with Queen Mary University of London has shaped Dragonfly AI's patented biological algorithm.
Our solution is independently validated, patent-backed in the EU, US, and UK, and supported by multiple peer-reviewed papers in scientific journals. The attention prediction model's accuracy has been verified against MIT's independent saliency benchmarks (MIT300 and CAT2000), which are widely recognised standards in computer vision and visual attention research.
This isn't proprietary black-box technology—our approach is grounded in neuroscience, computer vision research, and extensive empirical validation. The ongoing research partnership with Queen Mary University of London means we continuously refine and improve our models based on the latest scientific understanding of how humans process visual information.
For more information on the MIT300 and CAT2000 benchmarks, visit the MIT Saliency Benchmark website. Learn more about our ongoing research partnership.
Predictive metrics complement traditional methods by delivering speed, scale, and cost-efficiency without sacrificing accuracy.
Traditional eye-tracking and consumer testing provide valuable data, but they're expensive (often £10,000–50,000+ per study), time-consuming (weeks to months), and limited in scale (testing multiple variants becomes prohibitively costly). Predictive tools like Dragonfly AI deliver comparable accuracy in seconds, at a fraction of the cost, enabling teams to test dozens or hundreds of creative variations before committing budget to production and media spend.
When to use predictive: Early-stage creative development, rapid iteration, testing multiple variants, pre-flight optimisation, maintaining quality at scale across markets and channels.
When to use traditional: Final validation of high-stakes campaigns, gathering detailed qualitative feedback, measuring downstream behaviours (purchase, recall over time), testing highly novel or culturally sensitive concepts where human nuance is critical.
The best approach often combines both: use predictive insights to identify strong candidates quickly, then validate finalists with traditional research if the business case warrants it. Transparency about limitations is important—predictive tools work best when creative quality is high and the context is well-defined.
Learn how to test creative content in 30 seconds instead of 30 days
Dragonfly AI's attention prediction matches or exceeds traditional eye-tracking accuracy while delivering results in seconds instead of weeks.
Professional eye-tracking studies typically involve recruiting participants, setting up lab equipment or mobile eye-tracking devices, conducting sessions, and analysing gaze data—a process that takes weeks and costs thousands to tens of thousands of pounds. A typical study might involve 30–50 participants to achieve statistical reliability.
Dragonfly AI achieves 89% accuracy versus professional-grade eye-tracking, equivalent to a 39-participant study, delivered in approximately 45 seconds. This means you get the predictive power of a substantial eye-tracking study almost instantly, enabling rapid iteration and testing at scale that would be impossible with traditional methods.
The key advantage isn't just speed and cost—it's the ability to test every creative variant, every market adaptation, and every format iteration before launch. Where traditional eye-tracking might inform 1-2 major creative decisions per quarter, predictive analysis can inform hundreds of optimisations across your entire creative output.
Traditional eye-tracking studies typically cost between £10,000–£50,000+ per project, while AI prediction tools operate on subscription licensing that enables unlimited testing.
A single eye-tracking study involves participant recruitment (£50–150 per person), lab time or mobile eye-tracking equipment, researcher fees, and analysis time. For a modest 30-participant study testing 3-5 creative variants, you're looking at £15,000–30,000 and 3-6 weeks turnaround. Larger studies or those requiring specific demographic targeting cost significantly more.
AI prediction platforms like Dragonfly AI use annual licensing priced by team size and use case, enabling teams to run unlimited tests across hundreds or thousands of creative assets for a fraction of a single eye-tracking study's cost. The economic model shifts from "can we afford to test this?" to "let's test everything before we launch."
Practical trade-offs to consider:
- Eye-tracking: High per-test cost, limited scale, weeks turnaround, rich qualitative data
- AI prediction: Low per-test cost, unlimited scale, seconds turnaround, quantitative metrics
For most teams, the optimal strategy is using AI prediction for the majority of creative decisions while reserving traditional research for high-stakes final validation or gathering deeper qualitative insights.
Predictive tools are powerful but not infallible. Understanding their limitations helps teams use them appropriately and avoid over-reliance on any single metric.
Context limitations: Predictive tools analyse creative in isolation or simulated contexts. They don't account for adjacent content in a feed, the user's mindset or task, or real-world environmental factors (like viewing distance, screen quality, or ambient lighting). A design that scores well in isolation may still underperform if the media environment is cluttered or the audience isn't receptive.
Creative quality floor: Prediction works best on competent creative. If fundamental design principles are missing (illegible text, confusing layouts, poor image quality), predictions may be accurate but the creative is simply not good enough to succeed regardless of predicted scores.
Novel or culturally specific content: While the biological model is universal, highly novel creative formats or culturally specific references may not align perfectly with training data. The model is most reliable for established content types and visual patterns.
Downstream behaviour: Predictive metrics tell you whether people will notice, remember, and emotionally respond to creative—but not whether they'll click, buy, or advocate. These are correlated but not deterministic. Other factors (offer, product-market fit, price, competitive context) drive final outcomes.
Appropriate use: Use predictive insights for pre-flight optimisation, variant selection, and quality assurance. Validate high-stakes decisions with market testing. Combine attention, memory, and emotion metrics with business context and creative judgment.
Pricing & Commercial
Dragonfly AI operates on annual licensing with pricing tailored to team size and use case.
We don't publish standard pricing because every customer's needs are different—a single-brand ecommerce team has different requirements than a global agency managing dozens of clients, or an enterprise CPG company coordinating creative across multiple markets and channels.
Licensing is structured to provide unlimited testing within your subscription, shifting the economic model from "can we afford to test this variant?" to "let's optimise everything before launch." Pricing factors include number of users, volume and types of creative assets, specific capabilities needed (e.g., packaging analysis, retail media, enterprise integrations), and support level.
To get a tailored quote, book a demo where we'll assess your needs and provide transparent pricing based on your specific situation. Most customers find the investment pays for itself many times over through reduced wasted spend, faster time-to-market, and improved creative performance.
We offer comprehensive product demos where you can see Dragonfly AI analyse your own creative in real-time.
Rather than a self-service trial, we've found that guided demos deliver more value: you bring your actual creative challenges, we analyse real examples from your business, and you see immediate insights relevant to your work. This approach helps teams understand not just how the platform works, but how it fits into their specific workflows and decision-making processes.
What a demo includes:
- Live analysis of your creative assets (bring 2-4 examples)
- Walkthrough of attention, memory, and emotion insights
- Discussion of how Dragonfly AI fits your use case (ecommerce, packaging, advertising, retail media, etc.)
- Overview of Studio 3.0 features including Copilot recommendations
- Transparent pricing conversation based on your needs
- Q&A with a product specialist
What to bring: Come prepared with specific creative challenges you're facing, examples of assets you'd like to optimise, and questions about how predictive insights could improve your workflow. The more context you share, the more valuable the demo.
Dragonfly AI offers flexible licensing for agencies managing multiple clients and in-house teams coordinating across departments.
For agencies: Multi-client workspace structure allows you to maintain separate projects for each client with appropriate governance and permissions. You control who sees what, maintain client confidentiality, and can easily onboard or offboard clients as engagements begin and end. Single sign-on (SSO) integration streamlines access management across your team.
Roles and permissions: Assign different access levels based on responsibility—creative leads get full editing and testing capabilities, account managers get view-only access to share reports with clients, and administrators manage users and workspace settings.
Licensing tiers accommodate everything from boutique agencies with a handful of clients to global networks managing hundreds of brands across markets. Pricing scales with the size of your team and the volume of creative you're optimising, with governance features ensuring quality and efficiency at scale.
The platform becomes your quality assurance and optimisation layer across all client work—briefing creative with clear targets, testing concepts before presenting to clients, and optimising campaigns before launch across omnichannel touchpoints.
Dragonfly AI customers see the platform pay for itself through three primary value drivers: improved creative performance, reduced wasted spend, and faster time-to-market.
Direct performance uplift: Validated case studies show significant improvements in real-world KPIs:
- +263% email ROI (telecom client optimising email creative)
- +70% ecommerce conversions (brand optimising product page design)
- -87% design testing cost (enterprise client replacing traditional research)
- +60% in-store promotional sales (CPG brand optimising shelf execution)
Reduced wasted spend: The average cost of a failed campaign includes creative production, media spend, and opportunity cost. If predictive insights prevent just one underperforming campaign per year—or improve performance enough to justify the media investment—the ROI is clear. Most customers test hundreds of assets annually, compounding the value.
Speed and efficiency: Replacing weeks-long research processes with 45-second analyses means faster iteration, more variants tested, and quicker market response. Teams ship better creative, faster, with confidence backed by validated predictions.
The question isn't whether you can afford Dragonfly AI—it's whether you can afford not to optimise creative before committing budget to production and media spend.
Creative Testing & Methods
Creative testing is the process of evaluating advertising and marketing assets before launch to predict and improve their effectiveness.
Traditional creative testing involves showing concepts to target audiences and gathering feedback through surveys, focus groups, or behavioural observation (like eye-tracking). The goal is answering critical questions: Will people notice this? Will they remember it? How will they feel? Does it persuade? Do they understand the message?
Pre-testing matters because creative quality is the largest driver of campaign effectiveness—research shows creative accounts for 50-70% of campaign ROI variance, far more than media targeting or budget allocation. Yet many teams still launch creative based on subjective judgment or limited feedback, only discovering what works (or doesn't) after spending substantial media budgets.
Common approaches to creative testing include eye-tracking studies to measure attention, recall surveys to measure memory, emotional response testing to gauge sentiment, A/B testing to compare variants in market, and increasingly, predictive AI tools that simulate these human responses at speed and scale.
The most sophisticated teams use a combination: predictive tools for rapid iteration and pre-flight optimisation, followed by selective market testing to validate high-stakes decisions.
Pre-testing ad creatives using predictive insights enables rapid iteration and confident decision-making before committing to production and media spend.
Step-by-step process:
- Prepare your creative variants: Develop 2-4 alternative concepts or variations of your creative (different headlines, visual hierarchies, colour palettes, or layouts). Ensure they're in near-final format—the more representative of final creative, the more actionable the insights.
- Upload and analyse: Use Dragonfly AI to test each variant simultaneously (approximately 45 seconds per analysis). The platform generates attention heatmaps showing what draws the eye, memory scores indicating recall strength for brand and copy elements, and emotion analysis revealing sentiment and intensity.
- Compare and interpret: Review the results side-by-side. Look for clear winners on key metrics: Does one variant achieve better clarity and visibility for your call-to-action? Does another deliver stronger brand memory? Which emotional response best matches your campaign goals?
- Refine and retest: Use Copilot recommendations to identify specific improvements. Make targeted edits—adjust visual hierarchy, simplify copy, strengthen brand cues—then retest to validate improvements.
- Validate and launch: If the stakes warrant it, validate your top-performing creative with a small market test before full launch. If predictive insights are strong and you've tested similar content successfully before, proceed with confidence.
This approach compresses weeks of traditional research into days or hours, enabling teams to test more variants, iterate faster, and launch creative proven to perform.
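The compare-and-interpret step above can be sketched as a simple side-by-side comparison. Note this is an illustrative sketch only: the metric names mirror the drivers discussed in this FAQ, but the score fields, values, and `best_variant` helper are hypothetical placeholders, not Dragonfly AI's actual API or output.

```python
# Illustrative sketch: comparing creative variants on predicted scores.
# The metric names echo the drivers described above (attention, memory,
# emotion); the numbers are made up, not real platform output.

variants = {
    "A": {"attention": 72, "brand_memory": 64, "emotion": 58},
    "B": {"attention": 68, "brand_memory": 81, "emotion": 61},
    "C": {"attention": 75, "brand_memory": 70, "emotion": 49},
}

def best_variant(variants, metric):
    """Return the variant name with the highest score for one metric."""
    return max(variants, key=lambda name: variants[name][metric])

for metric in ["attention", "brand_memory", "emotion"]:
    print(f"Best on {metric}: {best_variant(variants, metric)}")
```

A comparison like this makes the trade-off explicit: one variant may win on attention while another wins on brand memory, which is exactly the judgment call the "compare and interpret" step asks teams to make.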
Learn to test creative content in 30 seconds instead of 30 days
Maintaining creative consistency and effectiveness across multiple channels requires systematic testing and quality assurance at scale.
Consistency across placements: Test how the same core creative performs in different contexts—social feed vs billboard vs retail display. Attention patterns shift dramatically based on viewing distance, screen size, and competitive clutter. A design that works for Instagram may fail on a digital shelf where brand standout matters more than aesthetic polish.
Versioning at scale: As campaigns adapt across markets, languages, and channels, small changes compound. A colour shift that improves attention in one market might reduce brand recognition in another. Test every significant variation—not just major creative changes but also headline translations, aspect ratio crops, and platform-specific adaptations.
Establish quality gates: Define minimum acceptable scores for attention clarity, brand memory, and emotional appropriateness before creative moves to production. This creates objective standards that replace subjective debate, enabling faster approval workflows.
Test early and often: Don't wait until creative is "final" to test. Evaluate concepts at the rough stage, iterate based on insights, test again at near-final, and validate one more time if substantial changes were made. Each round of testing costs seconds, not weeks.
Document learnings: Build an internal library of what works. If bold colours consistently improve shelf standout but reduce comprehension, that's a strategic trade-off your team can make consciously rather than discovering through expensive market failure.
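The quality-gate idea described above can be expressed as a simple pass/fail check before creative moves to production. The threshold names and floor values below are illustrative assumptions for the sketch, not platform defaults.

```python
# Illustrative quality gate: minimum scores creative must hit before
# moving to production. Metric names and thresholds are hypothetical.

GATES = {"attention_clarity": 60, "brand_memory": 55, "emotion_sentiment": 0}

def passes_gates(scores, gates=GATES):
    """Return (passed, failures) for a dict of predicted scores."""
    failures = [m for m, floor in gates.items()
                if scores.get(m, float("-inf")) < floor]
    return (not failures, failures)

ok, failed = passes_gates(
    {"attention_clarity": 71, "brand_memory": 48, "emotion_sentiment": 0.3}
)
print(ok, failed)  # brand_memory misses its floor, so the gate fails
```

Encoding gates this way turns subjective approval debates into an objective checklist, which is the point of the quality-gate practice described above.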
These terms are often used interchangeably but describe different testing approaches with distinct purposes, timelines, and methods.
Creative testing is pre-launch evaluation of advertising concepts and assets to predict effectiveness before committing budget. It happens during the development phase, uses predictive tools or small research samples, and aims to identify which creative directions will perform best. Think of it as quality assurance before production.
Ad testing is broader—it can mean pre-launch creative evaluation or post-launch market testing of full campaigns. It typically evaluates finished ads in simulated or real environments and measures both creative quality and contextual fit (does this ad work in this channel, for this audience, right now?).
A/B testing is live market testing where two or more variants run simultaneously, and performance metrics (clicks, conversions, sales) determine the winner. It happens post-launch, requires real traffic or media spend, and measures actual behaviour rather than predicted response. A/B tests answer "which performs better in market?" while creative testing answers "which should we put into market?"
Complementary roles: The most sophisticated approach uses creative testing to identify strong candidates before production, ad testing to validate finished creative in relevant contexts, and A/B testing to optimise in-market performance. Each stage serves a purpose—eliminate weak options early (creative testing), validate strong options before full launch (ad testing), and fine-tune live campaigns (A/B testing).
The most important creative KPIs depend on campaign objectives and where in the funnel you're measuring effectiveness.
Attention metrics (visibility, clarity) drive top-of-funnel performance—will people notice the creative in context? This correlates with impressions that actually register, view-through rates, and initial engagement. If attention is weak, nothing else matters.
Memory metrics (brand recall, message retention) predict middle-funnel impact—will people remember your brand and key message after exposure? This correlates with aided and unaided recall, brand lift, and the effectiveness of reach-based campaigns where you need one exposure to stick.
Emotion metrics (sentiment, intensity) influence both consideration and conversion—how people feel about your creative affects whether they trust, desire, or dismiss your offering. Positive emotion at appropriate intensity supports conversion rates, while misaligned emotion (e.g., anxiety in a celebratory context) can suppress performance despite strong attention.
Downstream KPIs like CTR (click-through rate), CVR (conversion rate), and ROAS (return on ad spend) are influenced by attention, memory, and emotion but also by offer, price, product-market fit, and competitive context. Strong creative metrics increase the probability of strong performance metrics, but they're not deterministic.
Balancing drivers: The most effective creative achieves strong performance across attention, memory, and emotion—not just one. Attention without memory wastes reach. Memory without emotion may achieve recall but not consideration. Emotion without attention never registers.
Learn about attention-optimised campaigns for consumer goods
Dragonfly AI measures creative effectiveness through a comprehensive score based on three validated drivers, each with specific sub-metrics.
Overall Creative Effectiveness Score combines Attention, Memory, and Emotion metrics weighted by their impact on campaign success. This provides a single headline number for comparing variants while retaining detailed diagnostics.
Attention metrics include:
- Visibility: The likelihood that people will notice the creative in a typical viewing context
- Clarity: How effectively the visual hierarchy guides attention to key elements (brand, offer, CTA)
- Digestibility: How efficiently viewers can process the information presented
Memory metrics include:
- Image memorability: How well visual elements will be retained
- Brand memory: The strength of brand cue recognition and recall
- Copy memory: How effectively headline and body copy will be remembered
Emotion metrics include:
- Image emotion: Emotional response triggered by visual elements, measured on intensity and sentiment scales (positive/negative)
- Copy emotion: Emotional response triggered by written content, also measured on intensity and sentiment
- Overall sentiment: Net emotional response ranging from negative to neutral to positive
Each driver provides total-level scores for the creative asset as well as breakdowns across text, branding, and visual elements. This granularity enables specific optimisations: if brand memory is weak but image memory is strong, strengthen brand assets. If copy emotion is misaligned with campaign goals, refine messaging.
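As a toy illustration of how a single headline number could combine the three drivers, here is a weighted-average sketch. The weights are placeholders chosen for the example; Dragonfly AI's actual weighting is not documented here.

```python
# Illustrative composite score: weighted average of the three drivers.
# The weights below are hypothetical, not the platform's real weighting.

WEIGHTS = {"attention": 0.4, "memory": 0.35, "emotion": 0.25}

def effectiveness_score(scores, weights=WEIGHTS):
    """Weighted average of driver scores (each assumed on a 0-100 scale)."""
    total = sum(weights.values())
    return sum(scores[d] * w for d, w in weights.items()) / total

print(round(effectiveness_score({"attention": 80, "memory": 60, "emotion": 70}), 1))
```

Whatever the real weighting, the design principle is the same: a single comparable number for ranking variants, with the per-driver diagnostics retained underneath for targeted fixes.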
Heat Maps & Visual Analytics
Heat maps visualise where people look, what captures attention, and how visual hierarchy directs focus across your creative or digital properties.
For predictive heat maps:
- Select your creative: Upload the asset you want to analyse—this could be a product page, social ad, packaging design, email template, or out-of-home creative. Ensure the format matches how it will actually appear (aspect ratio, resolution).
- Run the analysis: Predictive tools like Dragonfly AI generate attention heat maps in seconds, showing high-attention areas (typically rendered in warm colours like red and yellow) and low-attention areas (cool colours like blue or green).
- Interpret responsibly: Heat maps show probability of attention, not certainty. High-heat areas are more likely to be noticed first and longest. Use this to validate that your most important elements (brand, offer, CTA) are actually capturing attention—or to diagnose why performance may be weak.
- Make targeted improvements: If your call-to-action sits in a low-attention zone, increase its visual salience (size, contrast, colour, whitespace). If brand elements aren't noticed, strengthen them or move them to higher-attention locations.
- Compare variants: The most valuable use of heat maps is comparing alternatives. Does design A or design B deliver better clarity and focus for your key message? Heat maps make this visually obvious.
Important caveat: Heat maps show attention distribution but not whether the attention is positive or productive. Combine heat maps with memory and emotion analysis to ensure attention is focused on the right elements for the right reasons.
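As a toy illustration of the idea behind an attention heat map (emphatically not Dragonfly AI's biological model), the sketch below scores each pixel of a grayscale image by how much its intensity differs from its local neighbourhood average, so high-contrast regions "light up" the way warm areas do in a heat map.

```python
# Toy saliency map: local centre-surround contrast on a grayscale image.
# An illustrative simplification only, not the platform's algorithm.
import numpy as np

def toy_saliency(img, k=5):
    """Absolute difference between each pixel and its k x k neighbourhood mean."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    h, w = img.shape
    # Neighbourhood mean via a sliding-window sum (simple, not optimised).
    means = np.zeros((h, w))
    for dy in range(k):
        for dx in range(k):
            means += padded[dy:dy + h, dx:dx + w]
    means /= k * k
    return np.abs(img - means)

img = np.zeros((20, 20))
img[8:12, 8:12] = 1.0       # a bright patch on a dark background
sal = toy_saliency(img)
print(int(sal.argmax()))    # peak saliency sits on a corner of the patch
```

Even this crude model reproduces the intuition in the steps above: contrast against surroundings drives salience, which is why increasing an element's size, contrast, or whitespace moves it into a higher-attention zone.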
Read the ultimate heatmap guide for smarter design decisions
Heat maps are visual representations of attention patterns used to diagnose and improve the effectiveness of marketing creative, websites, and user interfaces.
Common applications:
Marketing creative: Validate that key brand and message elements capture attention in crowded contexts. Test whether your product stands out on a digital shelf, whether social ads direct focus to the offer, or whether out-of-home advertising is legible at viewing distance.
Website and UX optimisation: Identify whether visitors notice calls-to-action, whether critical information above the fold gets attention, whether navigation elements are discoverable, and whether page layouts support task completion.
Ecommerce: Ensure product images dominate attention on product detail pages, assess whether key product benefits are noticed, test whether trust signals (reviews, guarantees) register, and optimise checkout flows to reduce friction.
Retail and packaging: Predict shelf standout before production, test whether front-of-pack messages are noticed at typical viewing distances, and evaluate how packaging variants perform in cluttered retail environments.
Pitfalls to avoid: Heat maps show what gets attention, not why or whether that attention is valuable. A bright colour might draw the eye but distract from the message. Always interpret heat maps in context with your objectives and combine with memory and emotion metrics for comprehensive insights.
Predictive and traditional heat maps serve different purposes and work best at different stages of the creative or design process.
Predictive heat maps (like those from Dragonfly AI) use biological models or AI to forecast where people will look before you launch. They're generated in seconds, cost-effectively test unlimited variants, work on any creative format, and enable pre-flight optimisation. Use predictive heat maps during creative development to identify and fix attention problems before committing to production.
Traditional heat maps (from tools like Hotjar, Microsoft Clarity, or eye-tracking studies) capture actual user behaviour post-launch. They show where real visitors actually looked, clicked, or scrolled on your live website or tested creative. They're slower (require traffic accumulation), more expensive (especially eye-tracking), but provide ground truth about real user behaviour in real contexts.
When predictive adds ROI:
- Testing multiple creative variants before selecting one for production
- Optimising assets where traditional testing would be too slow or expensive
- Maintaining quality at scale across markets, channels, and campaigns
- Pre-validating designs before A/B testing in market
When traditional is essential:
- Diagnosing performance issues on live properties (why is this page underperforming?)
- Validating assumptions about user behaviour in complex interactions
- Gathering rich qualitative data about user intent and frustration points
- Testing highly novel formats where predictive models have less training data
The most sophisticated teams use both: predictive insights to optimise before launch, traditional data to validate and refine post-launch.
Predictive heat maps identify and fix attention problems on product pages, category pages, and checkout flows before performance suffers.
Product detail pages (PDP): Ensure hero product images dominate attention above the fold. Validate that key benefits, specifications, and trust signals (reviews, guarantees, free shipping) are noticed. Test whether calls-to-action achieve sufficient visual salience without overwhelming the design. One case study achieved a 70% conversion uplift by optimising PDP layouts to focus attention on product USPs and simplify decision-making.
Category and listing pages (PLP): Test whether your products stand out against competitor listings on digital shelves. Amazon, Walmart, and other marketplaces are increasingly crowded—predictive analysis shows whether your thumbnail images, titles, and pricing capture attention or get lost in clutter.
Above-the-fold optimisation: The most valuable real estate on any ecommerce page is what visitors see without scrolling. Heat maps reveal whether you're using this space effectively—is attention focused on conversion-driving elements or scattered across lower-priority information?
Checkout clarity: Predictive testing can identify friction points in checkout flows—are form fields clear, are trust badges noticed, are error messages visible? Reducing cognitive load and directing attention to next steps improves completion rates.
Visual hierarchy: Heat maps show whether your intended visual hierarchy (product → benefits → CTA) matches where attention actually flows. Misalignment suggests design changes to guide visitors through your intended path to purchase.
Heat maps help retail teams predict and optimise shelf standout, brand visibility, and promotional effectiveness before implementing planograms or printing point-of-sale materials.
Planogram testing: Before resetting shelves, test how different product arrangements perform. Predictive heat maps show which placements achieve better visibility for your brand versus competitors, whether shelf talkers or promotional flags enhance or clutter attention, and how viewing angle and distance affect legibility.
Shelf standout: Packaging that works in isolation may disappear on-shelf. Heat maps reveal whether your brand achieves differentiation in crowded categories—does your colour palette, logo size, or pack shape capture attention when surrounded by 20 competitors? Testing variants before production prevents costly errors.
Promotional materials: In-store signage, shelf strips, and end-cap displays compete for shopper attention. Predictive analysis shows whether promotional messages are noticed from typical sighting distances (often 3-5 metres in larger stores) and whether they direct attention to featured products or get ignored.
Packaging optimisation: Front-of-pack hierarchy matters enormously. Heat maps validate that brand name, product variant, and key benefits are noticed in the 1-2 seconds shoppers typically spend evaluating options. If secondary information (ingredients, certifications) dominates attention over brand and USP, the hierarchy needs adjustment.
Case impact: A CPG client achieved +60% in-store promotional sales by using predictive insights to redesign shelf execution—strengthening brand visibility, simplifying messaging, and ensuring promotional flags enhanced rather than competed with product packaging.
Consumer Insights: Attention, Memory & Emotion
Consumer insights are deep understandings of how target audiences think, feel, and behave that inform more effective marketing, product development, and business strategy.
Traditional sources of consumer insights include surveys and questionnaires capturing attitudes and preferences, focus groups and interviews gathering qualitative feedback, purchase behaviour and transaction data revealing what people actually buy, social listening and sentiment analysis showing what people say unprompted, and observational research (like ethnography or eye-tracking) showing how people interact with products and environments.
Predictive signals are an increasingly important source of insight. By analysing how audiences respond to visual content—where they focus attention, what they remember, how they feel—teams gain fast, scalable understanding of creative effectiveness patterns. These signals complement survey data and behavioural metrics, providing granular diagnostic information about why some creative works and other creative fails.
The most valuable consumer insights connect patterns across multiple sources: What audiences say (survey), what they do (behaviour), and what they'll respond to (predictive). This triangulation builds robust understanding that informs confident decisions.
Explore the role of consumer insights and analytics in modern advertising
Visual analytics transforms creative assets into structured insights about what drives attention, memory, and emotional response across your marketing.
The process starts with asset analysis: testing representative samples of your creative—ads, packaging, web pages, retail displays—to measure attention patterns, brand and message memory, and emotional responses. Each analysis generates detailed diagnostics about which visual and textual elements perform well and which underperform.
Pattern identification follows: Across dozens or hundreds of tests, patterns emerge. Perhaps bright colours consistently improve shelf standout but reduce comprehension. Maybe human faces drive emotion but sometimes distract from product benefits. Large logos might ensure brand recognition but crowd out messaging. These patterns become strategic principles.
Diagnosis and improvement close the loop: When a specific asset underperforms, visual analytics pinpoint why. Is the CTA not getting attention? Is brand memory weak despite strong overall visibility? Is emotional tone misaligned with campaign goals? Each diagnosis suggests specific optimisations to test.
The cumulative effect is a proprietary understanding of what creative approaches work for your brand, in your categories, with your audiences. This evolves from testing individual assets to building systematic creative intelligence that improves decision-making across teams.
Learn how to generate consumer insights through visual analytics
Over a decade of research has identified five essential drivers of creative success: Attention, Memorability, Comprehension, Persuasion, and Emotion.
Driver interdependence: These five drivers work like links in a chain, influencing one another. Emotive imagery often increases early attention—people notice faces, bold colours, unexpected elements. Sustained attention supports memory formation—the longer someone engages with creative, the more likely they'll remember key elements. Comprehension enables persuasion—if the message isn't clear, it can't convince. Emotion colours everything—positive sentiment enhances brand perception while negative emotion can suppress consideration.
Current state and roadmap: Dragonfly AI currently provides validated predictions for Attention (will people notice?), Memory (will they remember brand and message?), and Emotion (how will they feel?). Persuasion and Comprehension capabilities are coming soon, completing the full framework.
For CPG specifically: Shelf execution is critical—packaging must capture attention in 1-2 seconds against dozens of competitors. Brand memory at point of purchase drives consideration—shoppers who remember your brand from previous exposures are far more likely to choose you. Emotional appropriateness matters—the feeling your packaging or in-store creative evokes should align with consumption occasion and brand positioning.
Understanding how these drivers interact enables smarter creative strategy: Don't just optimise for attention if it sacrifices brand memory. Don't pursue emotional intensity if it creates wrong sentiment. Balance across drivers delivers better outcomes.
Emotion data reveals how creative makes people feel—a critical dimension that attention and memory metrics alone can't capture.
Beyond recall to resonance: A viewer might notice your ad (high attention) and remember your brand (strong memory) but feel neutral or even negative about it. Emotion analysis shows whether you're creating positive associations that support consideration and preference, or generating indifference or aversion that suppresses conversion despite strong awareness.
Sentiment and intensity: Dragonfly AI measures both emotional sentiment (positive, neutral, negative) and intensity (mild to strong). This distinction matters: Highly positive emotion can create memorable, share-worthy creative that builds brand love. Mild positive emotion provides reassurance without overwhelming. Negative emotion has limited appropriate uses (e.g., highlighting problems before offering solutions) but is usually counterproductive.
Image vs copy emotion: Visuals and words trigger different emotional responses. An image might evoke warmth and nostalgia while copy communicates urgency or excitement. Analysing these separately helps teams optimise each element rather than averaging across the entire creative.
Category and context considerations: Appropriate emotion varies by category and campaign goal. A premium beauty brand might aim for aspirational emotion, while a value retailer emphasises satisfaction and smart choices. Seasonal campaigns lean into relevant emotions (joy for holidays, relief for back-to-school). Emotion data ensures your creative lands where you intend.
Learn about effective emotional resonance in CPG advertising
Dragonfly AI's three core drivers provide complementary insights into different dimensions of creative effectiveness.
Attention measures the likelihood that your creative will be noticed in context. This includes visibility (will people see it at all?), clarity (does visual hierarchy guide focus to key elements?), and digestibility (can they process the information efficiently?). High attention doesn't just mean "lots of things to look at"—it means the right elements get noticed in the right order at the right intensity. Think of attention as answering: "Will this creative register with my audience or will it be ignored?"
Memory measures the likelihood that brand, message, and visual elements will be stored and recalled later. This breaks down into image memorability (how distinctive and memorable are the visuals?), brand memory (how strongly will people remember your brand?), and copy memory (will the headline or key message stick?). Memory is especially critical for reach-based campaigns where you need single exposures to drive subsequent consideration. Think of memory as answering: "Will people remember this when it matters?"
Emotion measures the intensity and sentiment of emotional response to both visual and textual elements. This includes how strongly the creative triggers emotion (mild to intense) and whether that emotion is positive, neutral, or negative. Emotion is measured separately for images and copy, then aggregated into overall sentiment. Emotion doesn't just influence brand perception—it affects attention (emotional content often gets noticed) and memory (emotional experiences are more memorable). Think of emotion as answering: "How will people feel about this?"
Viewing across elements: Each driver provides a total-level score for the overall creative, plus element-level breakdowns: memory across text, branding, and visual elements; emotion across text and visuals. This granularity enables precise optimisation.
Use Cases & Applications
Shopper marketing focuses on influencing purchase decisions at the point of sale—whether in physical retail stores, online marketplaces, or anywhere shoppers make buying choices.
Unlike traditional brand marketing that builds awareness and consideration over time, shopper marketing operates in the critical moment when someone is actively evaluating options and ready to buy. This includes in-store displays and promotions, shelf positioning and packaging, point-of-sale materials, retail media (digital ads on retailer sites and apps), and ecommerce product page optimisation.
Shopper marketing is important because it directly impacts conversion at the moment of highest intent. A shopper who remembers your brand from advertising but can't find your product on shelf won't buy. A shopper who clicks your retail media ad but encounters a cluttered product page won't convert. The last mile of marketing—ensuring your brand stands out and communicates value right when purchase decisions happen—often determines whether all your upstream investment pays off.
Key levers: Visibility and standout (does your brand get noticed against competitors?), clarity of value proposition (do shoppers quickly understand why to choose you?), and emotional appropriateness (does in-store presentation align with brand promise?).
Brand and retail collaboration: The most effective shopper marketing aligns brand objectives with retailer goals—driving category growth, improving margin mix, or increasing basket size. Predictive insights help both sides optimise execution before committing to expensive resets or promotional periods.
Visual cues direct attention, simplify decision-making, and trigger emotional responses that influence what shoppers notice, consider, and ultimately purchase.
Salient cues (high-contrast colours, large brand logos, human faces, directional elements like arrows) naturally capture attention. In crowded retail environments—whether a physical shelf or a digital category page—these cues determine whether your product gets considered at all. If your packaging blends into the shelf or your ecommerce thumbnail looks like everyone else's, you've lost before evaluation even begins.
Brand assets (distinctive colours, logos, characters, taglines) trigger recognition and recall. Shoppers who've seen your advertising or purchased your products previously will find you faster if brand cues are consistent and prominent. Weak or inconsistent brand presence forces shoppers to work harder, increasing the chance they'll choose a more visually obvious alternative.
Hierarchy and clarity: Both in-store and online, shoppers make decisions quickly—often 1-3 seconds per product evaluation. Visual hierarchy that clearly communicates brand → product type → key benefit simplifies this process. Cluttered packaging or poorly organised product pages slow decision-making and increase abandonment.
Example: A CPG brand redesigned promotional shelf strips to increase visual salience while maintaining brand consistency. Predictive testing ensured the promotional message didn't overpower brand recognition. Result: +60% promotional sales uplift because shoppers both noticed the offer and remembered the brand.
Ecommerce teams use Dragonfly AI to optimise every visual touchpoint in the customer journey—from homepage to product pages to checkout.
Above-the-fold optimisation: Test whether hero images, value propositions, and calls-to-action achieve sufficient attention and clarity. Predictive heat maps show whether visitors' eyes naturally flow through your intended hierarchy or scatter across competing elements. Validated case studies show 20-70% conversion uplifts from improving above-the-fold layouts to focus attention on conversion-driving content.
Product page hierarchy: Ensure product images dominate attention, key benefits and specifications are noticed without scrolling, trust signals (reviews, guarantees, free shipping) register appropriately, and calls-to-action achieve visual salience without overwhelming the design. Many product pages suffer from "competing priorities syndrome"—every element screams for attention, resulting in cognitive overload and abandonment.
Copy blocks and messaging: It's not enough for copy to be present—it must be noticed and remembered. Dragonfly AI's memory metrics show whether your product benefits, USPs, and reassurance copy will actually stick with visitors or get ignored entirely.
Product images and thumbnails: Test whether your product photography stands out on category pages (especially marketplaces where you compete with dozens of sellers) and whether lifestyle images enhance or distract from product comprehension. The emotional response to imagery affects brand perception and desire.
Iterative improvement: The fastest path to higher conversions is testing multiple variants of key pages, identifying clear winners, implementing changes, then repeating the cycle. Dragonfly AI's 45-second analysis time enables rapid iteration impossible with traditional A/B testing alone.
Specific visual elements consistently influence conversion rates when optimised for attention, memory, and emotional impact.
Product imagery: High-quality images that showcase the product clearly and attractively are foundational. But quality isn't enough—images must dominate attention above the fold and communicate key product attributes visually. Lifestyle images that show the product in use can drive emotional connection but shouldn't overwhelm core product shots.
Calls-to-action (CTAs): "Add to Cart" or "Buy Now" buttons must achieve sufficient visual salience through size, colour contrast, whitespace, and positioning. CTAs that blend into the page or hide below the fold sacrifice conversions. However, oversized or overly aggressive CTAs can reduce trust and sophistication perception.
Trust cues: Reviews, ratings, security badges, return policies, and social proof all influence purchase confidence—but only if noticed. Predictive analysis shows whether these elements register or get lost in clutter.
Concise benefits: Shoppers need to quickly understand why your product solves their need. Bullet points, headlines, or annotated product images that communicate key benefits must be visible and memorable. Dense paragraphs often get ignored entirely.
Visual hierarchy: The order in which elements get attention matters. Ideal hierarchy: product image → headline/benefit → price → CTA → supporting detail. If attention scatters or focuses on low-priority elements, conversion suffers regardless of how good individual components are.
Category-specific considerations: Fashion and lifestyle categories lean more heavily on aspirational imagery and emotional resonance. Electronics and appliances require clear specification communication. Each category has nuances, but the underlying principle is the same: optimise for attention, memory, and appropriate emotion.
Learn ecommerce engagement strategies with methods and examples
Retail and brand teams use Dragonfly AI to optimise packaging design and shelf execution before production, avoiding costly errors and ensuring products stand out in competitive retail environments.
Front-of-pack clarity: Test whether brand name, product variant, and key benefits are noticed in the 1-2 seconds shoppers spend evaluating options. If secondary information (ingredients, certifications, legal copy) dominates attention over brand and USP, the hierarchy needs adjustment before printing thousands of units.
Variant navigation: For brands with multiple SKUs, packaging must enable quick differentiation—can shoppers tell vanilla from chocolate, regular from organic, original from new formula at a glance? Predictive testing shows whether colour coding, imagery, or typography achieves sufficient distinction or creates confusion.
Shelf-read distance: Packaging that's legible close-up may become an indistinct blob from 3-5 metres away—typical sighting distances in supermarkets. Test shelf standout at relevant viewing distances to ensure brand and category cues remain clear.
Placement on shelf: Simulate how your packaging performs at different shelf positions—eye level versus bottom shelf, surrounded by competitors versus in isolation. Predictive heat maps show whether your brand achieves differentiation in realistic retail contexts, not just styled photography.
Dragonfly Mobile capability: The mobile app enables in-store testing of actual shelf execution, measuring attention, emotion, and memory in real retail environments. This bridges predictive insights with ground truth, helping teams validate assumptions and refine execution standards.
Speed and cost savings: Traditional in-store testing requires printing samples, shelf resets, and observational research—weeks and tens of thousands of pounds. Predictive testing delivers comparable insights in seconds, enabling iteration before committing to production.
Testing ads with Dragonfly AI reduces wasted spend and increases sales by optimising ad performance before launch.
Ad testing begins during creative development: Upload concepts or near-final creative to Dragonfly AI and analyse attention patterns (will key elements get noticed?), memory scores (will brand and message be remembered?), and emotional response (does it create the right feeling?). Results appear in approximately 45 seconds.
Compare variants side-by-side to identify clear winners. Look for ads that achieve strong visibility for your brand and offer, deliver memorable brand cues and messaging, and generate appropriate emotional response for your campaign goals (aspirational, reassuring, exciting, etc.).
Use Copilot recommendations to refine underperforming elements: strengthen brand assets if memory is weak, improve visual hierarchy if key messages aren't getting attention, or adjust imagery if emotional tone is misaligned.
Test across formats and channels: An ad that works in social feeds may need adjustments for out-of-home where viewing time and distance differ. Test how creative performs in simulated contexts relevant to your media plan.
Before committing media budget: Validate that your creative achieves minimum effectiveness thresholds. A 5-10% improvement in creative effectiveness can translate to 50-100%+ improvements in campaign ROI once media spend is factored in. Testing before launch ensures you're amplifying strong creative, not wasting budget on underperforming assets.
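The leverage behind this claim is simple arithmetic: when media spend is fixed, a modest lift in revenue flows straight through to net ROI. A minimal sketch, using hypothetical figures rather than Dragonfly AI data:

```python
# Illustrative only: how a small creative lift amplifies net ROI
# when media spend is held constant. All figures are hypothetical.

def net_roi(revenue: float, media_spend: float) -> float:
    """Net return on ad spend: profit as a fraction of spend."""
    return (revenue - media_spend) / media_spend

media_spend = 100_000                      # fixed media budget
baseline_revenue = 110_000                 # campaign near break-even
lifted_revenue = baseline_revenue * 1.10   # +10% revenue from better creative

baseline = net_roi(baseline_revenue, media_spend)
lifted = net_roi(lifted_revenue, media_spend)
print(f"ROI: {baseline:.0%} -> {lifted:.0%} "
      f"(+{(lifted - baseline) / baseline:.0%} relative)")
# prints: ROI: 10% -> 21% (+110% relative)
```

Because the media cost doesn't change, a 10% revenue lift more than doubles net ROI in this near-break-even scenario, which is where the "50-100%+" leverage comes from.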
Agencies use Dragonfly AI to maintain creative quality and effectiveness across clients, channels, and markets while accelerating workflows and reducing revision cycles.
Briefing and concept development: Start campaigns with clear creative effectiveness targets—minimum attention clarity scores, brand memory thresholds, emotional tone requirements. This replaces subjective debate with objective standards, aligning clients and creative teams around measurable goals.
Creative QA and optimisation: Test concepts and near-final creative before presenting to clients or launching campaigns. Identify and fix weaknesses—unclear hierarchy, weak brand presence, misaligned emotion—before clients see the work or budget is committed. This reduces revision cycles and demonstrates strategic rigour.
Omnichannel consistency: As campaigns adapt across social, display, OOH, retail media, and in-store, test every significant variation. Ensure brand consistency while optimising for each context's unique requirements (viewing distance, competitive clutter, channel conventions).
Multi-client management: Workspace structures allow agencies to maintain separate projects for each client with appropriate governance. Store creative benchmarks, best practices, and learnings for each client relationship, building proprietary insight that makes you indispensable.
Client reporting and education: Share predictions and optimisation recommendations with clients as evidence of strategic value. Position the agency as not just creators but validators of creative effectiveness, reducing subjective approval friction and accelerating decision-making.
Partnership with Creative X: While Dragonfly AI ensures creative is effective (will it capture attention, be remembered, evoke the right emotion?), platforms like Creative X ensure creative is platform-ready (does it meet technical specs, brand guidelines, channel best practices?). Used together, these capabilities provide comprehensive creative quality assurance.
Integration & Technical
Dragonfly Connect is the Creative Data API that integrates Dragonfly AI's predictive insights into your existing creative workflows, tools, and business intelligence systems.
Rather than working in isolation, Dragonfly Connect enables automated creative testing at scale. Common integration patterns include: digital asset management systems (DAMs) automatically routing new creative through Dragonfly AI for analysis before approval, demand-side platforms (DSPs) scoring creative effectiveness before campaign launch, creative tools and workflows embedding predictions directly in design environments, and business intelligence dashboards surfacing creative performance alongside campaign metrics.
Sample use cases: A global brand might integrate Dragonfly Connect into their asset approval workflow—every creative submitted for a campaign is automatically tested and scored, with only assets meeting minimum thresholds moving to production. An agency might connect predictions to project management tools, surfacing creative effectiveness data alongside timelines and budgets. An ecommerce team might pipe product page designs through Dragonfly AI automatically whenever layouts are updated.
The goal is embedding creative effectiveness as a standard quality gate rather than an occasional check. When predictions are automatically available wherever creative decisions happen, teams make better choices by default.
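The quality-gate pattern described above can be sketched in a few lines. The endpoint URL, payload fields, score names, and thresholds below are illustrative assumptions, not the real Dragonfly Connect API contract; consult the official API documentation for actual integration details.

```python
# Hypothetical sketch of an automated creative quality gate.
# Endpoint, payload fields, score names, and thresholds are
# assumptions for illustration, NOT the real Dragonfly Connect API.
import json
from urllib import request

MIN_SCORES = {"attention": 0.6, "brand_memory": 0.5}  # example thresholds

def meets_thresholds(scores: dict, minimums: dict = MIN_SCORES) -> bool:
    """True only if every required score meets its minimum."""
    return all(scores.get(k, 0.0) >= v for k, v in minimums.items())

def analyse_asset(asset_url: str, api_key: str,
                  endpoint: str = "https://api.example.com/v1/analyse") -> dict:
    """Submit an asset for analysis and return its scores (hypothetical API)."""
    payload = json.dumps({"asset_url": asset_url}).encode()
    req = request.Request(endpoint, data=payload, headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    })
    with request.urlopen(req) as resp:
        return json.load(resp)  # e.g. {"attention": 0.72, "brand_memory": 0.55}
```

In a DAM or approval workflow, `meets_thresholds(analyse_asset(url, key))` would decide whether an asset moves to production or is routed back for optimisation.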
Dragonfly AI integrates with the tools where creative is managed, approved, and launched—digital asset management systems, demand-side platforms, creative development tools, and business intelligence dashboards.
DAM integrations: Connect to platforms like Bynder, Brandfolder, or Adobe Experience Manager so that creative assets automatically receive effectiveness scores as they're uploaded. This enables governance—only creative meeting effectiveness thresholds moves to production—while maintaining existing approval workflows.
DSP and ad platform connections: Integrate with demand-side platforms and ad networks to score creative before campaigns launch. This prevents wasted spend on underperforming creative and enables automated variant selection (launch the ad with highest predicted effectiveness).
Creative tools: Embed Dragonfly AI insights directly into design environments like Adobe Creative Cloud or Figma so designers see effectiveness predictions as they work, enabling real-time optimisation.
BI and reporting: Surface creative effectiveness data in business intelligence dashboards alongside campaign performance metrics. Correlate predicted attention, memory, and emotion scores with actual CTR, conversion, and ROAS to validate predictions and inform future creative strategy.
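The correlation step in such a dashboard can be sketched with a plain Pearson coefficient. The per-ad scores below are synthetic placeholders, not real campaign data, and this is not a documented Dragonfly AI integration:

```python
# Sketch: correlate predicted attention scores with observed CTR.
# The per-ad values are synthetic placeholders, not real campaign data.
from math import sqrt

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

predicted_attention = [0.42, 0.55, 0.61, 0.70, 0.78]  # per-ad predictions
observed_ctr = [0.008, 0.011, 0.012, 0.015, 0.017]    # per-ad in-market CTR

r = pearson(predicted_attention, observed_ctr)
print(f"Pearson r = {r:.2f}")  # strongly positive in this synthetic sample
```

A consistently high correlation across real campaigns would validate the predictions; a weak one would prompt investigation into which formats or contexts the model handles less well.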
Creative X partnership: Creative X ensures ads are platform-ready by checking technical specifications, brand guideline compliance, and channel best practices. Dragonfly AI ensures ads are effective by predicting attention, memory, and emotional impact. Together, these platforms provide comprehensive creative quality assurance—Creative X answers "Is this ready to run?" while Dragonfly AI answers "Will this actually work?"
Dragonfly AI connects to product information management systems, content management systems, retail media platforms, and digital shelf analytics to optimise ecommerce and retail execution at scale.
PIM/CMS hooks: Integrate with product information management or content management systems so that product page designs, packaging updates, or category pages are automatically analysed when published or updated. This enables quality gates: only layouts meeting effectiveness standards go live.
Digital shelf analytics (DSA): Combine Dragonfly AI's predictive insights with digital shelf analytics platforms that monitor in-market performance. Correlate predicted shelf standout with actual conversion rates to validate which creative approaches drive results in specific categories and retailers.
Retail media: Connect to retail media platforms (Amazon Advertising, Walmart Connect, Instacart Ads) to score creative before launching campaigns on retailer properties. Given how competitive retail media has become, small improvements in creative effectiveness can meaningfully impact advertising efficiency.
PDP/PLP workflows: Integrate into product detail page and product listing page development workflows. As designs are iterated, Dragonfly AI automatically scores attention clarity, brand memory, and emotional appropriateness, helping teams optimise before A/B testing or launch.
These integrations shift creative optimisation from occasional project work to systematic quality assurance embedded in existing workflows.
Explore Dragonfly Connect for ecommerce and retail integrations
Yes, Dragonfly Mobile extends creative effectiveness measurement into real-world environments, specifically in-store and on-trade settings.
Dragonfly Mobile measures attention, emotion, and memory in actual retail environments, helping teams validate predicted performance against ground truth. This is especially valuable for packaging and shelf execution where context significantly affects performance—lighting conditions, viewing angles, competitive clutter, and shopper behaviour all influence whether creative works as intended.
Key capabilities: Test how packaging performs on actual retail shelves, validate whether promotional materials get noticed in real store environments, measure emotional response to in-store creative in context, and gather memory metrics from shoppers after in-store exposure.
Use cases: Validate planogram execution before rolling out to all stores, test shelf standout for new product launches in realistic conditions, assess promotional display effectiveness in actual trade environments, and bridge predictive insights with real-world validation for high-stakes decisions.
Dragonfly Mobile is available to Dragonfly AI customers. For detailed guides and tutorials on using the mobile app, visit the Dragonfly Academy mobile resources.
Getting Started
Booking a demo with Dragonfly AI is straightforward and designed to show you how the platform works with your actual creative challenges.
Booking steps:
- Visit dragonflyai.co/demo and complete the brief contact form
- Select a time that works for your schedule (demos typically run 30-45 minutes)
- Receive calendar confirmation with prep guidelines
What to prepare: Bring 2-4 examples of creative assets you'd like to analyse during the demo—these could be ads you're currently running, packaging designs you're considering, product pages you want to optimise, or any other creative where you need effectiveness insights. The more relevant the examples, the more valuable the demo. Also prepare specific questions about your use case: How would this fit our workflow? How do we handle multiple markets? Can we integrate with our existing tools?
What happens next: A Dragonfly AI specialist will walk you through live analysis of your creative, show how attention, memory, and emotion insights work, discuss how the platform fits your specific use case (ecommerce, packaging, advertising, retail media), and provide transparent pricing based on your needs.
The goal is ensuring Dragonfly AI is genuinely the right fit for your team before moving forward.
A Dragonfly AI demo session is a consultative conversation where you see the platform analyse your actual creative in real-time while learning how it fits your specific workflows and challenges.
Typical agenda:
Introduction and context (5 minutes): You share your current creative testing process, key challenges (speed, cost, scale, objectivity), and what you're hoping to achieve with better creative effectiveness insights.
Live analysis of your assets (15-20 minutes): Upload 2-4 of your creative examples and watch Dragonfly AI generate attention heatmaps, memory scores, and emotion analysis in approximately 45 seconds per asset. The specialist walks through what each metric means, how to interpret results in your specific context, and what optimisations the insights suggest.
Platform capabilities walkthrough (10 minutes): See Studio 3.0 features including Copilot AI recommendations, unlimited parallel testing, how to compare variants, and how insights translate to actionable improvements.
Integration and workflow discussion (5-10 minutes): Discuss how Dragonfly AI would integrate into your existing processes—asset approval workflows, creative testing protocols, campaign development cycles, multi-market coordination, agency/client collaboration, etc.
Pricing and next steps (5 minutes): Transparent pricing conversation based on your team size, use case, and requirements. Clear explanation of what's included and next steps if you want to proceed.
Outcomes: You'll leave having seen the platform work on your actual assets, with a clear understanding of whether Dragonfly AI solves your specific creative effectiveness challenges and what ROI you can expect.
Brands using Dragonfly AI consistently see measurable improvements in campaign performance, conversion rates, and speed to market alongside substantial cost savings.
Aggregate performance impacts:
- Ecommerce conversion: +20-70% in validated A/B tests after optimising product pages and digital shelf presence using attention and memory insights
- Email ROI: +263% for a telecommunications client who optimised email creative based on attention clarity and emotional resonance
- Design testing cost: -87% for an enterprise client who replaced traditional research processes with predictive analysis
- In-store promotional sales: +60% for a CPG brand that optimised shelf execution and point-of-sale materials
Speed improvements: Teams consistently report 10-20x faster creative iteration cycles. What previously took weeks (traditional research, revision, re-testing) now happens in days or hours. This velocity compounds—more variants tested means more learning, better creative standards, and continuously improving performance.
Risk reduction: By identifying and fixing creative weaknesses before launch, brands avoid wasted production costs and media spend on underperforming assets. For high-budget campaigns, preventing a single failure often justifies annual platform costs.
Strategic confidence: Perhaps the most valuable outcome is moving from subjective creative debate to objective, data-driven decisions. Teams make confident choices faster, reduce approval friction, and align stakeholders around what actually drives creative effectiveness.
Dragonfly AI continuously improves through ongoing research partnerships, regular model updates, back-testing against new data, and transparent validation.
Ongoing research partnership: Our collaboration with Queen Mary University of London continues to advance the science underlying the platform. As new research emerges about visual attention, memory formation, and emotional response, we incorporate validated findings into model refinements.
Regular model updates: We roll out improvements on a regular basis, each validated against benchmark datasets and real-world performance before deployment. Updates are transparent—we document what changed, why, and what impact customers should expect.
Back-testing and validation: Every model update is tested against historical data to ensure new versions maintain or improve accuracy without introducing regressions. We continuously validate predictions against new primary research (eye-tracking studies, memory tests, emotion surveys) to ensure the models stay calibrated.
Expanding capabilities: Current focus includes completing the five-driver framework (adding Persuasion and Comprehension to complement Attention, Memory, and Emotion), expanding language support for non-alphabet-based scripts, improving context simulation for different viewing environments, and deepening integrations with creative workflows.
Transparency: We maintain public documentation about our validation methodology, share accuracy benchmarks, and publish peer-reviewed research. This transparency enables customers to trust and verify platform performance rather than treating it as a black box.