At Human Made Machine (HMM), we help you understand how effective your ads are with our creative pre-testing solutions. We define “good AI” as very human - our HMM AI predictions are driven by millions of real people who have seen ads in our media environments and participated in our brand surveys.
But keeping insights human-centered is a moving target. As AI tools become more accessible, their use among survey-takers has grown, contributing to a rise in survey fraud. Promisingly, survey fraud - long overlooked by legal systems - is now being targeted by the U.S. Department of Justice. Yet from bots and click farms to AI-generated responses and low-quality human input, the threat to data quality keeps growing. For those of us who rely on genuine human input, it's essential to understand how these risks affect data quality - and what can be done to eliminate them.
In today’s blog, we focus on how we maintain high-quality, human-centered creative insights through robust quality control measures that prevent, detect, and remove AI-generated and low-quality human responses.
Poor-quality partners lead to poor-quality data, so finding the best one matters. We work with the world’s largest global sample providers, as well as partners with specialist market coverage. They are the foundation of credible, high-quality insights.
We source and assess the reliability of these partners’ sample through screening and testing:
While no provider is entirely fraud-free, a trustworthy partner should clearly explain their quality standards and control methods. These early conversations help us assess how dependable they’ll be when we identify fraud, and how strong their existing processes are.
Still, real-world performance can vary. That’s why we test every partner for audience representation and run our own 20-step quality check to maximize accuracy.
At HMM, we assess the specific fraud and quality risks linked to different target audiences, since different groups show different fraud patterns.
Understanding these nuances allows us to identify vulnerabilities early and embed the right checks into our surveys before we start recruiting the audience.
Survey questions can do more than collect insights - they can expose low-quality data.
We design our surveys to detect inconsistencies and inattention through checks embedded in the questionnaire itself.
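To make this concrete, here is a minimal sketch of two common in-survey checks - an attention ("trap") question with a known correct answer, and a consistency pair where two related answers should not contradict each other. The question IDs and answer values are hypothetical illustrations, not HMM's actual survey design.

```python
# Illustrative in-survey data-quality checks. All question IDs
# (q_trap, q_tv_hours, q_saw_ad_on_tv) are hypothetical examples.

def check_attention(response: dict) -> bool:
    # Trap question: the text instructs the respondent to select a
    # specific option; any other answer signals inattention.
    return response.get("q_trap") == "strongly_agree"

def check_consistency(response: dict) -> bool:
    # A respondent who says they never watch TV should not also
    # report having seen the ad on TV.
    if response.get("q_tv_hours") == "never":
        return response.get("q_saw_ad_on_tv") != "yes"
    return True

def passes_in_survey_checks(response: dict) -> bool:
    # A response must clear every embedded check to count as clean.
    return check_attention(response) and check_consistency(response)
```

Checks like these are cheap to run at collection time, before any back-end screening happens.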
Strong back-end security is essential in survey research, so we enforce a range of technical controls behind the scenes.
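One widely used back-end control is rejecting duplicate completions from the same device or network. A minimal sketch, assuming a fingerprint built from IP address, user agent, and screen size (the exact fields any real system uses would differ):

```python
import hashlib

# Illustrative duplicate-completion check. The fingerprint fields
# (IP, user agent, screen size) are assumed examples.

seen_fingerprints: set = set()

def fingerprint(ip: str, user_agent: str, screen: str) -> str:
    # Hash the combined attributes so raw identifiers are not stored.
    raw = f"{ip}|{user_agent}|{screen}"
    return hashlib.sha256(raw.encode()).hexdigest()

def is_duplicate(ip: str, user_agent: str, screen: str) -> bool:
    # True if this fingerprint has already completed the survey.
    fp = fingerprint(ip, user_agent, screen)
    if fp in seen_fingerprints:
        return True
    seen_fingerprints.add(fp)
    return False
```

The first completion from a fingerprint is accepted; any later one from the same fingerprint is flagged as a duplicate.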
There is no single check that will remove all fraud, but each one adds a layer of protection. We have tested the efficacy of each of these checks, and our data shows that, in combination, they dramatically reduce AI-generated and poor-quality survey responses.
*Speeders: respondents who rush through without reading. Laggers: respondents who pause for unusually long, possibly to look up answers. Spikes: large surges in completions during overnight hours, or sharp drops in participation, often signal fraud.
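The timing signals defined in the footnote above can be sketched as simple flags. The thresholds here - a third of, and three times, the median completion time, and a 1am-5am "overnight" window - are illustrative assumptions, not HMM's actual cut-offs.

```python
from statistics import median

# Illustrative timing-based fraud flags. Thresholds are assumptions.

def flag_speeders_and_laggers(durations_sec: list) -> dict:
    # Speeders finish far faster than the median respondent;
    # laggers take far longer, possibly looking up answers.
    med = median(durations_sec)
    return {
        "speeders": [d for d in durations_sec if d < med / 3],
        "laggers": [d for d in durations_sec if d > med * 3],
    }

def overnight_share(completion_hours: list) -> float:
    # Share of completions between 1am and 5am local time;
    # a large spike here is a classic fraud signal.
    overnight = sum(1 for h in completion_hours if 1 <= h <= 5)
    return overnight / len(completion_hours)
```

On its own, any one of these flags is noisy - which is exactly why they need to be weighed together, as described next.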
Even the best sample provider will have inconsistencies. Not every survey-taker provides thoughtful, high-quality data. And relying on any single flag in isolation can cause over-cleaning, driving up costs and reducing true audience coverage.
After over 10 years of surveying millions of people, we’ve developed a proprietary algorithm that evaluates these quality signals simultaneously, weighing them together rather than acting on any one flag alone.
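The idea of scoring signals together, rather than excluding on any single flag, can be sketched as a simple weighted model. The flag names, weights, and exclusion threshold below are hypothetical illustrations of the approach, not HMM's proprietary algorithm.

```python
# Illustrative combined quality scoring. Weights and threshold
# are assumed values for the sketch.

WEIGHTS = {
    "failed_attention": 0.4,
    "speeder": 0.3,
    "duplicate_fingerprint": 0.5,
    "gibberish_open_end": 0.4,
}

def quality_score(flags: set) -> float:
    # Sum the weights of every flag raised against a response.
    return sum(WEIGHTS.get(f, 0.0) for f in flags)

def should_exclude(flags: set, threshold: float = 0.6) -> bool:
    # A single soft flag (e.g. only "speeder") stays below the
    # threshold; multiple corroborating flags trigger removal.
    return quality_score(flags) >= threshold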
From knowing your audience to securing your systems and processes, protecting data quality is a full-spectrum effort.
Through our commitment to combating fraud, HMM delivers human-centered creative insights that our clients can trust.
Want to learn more about how to maximize the return on your ad spend with effective creative? Connect with us and request a demo.