The Op4G/Slice indictment sent shockwaves through the research industry.
Eight people were indicted.
A $10 million fraud scheme.
Fake survey takers using VPNs and coaching scripts, hiding in plain sight for almost a decade.
And here’s the uncomfortable truth: The fraud didn’t come from bots or AI.
It came from a real, U.S.-based panel provider that many clients trusted.
So, what happens when you have no choice but to use third-party online sample?
Because let’s face it, sometimes you do.
Start by Accepting the Risk
No sample provider, no matter how big or established, is immune to fraud.
If the Slice case proved anything, it’s that fraud can operate inside legitimate companies, undetected for years.
So, the first step is to acknowledge this: Buying online sample always comes with risk.
Your job is to reduce that risk as much as possible and not pretend it doesn’t exist.
Questions to Ask Every Online Sample Provider
Before you launch, ask these questions directly and pay attention to how specific the answers are.
- How do you verify respondent identity at the point of entry?
  - Real-time validation? Recontact checks? Is there a human step anywhere?
- How often are panelist profiles revalidated?
  - Once at signup, or on a rolling basis? If rolling, what is the cadence?
- What fraud detection systems are in place, and how do they work?
  - What signals are used? Can they detect AI-generated responses?
- Do you use third-party partners to fulfill quotas?
  - If yes, who are they, and how are they vetted?
- Can we see open-end responses in real time to spot potential red flags?
  - If not, why not?
If your provider can’t answer these questions clearly (or gets defensive), you’ve already learned something important.
Add Your Own Layers of Protection
Even if you trust your online sample provider, don’t hand over quality control. It’s your responsibility to ensure the insights you deliver are based on real people.
Build your own systems around the sample.
Here’s what we do at PeopleMetrics when we must use third-party online sample providers (a rough sketch of how a few of these checks can be automated follows the list):
- We review open-ended responses manually, early and often … long, unrealistic open ends are a hallmark of AI-generated content
- We look for repeated language patterns and other signs of AI-generated answers
- We insert known logic traps to catch low-effort or scripted fraud
- We recontact a small % of respondents to verify they are who they say they are
- We cut bad data fast … even if it delays the project
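A few of these checks can be partially automated before the manual pass. The sketch below is illustrative only: it assumes a pandas DataFrame with hypothetical columns (respondent_id, duration_seconds, open_end), and the thresholds are placeholders you would tune to your own questionnaire.

```python
# Illustrative sketch of automated pre-screening, assuming a pandas DataFrame
# with hypothetical columns: respondent_id, duration_seconds, open_end.
# Thresholds are placeholders; tune them to your own questionnaire.
import pandas as pd
from difflib import SequenceMatcher


def flag_speeders(df: pd.DataFrame, min_seconds: int = 180) -> pd.Series:
    """Flag completes that finished implausibly fast."""
    return df["duration_seconds"] < min_seconds


def flag_near_duplicate_open_ends(df: pd.DataFrame, threshold: float = 0.9) -> pd.Series:
    """Flag open ends that closely match an earlier response,
    a common sign of scripted or copy-pasted answers."""
    texts = df["open_end"].fillna("").str.lower().tolist()
    flags = [False] * len(texts)
    for i in range(len(texts)):
        for j in range(i):
            if SequenceMatcher(None, texts[i], texts[j]).ratio() >= threshold:
                flags[i] = True
                break
    return pd.Series(flags, index=df.index)


def flag_essay_length(df: pd.DataFrame, max_words: int = 120) -> pd.Series:
    """Flag unusually long, essay-like open ends for manual review."""
    return df["open_end"].fillna("").str.split().str.len() > max_words


# Made-up example: respondent 1 is a speeder, respondent 2's open end
# is a near-duplicate of respondent 1's.
df = pd.DataFrame({
    "respondent_id": [1, 2, 3],
    "duration_seconds": [95, 420, 380],
    "open_end": [
        "Great product, would recommend.",
        "Great product, would recommend!",
        "The onboarding flow was confusing on mobile.",
    ],
})
df["needs_review"] = (
    flag_speeders(df) | flag_near_duplicate_open_ends(df) | flag_essay_length(df)
)
print(df[["respondent_id", "needs_review"]])
```

Nothing here replaces the manual review; it just surfaces the completes most worth a human look.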
Yes, it’s extra work.
But it’s worth it. Bad data isn’t just wasted budget; it’s unethical to give our clients recommendations we can’t be sure are based on human responses.
Watch for These Red Flags in the Data
🚩 Open ends that are too clean, too vague, too long, or overly agreeable
🚩 Unusual spikes in completion speed or drop-off
🚩 Suspiciously uniform device/browser data
🚩 Multiple respondents with similar phrasing or structure
🚩 Outliers that don’t show up in other data sources (like CX or Custom Research Panels)
If you see any of these, pause. Investigate. And be ready to cut.
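A couple of these red flags lend themselves to simple quantitative checks. The sketch below is a rough illustration, not a production fraud detector; the column names (user_agent, duration_seconds) and thresholds are assumptions for the example.

```python
# Rough illustration of two red-flag checks, assuming hypothetical
# user_agent and duration_seconds columns. Not a production fraud detector.
import pandas as pd


def top_device_share(df: pd.DataFrame) -> float:
    """Share of completes from the single most common device/browser
    signature. A very high share suggests suspiciously uniform traffic."""
    return df["user_agent"].value_counts(normalize=True).iloc[0]


def speed_outliers(df: pd.DataFrame, z: float = 2.0) -> pd.Series:
    """Flag completes whose duration sits far below the sample mean."""
    durations = df["duration_seconds"]
    return (durations - durations.mean()) / durations.std() < -z


# Made-up example: 8 of 10 completes share one browser signature,
# and one respondent finished in 20 seconds.
df = pd.DataFrame({
    "user_agent": ["Chrome/124 Windows"] * 8 + ["Safari/17 iOS"] * 2,
    "duration_seconds": [20, 300, 310, 280, 295, 305, 290, 285, 300, 295],
})
print(f"Top device/browser share: {top_device_share(df):.0%}")  # 80%
print(f"Speed outliers flagged: {speed_outliers(df).sum()}")    # 1
```

In a real project, checks like these should feed a review queue rather than trigger automatic cuts; a human still confirms before any completes are removed.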
It’s Not Just About the Data … It’s About Accountability
When you use third-party online sample, you’re responsible for the integrity of that data, not the provider!
If the data’s wrong, your decisions are wrong.
And it’s your brand, your team, and your leadership on the line.
So be relentless about it.
Make quality visible.
Make fraud detection part of your process.
And don’t be afraid to walk away from a partner who can’t keep up.
Bottom Line
Sometimes, third-party sample is necessary.
But it should never mean blind trust.
The Op4G/Slice case showed us that even well-known providers can become a liability.
And that fraud isn’t always loud; sometimes it blends in perfectly until it’s too late.
So, if you’re going to use online sample, use it with your eyes wide open.
Ask the hard questions.
Build your own safeguards.
And remember … you can outsource sample, but not responsibility!
Up next:
Post 8: The True Cost of Bad Data