Minimizing User Research Fraud in the Age of Agentic AI

Original source: hackernews · Summarized and analyzed by Genesis Park

Summary

Participant fraud, when people misrepresent themselves to qualify for paid studies, has long threatened the validity and reliability of user research, and LLMs and agentic AI are making it far easier to scale. The article defines participant fraud, distinguishes individual "professional testers" from organized fraud rings, explains the business risk of decisions built on bad data, and reviews the signals researchers have traditionally used to catch fraudulent participants: IP addresses, browser signals, SMS verification, speed traps and attention checks, and the "humanness" of open-ended responses.

Body

What is fraud in user research, and why should we care about it? Before we get into the nitty-gritty of how LLMs and agentic AI are affecting our ability to run valid and reliable user research, let’s get a shared understanding of what fraud actually means in the context of research.

Participant fraud is when someone pretends to be someone they aren’t to qualify for research when they aren’t actually part of the target demographic. Typically, they don’t do this to bung up your study, but because they need or want the income. They may not realize what fake results will do to the study’s reliability or validity; they just want the incentive, which can be a significant amount of money. For some participants, the incentive could be more than they make in a week, a month, or even a year.

There are typically two kinds of fraud: the individual “professional tester” and the fraud ring. Folks who are working alone usually rely on distributed networks to learn how to “game” screeners to qualify for studies. They’re often members of several online panels on platforms like Respondent.io, Usertesting.com, etc., and use them as a source of income. Fraud rings are similar, except that they operate as a group, pooling their earnings to share among their members.

So, why should we care about this? Well, we use data from user research to make business decisions that shape design direction and strategy. If the data we’re collecting doesn’t accurately reflect the customers and segments we’re hoping to serve, it introduces significant risk to the business. According to dtect, “The cost of preventing fraud up front is much less than dealing with bad data because it leads to rework, lost trust, and poor business decisions.”

Fraud in user research isn’t particularly new. We’ve had to be on the lookout for fraudulent participants for quite some time. But if you’re new to user research or UX in general, this may be upsetting news.
Before we get into what to do about it, let’s talk a bit about what it looks like in practice, both before and after the introduction of LLMs and agentic AI, so you know what to look out for.

What did fraud look like before LLMs and agentic AI?

Fraud was very manual in the “before times.” Individuals had to work harder to learn how to game screeners. They had to do a lot of their own work to find others who could teach them how to be “professional testers.” Fraud rings had to gather in person, usually in a conference-style room, with multiple devices at the ready. Everything was limited by the number of people and devices you had at your disposal and the amount of time you had.

When social media channels like Twitter, TikTok, and YouTube arrived, things got a bit easier, because folks could broadcast their tips and tricks for gaming screeners and getting accepted for studies to a wider audience. More and more creators began making content on how to qualify for research. Individuals and fraud rings also began to use social media to pick up on scheduling links and blast them to their networks. Calendly links, while super useful, were (and are) really vulnerable to being picked up by bad actors when shared publicly.

So, how did we catch fraudulent participants back then? There were several reliable signals we could use to determine whether a participant was fraudulent:

- IP addresses
- Browser signals
- SMS verification
- Speed traps or attention checks
- “Human-sounding” responses

IP addresses used to be a reliable way to catch someone who wasn’t in your target demographic. If you were looking for folks from Canada but a participant’s IP address was in Germany, for example, it would be an immediate flag that they might not be who they said they were. We could also check for duplicate IP addresses and block IPs that showed up more than once; a duplicate used to be a sign that someone was using different browsers on the same device to complete a study more than once. We could use other browser signals as well, like screen resolution, browser and OS versions, or installed fonts and plugins, to make sure each respondent was a unique user with a genuine identity.

SMS verification was also a reliable tool, as it was harder to spoof and therefore great for authentication.

Attention checks and speed signals were helpful in making sure you were dealing with an authentic, human response. If the completion time was way too fast or they “failed” the attention check, it was a signal that the respondent might be a “speeder,” someone racing through your test just to get to the incentive.

And finally, we could look at the “humanness” of the responses. Were the responses relevant to the questions and answered in a sensible way? Was the response coherent, or was it a keysmash (something like “euhfkusdfuhiuw”)? Were there “human” typos, like “teh” instead of “the”? Was the person responding to your questions in…
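The IP-based checks described above can be sketched as a simple screening pass over survey responses. This is an illustrative sketch, not from the article: the response dictionary keys (`ip`, `ip_country`) and the flag names are assumptions about how a panel tool might expose this data.

```python
from collections import Counter

def flag_ip_signals(responses, target_country):
    """Flag respondents whose IP signals look suspicious.

    `responses` is a list of dicts with illustrative keys:
    {"id": str, "ip": str, "ip_country": str}.
    Returns a dict mapping respondent id -> list of flag reasons.
    """
    # Count how many responses share each IP address.
    ip_counts = Counter(r["ip"] for r in responses)
    flags = {}
    for r in responses:
        reasons = []
        if r["ip_country"] != target_country:
            # e.g. a Germany IP in a Canada-only study
            reasons.append("country_mismatch")
        if ip_counts[r["ip"]] > 1:
            # same IP on multiple responses: possibly one person,
            # several browsers
            reasons.append("duplicate_ip")
        if reasons:
            flags[r["id"]] = reasons
    return flags
```

Real screening tools layer this with the browser signals the article mentions (screen resolution, OS version, fonts and plugins); a single duplicate IP on its own can also be a shared office or VPN, so these flags warrant review rather than automatic rejection.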
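The speed-trap and “keysmash” checks can be sketched in a few lines as well. The 0.3 speed ratio, the vowel-ratio threshold, and the no-spaces rule below are illustrative assumptions; production tools combine many more signals than this.

```python
def is_speeder(completion_seconds, median_seconds, ratio=0.3):
    """Flag a completion time far below the study median.

    The 0.3 cutoff is an illustrative threshold, not a standard.
    """
    return completion_seconds < median_seconds * ratio

def looks_like_keysmash(text, min_vowel_ratio=0.2):
    """Crude gibberish check for open-ended answers.

    Heuristics (both assumptions): genuine English prose has a
    reasonable share of vowels, and a long run of characters with
    no spaces (like "euhfkusdfuhiuw") is suspicious.
    """
    letters = [c for c in text.lower() if c.isalpha()]
    if not letters:
        return True  # empty or punctuation-only answer
    vowel_ratio = sum(c in "aeiou" for c in letters) / len(letters)
    no_spaces = " " not in text.strip() and len(text) > 12
    return vowel_ratio < min_vowel_ratio or no_spaces
```

Note that these heuristics only catch the lazy fraud the article describes from the pre-LLM era; an LLM-generated answer is fluent, vowel-rich, and unhurried, which is exactly why these older signals are losing their reliability.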

This analysis was written by the Genesis Park editorial team with the help of AI. The original article is available via the source link.
