Dishonest research about dishonesty

Get your science fix here: research, quackery, activism and all the rest
User avatar
sTeamTraen
After Pie
Posts: 2553
Joined: Mon Nov 11, 2019 4:24 pm
Location: Palma de Mallorca, Spain

Dishonest research about dishonesty

Post by sTeamTraen » Fri Aug 20, 2021 10:42 pm

A study of [dis]honesty turns out to have been faked. Post your best pictures of irons here.

The only question is who faked the data: The (now) world-famous researcher with multiple best-selling pop psych/business books who charges $50-100k for a talk and is currently giving evasive and contradictory answers about what happened, or some random clerk in an insurance company who had absolutely no reason to do so?

(Disclosure: I was not involved in the initial investigations, but as of a couple of days ago I have acquired a modest role in the follow-up.)
Something something hammer something something nail

Allo V Psycho
Catbabel
Posts: 734
Joined: Sat Nov 16, 2019 8:18 am

Re: Dishonest research about dishonesty

Post by Allo V Psycho » Sat Aug 21, 2021 10:20 am

sTeamTraen wrote:
Fri Aug 20, 2021 10:42 pm
A study of [dis]honesty turns out to have been faked. Post your best pictures of irons here.

The only question is who faked the data: The (now) world-famous researcher with multiple best-selling pop psych/business books who charges $50-100k for a talk and is currently giving evasive and contradictory answers about what happened, or some random clerk in an insurance company who had absolutely no reason to do so?

(Disclosure: I was not involved in the initial investigations, but as of a couple of days ago I have acquired a modest role in the follow-up.)
From the buzzfeed article:
In a statement to the blog, Francesca Gino, a Harvard Business School professor and one of the authors, wrote, “I was not involved in conversations with the insurance company that conducted the field experiment, nor in any of the steps of running it or analyzing the data.” Another author, Nina Mazar, then at the University of Toronto and now a marketing professor at Boston University, told the blog, “I want to make clear that I was not involved in conducting the field study, had no interactions with the insurance company, and don’t know when, how, or by whom exactly the data was collected and entered. I have no knowledge of who fabricated the data.”
So, why exactly were they authors on the paper?

User avatar
Sciolus
Dorkwood
Posts: 1313
Joined: Mon Nov 11, 2019 6:42 pm

Re: Dishonest research about dishonesty

Post by Sciolus » Sat Aug 21, 2021 6:34 pm

Allo V Psycho wrote:
Sat Aug 21, 2021 10:20 am
So, why exactly were they authors on the paper?
The paper says:
Author contributions: L.L.S., N.M., F.G., D.A., and M.H.B. designed research; L.L.S., F.G., and D.A. performed research; N.M., F.G., and D.A. analyzed data; and L.L.S., N.M., F.G., D.A., and M.H.B. wrote the paper.
So FG is either being untruthful in the paper or in her statement. NM's statement is consistent with the paper.

I ought to be shocked that they claim not to have the original data. If it's not dishonest it's pretty incompetent (how do you even permanently delete an email without it being backed up somewhere?). Unfortunately I have struggled to get my colleagues to realise that it's essential to maintain clean archive copies of all input data to a project, and I'm not surprised that standards of good practice in academia are even worse than in the commercial world.

User avatar
Bird on a Fire
Princess POW
Posts: 10137
Joined: Fri Oct 11, 2019 5:05 pm
Location: Portugal

Re: Dishonest research about dishonesty

Post by Bird on a Fire » Sat Aug 21, 2021 7:52 pm

There were lab studies and field studies, according to the buzzfeed article, so they could have been analysing a different data set. Or a derived dataset from the field study.

One of the authors has changed institutions, which could mean loss of access to old emails.

But yes, data archiving and management practices in academia are often pretty terrible. It's few people's priority, and there's little training or institutional support available in a lot of places. Neither my MSc nor my PhD ever touched on it, for example.
We have the right to a clean, healthy, sustainable environment.

User avatar
Sciolus
Dorkwood
Posts: 1313
Joined: Mon Nov 11, 2019 6:42 pm

Re: Dishonest research about dishonesty

Post by Sciolus » Sat Aug 21, 2021 8:34 pm

Fair dos, thanks for making that point.

User avatar
sTeamTraen
After Pie
Posts: 2553
Joined: Mon Nov 11, 2019 4:24 pm
Location: Palma de Mallorca, Spain

Re: Dishonest research about dishonesty

Post by sTeamTraen » Sat Aug 21, 2021 8:37 pm

Allo V Psycho wrote:
Sat Aug 21, 2021 10:20 am
So, why exactly were they authors on the paper?
3 of the 5 contributed 2 of the 3 studies. I'm not sure what Mazar added to Ariely's data, other than pointing out that when he "re-entered" it *cough* he got the conditions the wrong way round for the hypotheses. She has been a co-author on a number of previous papers with Ariely, as has Gino.

Anyway, I doubt that borderline inappropriate authorship attribution will figure in the top half of the charge sheet when the house of cards comes down. This story will probably end up being bigger than that of Brian Wansink.
Something something hammer something something nail

Millennie Al
After Pie
Posts: 1621
Joined: Mon Mar 16, 2020 4:02 am

Re: Dishonest research about dishonesty

Post by Millennie Al » Sun Aug 22, 2021 3:50 am

Having looked through the paper, I have some comments on it.

I don't see how the insurance study could have revealed the intended effect even if it had been run perfectly (and this leads me to a guess as to why it was falsified). The customers reported their odometer readings and were supposedly falsifying them in order to get cheaper insurance by appearing to drive less. This is not plausible and would have easily observed consequences.

The paper mentions that the average annual mileage in the US is about 12,500. So imagine a customer buys a two-year-old car which has done 25,000 miles. They buy insurance, reporting the correct mileage. Next year they want to appear to do lower mileage than they really do, so they don't report 37,500 but 35,000. All is OK for now. But the next year, the true reading is 50,000. If they deduct the same fixed amount - 2,500 - then they merely pay the correct premium, as it appears they have done 12,500 miles and they really have. To get the same advantage this year, they need to deduct 5,000 miles and report 45,000. And so on: every subsequent year they have to calculate the correct amount to deduct and report a figure which keeps diverging from the true amount.

If this were really happening, the insurance company would be delighted! When such a customer had a significant crash and a report on the vehicle was produced to assess whether it was economical to repair, it would show that the odometer reading was significantly wrong - and more so the older the car was. This would mean the fraudulent application for insurance was exposed, so the insurance company could refuse to pay out. Insurance companies are always looking for ways to avoid paying claims, so it would become well known if this was happening.
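A minimal numerical sketch of that divergence (the figures simply extend the example above; none are from the paper's data):

```python
# A driver who really does 12,500 miles/year wants to *appear* to do
# only 10,000 miles/year. Because the insurer sees cumulative odometer
# readings, the amount they must shave off grows by 2,500 every year.
TRUE_ANNUAL = 12_500   # real miles driven per year
FAKE_ANNUAL = 10_000   # mileage the customer wants to appear to drive
START = 25_000         # odometer reading when the car is bought

for year in range(1, 6):
    true_reading = START + TRUE_ANNUAL * year
    reported = START + FAKE_ANNUAL * year  # keeps the apparent rate at 10,000/yr
    print(year, true_reading, reported, true_reading - reported)
```

By year five the reported reading is 12,500 miles short of reality, exactly the kind of discrepancy a post-crash vehicle report would expose.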

However, if we assume that when the study was being designed, the people designing it assumed that customers would report their annual mileage directly, rather than odometer readings, then it starts to make sense. A customer could plausibly deduct 10% every year, and if challenged that the reported figures didn't add up to the odometer reading of a crashed car, they could claim they were only off by a small amount and it was an honest mistake (especially as few people keep meticulous records of their mileage). Presumably an insurance company was asked to modify their forms for the experiment, but nobody looked at the forms before they were sent out to see how mileage was reported. Once the data had been gathered, somebody realised that it was all useless, as it showed the current odometer reading instead of the estimated annual mileage. We can't tell who this was - maybe Dan (as all the authors agree he supplied them with the data) or someone at the insurance company. But they sought to salvage the situation by inventing fictitious second readings.

Another, minor point with regard to the other studies: it seems odd that the no-signature condition always shows less cheating than signing at the bottom. If a signature at the bottom really had no effect, you'd expect it to come out randomly better or worse than no signature. Maybe it's just chance, since the number of cases is so small.
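The "maybe it's just chance" intuition can be put in sign-test terms. Assuming, purely for illustration, six independent comparisons (the paper's actual count may well differ):

```python
# Under the null hypothesis that signature position makes no difference,
# each comparison is equally likely to favour either condition, like a
# coin flip. The chance that all k comparisons point the same way
# (in either direction) is then 2 * 0.5**k.
k = 6  # assumed number of comparisons, for illustration only
p_all_same = 2 * 0.5 ** k
print(p_all_same)  # 0.03125
```

So even with a handful of comparisons, a perfectly consistent ordering would itself be mildly surprising under the null.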

There's also a problem with the other experiments (apart from the laughably implausible idea that what changes people's behaviour over $1 will change their behaviour when $10,000 or $100,000 is at stake). The paper says that the participants were paid a "$2 show-up fee" and, at the end of their puzzle task, filled out a (fake) tax form to claim payment. However, there's nowhere on the form to put the $2. Surely participants would have noticed that and questioned it. Why didn't the experimenters either include it or mention the mistake in the paper?

And there is a slight chance of a very nasty sting in the tail. In some jurisdictions it is tax fraud to falsely tell someone you are deducting tax. It's not clear whether the participants were told the truth before they went home.

From the paper:
In court cases, witnesses verbally declare their pledge to honesty
before giving their testimonies—not after, perhaps for a reason.
Obviously the reason is to get them to tell the truth. Part of this is by getting them to behave ethically, and part by putting them at risk of being convicted of perjury if they lie; for the latter you want people to be really clear on when the period starts that makes them liable.

When filling out a form, there is no possible doubt about when you start doing it, but you're free to make mistakes, cross things out, or simply start over if you feel you need to. The signature then means that you are finally satisfied that it is all correct. This inherently means you must sign as the last thing you do. Of course that doesn't prevent the physical signature from being at any particular place on the form, but its natural place is at the end. If there is a valid phenomenon as described in this paper, I would expect you'd need a signature at both start and end to exploit it, while also catching mistakes by triggering people to review the form just before they sign.

User avatar
lpm
Junior Mod
Posts: 5944
Joined: Mon Nov 11, 2019 1:05 pm

Re: Dishonest research about dishonesty

Post by lpm » Sun Aug 22, 2021 9:21 am

https://twitter.com/nntaleb/status/1429298222258958338

Don't know the maths, but the Economist charts are pretty amazing.
⭐ Awarded gold star 4 November 2021

User avatar
basementer
Dorkwood
Posts: 1504
Joined: Mon Nov 11, 2019 1:03 pm
Location: 8024, Aotearoa
Contact:

Re: Dishonest research about dishonesty

Post by basementer » Sun Aug 22, 2021 9:58 am

lpm wrote:
Sun Aug 22, 2021 9:21 am
https://twitter.com/nntaleb/status/1429298222258958338

Don't know the maths, but the economist charts are pretty amazing.
They aren't really by The Economist; they both appear in the paper that sTeamTraen linked to, https://datacolada.org/98, with colours remapped and the specific explanation of the second chart removed. Very sly in an article about dishonesty.
Money is just a substitute for luck anyway. - Tom Siddell

secret squirrel
Snowbonk
Posts: 551
Joined: Wed Nov 13, 2019 12:42 pm

Re: Dishonest research about dishonesty

Post by secret squirrel » Sun Aug 22, 2021 10:17 am

lpm wrote:
Sun Aug 22, 2021 9:21 am
https://twitter.com/nntaleb/status/1429298222258958338

Don't know the maths, but the economist charts are pretty amazing.
Re the maths: he's saying the paper reported a uniform distribution, but this would be impossible because of the Central Limit Theorem. This says, roughly, that a distribution of values obtained by taking samples and averaging (or summing) them will tend to be normal as the sample size increases, whatever the original distributions looked like. Total miles driven is a sum of miles driven per day, so according to the CLT, the distribution should look normal, not uniform as reported in the paper.

He's technically not correct that it's a mathematical certainty that the distribution could not be close to uniform, as the CLT is about the limit as sample size tends to infinity, but in practice it would be expected to be normal here (as can be seen in the graph of UK data).

I don't follow his logic for why the distribution had to be uniform based on the reported standard deviation though. Maybe someone else understands that part.
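A quick simulation of the CLT point (the daily-mileage distribution below is invented purely for illustration; it is not the paper's data):

```python
import random
import statistics

random.seed(0)

# Each driver's annual total is the sum of 365 daily distances drawn
# from a heavily skewed (exponential) distribution with mean ~34 miles,
# i.e. roughly 12,500 miles/year. Per the CLT, the annual totals should
# come out approximately normal, however skewed the daily values are.
def annual_mileage():
    return sum(random.expovariate(1 / 34.2) for _ in range(365))

totals = [annual_mileage() for _ in range(2000)]
mu = statistics.mean(totals)
sd = statistics.pstdev(totals)

# ~68% of a normal distribution lies within one sd of the mean;
# for a uniform distribution the figure is only ~58%.
within_1sd = sum(abs(t - mu) < sd for t in totals) / len(totals)
print(round(mu), round(within_1sd, 3))
```

The within-one-sd fraction lands near the normal 68% rather than the uniform 58%, which is why a uniform-looking histogram of real driving data would be so suspicious.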

secret squirrel
Snowbonk
Posts: 551
Joined: Wed Nov 13, 2019 12:42 pm

Re: Dishonest research about dishonesty

Post by secret squirrel » Sun Aug 22, 2021 10:21 am

The result reported in the original paper is one of those things that just looks obviously false. Like, even if there were an effect from doing something like that, it would be so thoroughly drowned out by the noise of human existence that any positive result from a paper not going to extreme lengths to detect it is far more likely to be experimental error or fraud.

User avatar
sTeamTraen
After Pie
Posts: 2553
Joined: Mon Nov 11, 2019 4:24 pm
Location: Palma de Mallorca, Spain

Re: Dishonest research about dishonesty

Post by sTeamTraen » Sun Aug 22, 2021 10:25 am

Millennie Al wrote:
Sun Aug 22, 2021 3:50 am
Having looked through the paper, I have some comments on it.
It's even wronger than what you wrote. There seems to be no suggestion that this was an annual letter or any description of what (dis)benefits might accrue. As the study says, the gap between the initial declaration and the new one was highly variable. (I note that one car was *initially registered* with more than 1 million miles. I don't know how many cars make it that far, or whether the owners of such vehicles change insurance companies very often.)

There are many, many parallels between this story and that of Brian Wansink, with which I was much more involved. Methods that sound cute and/or plausible until you try and play through them in your head and realise that you wouldn't do it like that. No dataset to be found. Multiple retellings of the study for different audiences with slight variations in the method and results, as if someone is having trouble keeping their story straight.

Have a look at The Heat of the Moment: The Effect of Sexual Arousal on Sexual Decision Making and think how it might work. In one of his books Ariely tosses off a remark about "a darkened room and a copy of Playboy", which seems self-contradictory.
Something something hammer something something nail

User avatar
Bird on a Fire
Princess POW
Posts: 10137
Joined: Fri Oct 11, 2019 5:05 pm
Location: Portugal

Re: Dishonest research about dishonesty

Post by Bird on a Fire » Sun Aug 22, 2021 10:27 am

If the paper actually reported that annual mileage was uniformly distributed, that's so implausible I don't see how it could get past review (if it was reviewed, rather than "communicated", at PNAS?)

He's since added to the thread in response to some comments, back-tracking on the MUST a little bit:
Note: the "MUST" is ~MUST since there are other candidate distributions (gamma, lognormal) but then must be monstrously skewed for STD to be uninformative under v. Large summation.
I'm not even sure why he started off talking about a one-tailed distribution? Why would annual mileage be one-tailed?

ETA sorry, I'm being slow this morning. That's his point, isn't it - that the faked data was faked terribly.
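On the "STD being uninformative" point: one crude tell is how large the standard deviation is relative to the range. A simulated sketch (these numbers are invented for illustration, not taken from the paper):

```python
import random
import statistics

random.seed(1)

# For Uniform(0, L) the sd is L / sqrt(12), about 29% of the range.
# A normal-ish distribution concentrates its mass near the mean, so
# its sd is a much smaller fraction of the observed range.
L = 50_000
samples = {
    "uniform": [random.uniform(0, L) for _ in range(10_000)],
    "normal": [random.gauss(L / 2, L / 10) for _ in range(10_000)],
}

ratios = {}
for name, xs in samples.items():
    ratios[name] = statistics.pstdev(xs) / (max(xs) - min(xs))
    print(name, round(ratios[name], 3))
```

A reported sd close to range/sqrt(12) is therefore a hint that the data are near-uniform, which seems to be the kind of inference Taleb was making from the summary statistics.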
We have the right to a clean, healthy, sustainable environment.

User avatar
sTeamTraen
After Pie
Posts: 2553
Joined: Mon Nov 11, 2019 4:24 pm
Location: Palma de Mallorca, Spain

Re: Dishonest research about dishonesty

Post by sTeamTraen » Sun Aug 22, 2021 10:29 am

secret squirrel wrote:
Sun Aug 22, 2021 10:21 am
The result reported in the original paper is one of those things that just looks obviously false. Like, even if there were an effect from doing something like that, it would be so thoroughly drowned out by the noise of human existence that any positive result reported from a paper not going to extreme lengths to detect it is far more likely to be experiment error or fraud.
The entire premise of TED-talk gee-whiz social psychology is that there are large true effects out there which, although they can only be detected by charismatic social scientists in cute experiments that are written up in vague terms so that any specific claim can be denied later, nevertheless can bring success in one's personal or business life using this One Weird Trick in an airport book that sells for a bargain $22.99.
Something something hammer something something nail

User avatar
sTeamTraen
After Pie
Posts: 2553
Joined: Mon Nov 11, 2019 4:24 pm
Location: Palma de Mallorca, Spain

Re: Dishonest research about dishonesty

Post by sTeamTraen » Sun Aug 22, 2021 10:30 am

Bird on a Fire wrote:
Sun Aug 22, 2021 10:27 am
If the paper actually reported that annual mileage was uniformly distributed, that's so implausible I don't see how it could get past review (if it was reviewed, rather than "communicated", at PNAS?)
The paper did not report that. PNAS has very poor reporting of methods and most of their social science is utter garbage.
Something something hammer something something nail

User avatar
Bird on a Fire
Princess POW
Posts: 10137
Joined: Fri Oct 11, 2019 5:05 pm
Location: Portugal

Re: Dishonest research about dishonesty

Post by Bird on a Fire » Sun Aug 22, 2021 10:33 am

sTeamTraen wrote:
Sun Aug 22, 2021 10:30 am
Bird on a Fire wrote:
Sun Aug 22, 2021 10:27 am
If the paper actually reported that annual mileage was uniformly distributed, that's so implausible I don't see how it could get past review (if it was reviewed, rather than "communicated", at PNAS?)
The paper did not report that. PNAS has very poor reporting of methods and most of their social science is utter garbage.
But that plot is from the original paper‽ That's mad. I can't imagine having the chutzpah to claim that that's the result you really got from an actual field experiment.
We have the right to a clean, healthy, sustainable environment.

User avatar
sTeamTraen
After Pie
Posts: 2553
Joined: Mon Nov 11, 2019 4:24 pm
Location: Palma de Mallorca, Spain

Re: Dishonest research about dishonesty

Post by sTeamTraen » Sun Aug 22, 2021 10:34 am

Bird on a Fire wrote:
Sun Aug 22, 2021 10:27 am
He's since added to the thread in response to some comments, back-tracking on the MUST a little bit:
Note: the "MUST" is ~MUST since there are other candidate distributions (gamma, lognormal) but then must be monstrously skewed for STD to be uninformative under v. Large summation.
I'm not even sure why he started off talking about a one-tailed distribution? Why would annual mileage be one-tailed?

ETA sorry, I'm being slow this morning. That's his point, isn't it - that the faked data was faked terribly.
Ariely's defence is that somebody else faked the data. And of course, now that this has been pointed out, he's using all these big words to show that he understands all the issues, and make us think that he would never have done it so amateurishly. But apparently he never noticed this at the time because, well, hey, we all make mistakes, I guess I'm not a very well-organised guy, can't hold that against me eh, ooh look a squirrel.
Something something hammer something something nail

User avatar
sTeamTraen
After Pie
Posts: 2553
Joined: Mon Nov 11, 2019 4:24 pm
Location: Palma de Mallorca, Spain

Re: Dishonest research about dishonesty

Post by sTeamTraen » Sun Aug 22, 2021 10:36 am

Bird on a Fire wrote:
Sun Aug 22, 2021 10:33 am
sTeamTraen wrote:
Sun Aug 22, 2021 10:30 am
Bird on a Fire wrote:
Sun Aug 22, 2021 10:27 am
If the paper actually reported that annual mileage was uniformly distributed, that's so implausible I don't see how it could get past review (if it was reviewed, rather than "communicated", at PNAS?)
The paper did not report that. PNAS has very poor reporting of methods and most of their social science is utter garbage.
But that plot is from the original paper‽ That's mad. I can't imagine having the chutzpah to claim that that's the result you really got from an actual field experiment.
Something that emerges time after time in these cases is that the people who get caught because they are bad at faking are also bad at understanding what real data looks like. That makes sense, in that people who understand what real data looks like would know to fake better-looking data.

Have a look at the m.st.rbation paper I linked above and think about what the standard deviations, er, mean.
Something something hammer something something nail

User avatar
sTeamTraen
After Pie
Posts: 2553
Joined: Mon Nov 11, 2019 4:24 pm
Location: Palma de Mallorca, Spain

Re: Dishonest research about dishonesty

Post by sTeamTraen » Sun Aug 22, 2021 10:50 am

sTeamTraen wrote:
Sun Aug 22, 2021 10:34 am
Bird on a Fire wrote:
Sun Aug 22, 2021 10:27 am
He's since added to the thread in response to some comments, back-tracking on the MUST a little bit:
Note: the "MUST" is ~MUST since there are other candidate distributions (gamma, lognormal) but then must be monstrously skewed for STD to be uninformative under v. Large summation.
I'm not even sure why he started off talking about a one-tailed distribution? Why would annual mileage be one-tailed?

ETA sorry, I'm being slow this morning. That's his point, isn't it - that the faked data was faked terribly.
Ariely's defence is that somebody else faked the data. And of course, now that this has been pointed out, he's using all these big words to show that he understands all the issues, and make us think that he would never have done it so amateurishly. But apparently he never noticed this at the time because, well, hey, we all make mistakes, I guess I'm not a very well-organised guy, can't hold that against me eh, ooh look a squirrel.
Doh! Just realised the big words are by Taleb, not Ariely. Taleb has blocked me on Twitter so the links earlier didn't work. I found them by logging out.
Something something hammer something something nail

secret squirrel
Snowbonk
Posts: 551
Joined: Wed Nov 13, 2019 12:42 pm

Re: Dishonest research about dishonesty

Post by secret squirrel » Sun Aug 22, 2021 10:53 am

sTeamTraen wrote:
Sun Aug 22, 2021 10:29 am
secret squirrel wrote:
Sun Aug 22, 2021 10:21 am
The result reported in the original paper is one of those things that just looks obviously false. Like, even if there were an effect from doing something like that, it would be so thoroughly drowned out by the noise of human existence that any positive result reported from a paper not going to extreme lengths to detect it is far more likely to be experiment error or fraud.
The entire premise of TED-talk gee-whiz social psychology is that there are large true effects out there which, although they can only be detected by charismatic social scientists in cute experiments that are written up in vague terms so that any specific claim can be denied later, nevertheless can bring success in one's personal or business life using this One Weird Trick in an airport book that sells for a bargain $22.99.
There's a whole cottage industry of taking vague statements which are intuitively plausible, or even obviously true, and spinning them into absurd hacks with extremely specific and palpably ridiculous demonstrations. E.g. the literature on priming.

User avatar
Bird on a Fire
Princess POW
Posts: 10137
Joined: Fri Oct 11, 2019 5:05 pm
Location: Portugal

Re: Dishonest research about dishonesty

Post by Bird on a Fire » Sun Aug 22, 2021 10:54 am

sTeamTraen wrote:
Sun Aug 22, 2021 10:36 am
Bird on a Fire wrote:
Sun Aug 22, 2021 10:33 am
sTeamTraen wrote:
Sun Aug 22, 2021 10:30 am


The paper did not report that. PNAS has very poor reporting of methods and most of their social science is utter garbage.
But that plot is from the original paper‽ That's mad. I can't imagine having the chutzpah to claim that that's the result you really got from an actual field experiment.
Something that emerges time after time in these cases is that the people who get caught because they are bad at faking are also bad at understanding what real data looks like. That makes sense, in that people who understand what real data looks like would know to fake better-looking data.

Have a look at the m.st.rbation paper I linked above and think about what the standard deviations, er, mean.
I see what you mean there - holy moly. That's either a very homogeneous sample or some hopeless faking. Lovely p values, though.

Also very worrying that reviewers didn't spot it.

Data issues aside, I'm not sure about that "Can you imagine...?" framing of the questions either. Like, you're encouraging people to imagine various sexy things while they have a w.nk in your lab, and then ask them if they can imagine stuff. I'm zero aroused right now but I can imagine having sex with a policeman or a pokemon or a planet - it doesn't mean I want to.
We have the right to a clean, healthy, sustainable environment.

User avatar
Bird on a Fire
Princess POW
Posts: 10137
Joined: Fri Oct 11, 2019 5:05 pm
Location: Portugal

Re: Dishonest research about dishonesty

Post by Bird on a Fire » Sun Aug 22, 2021 10:56 am

sTeamTraen wrote:
Sun Aug 22, 2021 10:50 am
Taleb has blocked me on Twitter
Lolz, is there a good story there?

I enjoyed Antifragile, but I can't help thinking that if he'd enjoyed a decent book editor, instead of going off on various random digressions insulting them (inter alia), it could have been half as long and twice as impactful.

He doesn't seem like someone who welcomes criticism, is what I'm saying.
We have the right to a clean, healthy, sustainable environment.

User avatar
basementer
Dorkwood
Posts: 1504
Joined: Mon Nov 11, 2019 1:03 pm
Location: 8024, Aotearoa
Contact:

Re: Dishonest research about dishonesty

Post by basementer » Sun Aug 22, 2021 5:20 pm

Bird on a Fire wrote:
Sun Aug 22, 2021 10:33 am
But that plot is from the original paper‽ That's mad.
It isn't. The paper only reported the statistics, which Taleb could see through; the charts are in the debunking paper that got hold of the supposed source data.
Money is just a substitute for luck anyway. - Tom Siddell

Millennie Al
After Pie
Posts: 1621
Joined: Mon Mar 16, 2020 4:02 am

Re: Dishonest research about dishonesty

Post by Millennie Al » Mon Aug 23, 2021 1:09 am

sTeamTraen wrote:
Sun Aug 22, 2021 10:25 am
Have a look at The Heat of the Moment: The Effect of Sexual Arousal on Sexual Decision Making
and think how it might work. In one of his books Ariely tosses off a remark about "a darkened room and a copy of Playboy", which seems self-contradictory.
There's a fair bit in there that I don't understand or that seems wrong. Why the complicated keyboard entry scheme? Just use a mouse! And if you have a "discrete scale with twenty-six steps", why convert that to 0..100 instead of just leaving it? And what is meant by "twenty-six steps"? Since the middle value is supposed to be neutral, you need an odd number of steps. Does it mean 0..26? And the labels on the scale are "No...Possibly...Yes", yet the meanings of the questions seem to require them to mean "Dislike ... Neutral ... Like". Why isn't there a screenshot, rather than the figure, which is quite obviously not the real thing as it has far too little space for the questions?

What p.rn was supplied for the pictures? Did the experiment assume that everyone's tastes are the same, or include a variety? (Though the answers seem more like one person answering 35 times than 35 people answering once, so maybe the tastes were identical.) Why is the least popular answer for the aroused state "Can you imagine having sex with a man?" - this would mean that out of 24 UCB male students none was gay or even slightly bi. Note that this answer beats even the questions about a 12-year-old girl and animals. How do you know when you're 75% aroused? What does that even mean?

Very trusting to allow 35 subjects to take the experimental laptops away for a day. Does that mean there were 35 laptops, or did the experiment only use a few subjects at a time? If there was one laptop that would take 71 days, so presumably several were used.

This point may apply to many other papers: it states that participants were paid at $10 per session, so that's $710, but the paper does not acknowledge any funding source. Does this mean the authors paid out of their own pockets? Or that one of their institutions did? If so, someone might be able to check financial records and verify how much was paid and when (not much chance, as the amount is so low and it was so long ago).

I'd have to agree that the standard deviations and p values are most impressive. I'd like to see the underlying data. I'm sure it would be very informative.

And since people seem to have overlooked the link that Sciolus posted above to the first paper, here it is again: Signing at the beginning makes ethics salient and decreases dishonest self-reports in comparison to signing at the end

User avatar
sTeamTraen
After Pie
Posts: 2553
Joined: Mon Nov 11, 2019 4:24 pm
Location: Palma de Mallorca, Spain

Re: Dishonest research about dishonesty

Post by sTeamTraen » Mon Aug 23, 2021 8:38 pm

Millennie Al wrote:
Mon Aug 23, 2021 1:09 am
sTeamTraen wrote:
Sun Aug 22, 2021 10:25 am
Have a look at The Heat of the Moment: The Effect of Sexual Arousal on Sexual Decision Making
and think how it might work. In one of his books Ariely tosses off a remark about "a darkened room and a copy of Playboy", which seems self-contradictory.
Why the complicated keyboard entry scheme? Just use a mouse!
The interface was apparently designed to be used by the participant's non-dominant hand, for some reason. :lol:
Millennie Al wrote:
Mon Aug 23, 2021 1:09 am
Why is the least popular answer for the aroused state "Can you imagine having sex with a man?" - this means that out of 24 UCB male students none of them was gay or even slightly bi.
In a write-up of the experiment in one of his sh.tty books, Ariely writes:
The ads we posted around Sproul Plaza read as follows: “Wanted: Male research participants, heterosexual, 18 years-plus, for a study on decision making and arousal.”
It would of course have been nice if that sort of detail had appeared in the actual paper, but Dan is too busy generating fantastic insights to actually bother with rigour.
Something something hammer something something nail
