Survey data discussion split from Male normalisation of sexual violence thread

Discussions about serious topics, for serious people
Gfamily
Light of Blast
Posts: 5180
Joined: Mon Nov 11, 2019 1:00 pm
Location: NW England

Re: Survey data discussion split from Male normalisation of sexual violence thread

Post by Gfamily » Fri Dec 06, 2019 3:46 pm

username wrote:
Fri Dec 06, 2019 3:39 pm
jimbob wrote:
Fri Dec 06, 2019 3:36 pm
username wrote:
Fri Dec 06, 2019 3:06 pm
Yes. The survey result is unreliable. That is the final step being missed.

Of course the subsequent reporting and sensationalistic claims by the BBC exacerbate the problems with the survey. None of this criticism suggests in any way that male sexual violence is not highly problematic, and, as I and others have said, more research ought to be done. Reliance on these figures is fundamentally flawed imo.
To echo BOAF - in what way?

Is there some selection bias in the respondents? Is there some bias in the respondents' responses? Is it something else?
There are three pages here expressing problems with phrasing, aggregation etc. There's even a thread on good causes and bad stats over there with a link to a great blog post.
No, I'd say there's a third of a three-page thread here asserting problems with phrasing, aggregation etc., and two-thirds of the thread saying "not really"
My avatar was a scientific result that was later found to be 'mistaken' - I rarely claim to be 100% correct
ETA 5/8/20: I've been advised that the result was correct, it was the initial interpretation that needed to be withdrawn
Meta? I'd say so!

Bird on a Fire
Princess POW
Posts: 10137
Joined: Fri Oct 11, 2019 5:05 pm
Location: Portugal

Re: Survey data discussion split from Male normalisation of sexual violence thread

Post by Bird on a Fire » Fri Dec 06, 2019 3:51 pm

username wrote:
Fri Dec 06, 2019 3:39 pm
jimbob wrote:
Fri Dec 06, 2019 3:36 pm
username wrote:
Fri Dec 06, 2019 3:06 pm
Yes. The survey result is unreliable. That is the final step being missed.

Of course the subsequent reporting and sensationalistic claims by the BBC exacerbate the problems with the survey. None of this criticism suggests in any way that male sexual violence is not highly problematic, and, as I and others have said, more research ought to be done. Reliance on these figures is fundamentally flawed imo.
To echo BOAF - in what way?

Is there some selection bias in the respondents? Is there some bias in the respondents' responses? Is it something else?
There are three pages here expressing problems with phrasing, aggregation etc. There's even a thread on good causes and bad stats over there with a link to a great blog post.
Yes, you have been able to point out that there is general uncertainty around people's behaviour and emotional states, and also that we can't disaggregate results by specific behaviours or particular levels of severity.

For example, you posted that we don't know how many times a given behaviour occurred. That's true - but irrelevant to the issue of how many women have experienced it at least once. There is no ambiguity between 'Have experienced' and 'Have not experienced'. The descriptions of the behaviours are clear enough, though they could include additional behaviours.

You have also posted that we don't know in exactly what way behaviours were unwanted - was it unwanted like being nagged about household chores, or unwanted like being violently assaulted? Again, we don't know. But there is no ambiguity in the question "how frequently would you say that these acts were unwanted?" nor in the responses "every time / most of the time / some of the time / none of the time".

Taken together, we can AFAICT draw an unambiguous conclusion about the proportion of respondents who had experienced the behaviours as described, and the proportion of those respondents who described the behaviours as unwanted. There really are no ambiguities there.

Where you appear to differ from everybody else is in the conclusions you draw from the presence of uncertainty in our understanding of a system. Most people are able to see through the uncertainty to the bigger picture - that there is a solid conclusion to be drawn. Whereas for you, the presence of some uncertainty puts you off the whole endeavour, and you opine that because we can't have greater certainty we can't conclude anything at all.
We have the right to a clean, healthy, sustainable environment.

username
Clardic Fug
Posts: 212
Joined: Mon Nov 11, 2019 6:51 pm
Location: The Good Place

Re: Survey data discussion split from Male normalisation of sexual violence thread

Post by username » Fri Dec 06, 2019 4:25 pm

Gfamily wrote:
Fri Dec 06, 2019 3:46 pm
username wrote:
Fri Dec 06, 2019 3:39 pm
jimbob wrote:
Fri Dec 06, 2019 3:36 pm


To echo BOAF - in what way?

Is there some selection bias in the respondents? Is there some bias in the respondents' responses? Is it something else?
There are three pages here expressing problems with phrasing, aggregation etc. There's even a thread on good causes and bad stats over there with a link to a great blog post.
No, I'd say there's a third of a three-page thread here asserting problems with phrasing, aggregation etc., and two-thirds of the thread saying "not really"
'Whatever' is my initial response; I'm voicing my opinion. You're free to dismiss it if you like and pretend that the uncertainties and ambiguities are not serious enough to preclude a claim that the data support the normalization of male sexual violence; you can argue that online (self-selecting?) surveys produce reliable conclusions, or that the criticism in the first post of this thread - about the "50% of people have experienced Option C-type things!" claim - doesn't matter. But all in all, that looks to me like falling for the emotive issue raised and working backwards from it to justify the data.

Journalists write stories off the back of dubious surveys *all of the time*. Suspending skepticism because a particular topic is fraught does nobody any favours.

I'm not bothering with BOAF's response tbh; he's mischaracterizing (repeatedly) what I've written and I cba.
The half-truths, repeated, authenticated themselves.

dyqik
Princess POW
Posts: 7527
Joined: Wed Sep 25, 2019 4:19 pm
Location: Masshole

Re: Survey data discussion split from Male normalisation of sexual violence thread

Post by dyqik » Fri Dec 06, 2019 4:29 pm

username wrote:
Fri Dec 06, 2019 1:09 pm
Getting pretty gung ho with the ad homs against posters.
It's the only valid response to your ad homs.

username
Clardic Fug
Posts: 212
Joined: Mon Nov 11, 2019 6:51 pm
Location: The Good Place

Re: Survey data discussion split from Male normalisation of sexual violence thread

Post by username » Fri Dec 06, 2019 4:33 pm

dyqik wrote:
Fri Dec 06, 2019 4:29 pm
username wrote:
Fri Dec 06, 2019 1:09 pm
Getting pretty gung ho with the ad homs against posters.
It's the only valid response to your ad homs.
Who did I ad hom?
(asking as I would like to apologize to the individual)
Last edited by username on Fri Dec 06, 2019 4:34 pm, edited 1 time in total.
The half-truths, repeated, authenticated themselves.

dyqik
Princess POW
Posts: 7527
Joined: Wed Sep 25, 2019 4:19 pm
Location: Masshole

Re: Survey data discussion split from Male normalisation of sexual violence thread

Post by dyqik » Fri Dec 06, 2019 4:34 pm

username wrote:
Fri Dec 06, 2019 4:25 pm
Gfamily wrote:
Fri Dec 06, 2019 3:46 pm
username wrote:
Fri Dec 06, 2019 3:39 pm

There are three pages here expressing problems with phrasing, aggregation etc. There's even a thread on good causes and bad stats over there with a link to a great blog post.
No, I'd say there's a third of a three-page thread here asserting problems with phrasing, aggregation etc., and two-thirds of the thread saying "not really"
'Whatever' is my initial response; I'm voicing my opinion. You're free to dismiss it if you like and pretend that the uncertainties and ambiguities are not serious enough to preclude a claim that the data support the normalization of male sexual violence; you can argue that online (self-selecting?) surveys produce reliable conclusions, or that the criticism in the first post of this thread - about the "50% of people have experienced Option C-type things!" claim - doesn't matter. But all in all, that looks to me like falling for the emotive issue raised and working backwards from it to justify the data.

Journalists write stories off the back of dubious surveys *all of the time*. Suspending skepticism because a particular topic is fraught does nobody any favours.

I'm not bothering with BOAF's response tbh; he's mischaracterizing (repeatedly) what I've written and I cba.
He has not mischaracterized what you've written at any point.

You have asserted that the survey results are unreliable. You have provided no reason or evidence to believe this, just ways in which they could possibly be flawed.

You have not even attempted to show that your concerns have an impact on the conclusions. You've just gish-galloped a bunch of hypothetical flaws without attempting to evaluate the magnitude of the effect they have on the data.

Pointing out potential flaws does not provide a reason to disbelieve the results. You have to show that these potential flaws are strong enough to invalidate the conclusions.

dyqik
Princess POW
Posts: 7527
Joined: Wed Sep 25, 2019 4:19 pm
Location: Masshole

Re: Survey data discussion split from Male normalisation of sexual violence thread

Post by dyqik » Fri Dec 06, 2019 4:34 pm

username wrote:
Fri Dec 06, 2019 4:33 pm
dyqik wrote:
Fri Dec 06, 2019 4:29 pm
username wrote:
Fri Dec 06, 2019 1:09 pm
Getting pretty gung ho with the ad homs against posters.
It's the only valid response to your ad homs.
Who did I ad hom?
(asking as I would like to apologize to the individual)
This has already been pointed out to you. Your screengrab is an implied ad hom against the survey designers.

username
Clardic Fug
Posts: 212
Joined: Mon Nov 11, 2019 6:51 pm
Location: The Good Place

Re: Survey data discussion split from Male normalisation of sexual violence thread

Post by username » Fri Dec 06, 2019 4:36 pm

dyqik wrote:
Fri Dec 06, 2019 4:34 pm
username wrote:
Fri Dec 06, 2019 4:33 pm
dyqik wrote:
Fri Dec 06, 2019 4:29 pm


It's the only valid response to your ad homs.
Who did I ad hom?
(asking as I would like to apologize to the individual)
This has already been pointed out to you. Your screengrab is an implied ad hom against the survey designers.
They're not posters, and it's material to the commissioning of surveys.
The half-truths, repeated, authenticated themselves.

dyqik
Princess POW
Posts: 7527
Joined: Wed Sep 25, 2019 4:19 pm
Location: Masshole

Re: Survey data discussion split from Male normalisation of sexual violence thread

Post by dyqik » Fri Dec 06, 2019 4:38 pm

username wrote:
Fri Dec 06, 2019 4:36 pm
dyqik wrote:
Fri Dec 06, 2019 4:34 pm
username wrote:
Fri Dec 06, 2019 4:33 pm

Who did I ad hom?
(asking as I would like to apologize to the individual)
This has already been pointed out to you. Your screengrab is an implied ad hom against the survey designers.
They're not posters, and it's material to the commissioning of surveys.
Irrelevant. You need to show that it actually affected the results for it to be relevant, not raise vague concerns that it might have done.

jimbob
Light of Blast
Posts: 5276
Joined: Mon Nov 11, 2019 4:04 pm
Location: High Peak/Manchester

Re: Survey data discussion split from Male normalisation of sexual violence thread

Post by jimbob » Fri Dec 06, 2019 4:51 pm

Bird on a Fire wrote:
Fri Dec 06, 2019 3:51 pm
username wrote:
Fri Dec 06, 2019 3:39 pm
jimbob wrote:
Fri Dec 06, 2019 3:36 pm


To echo BOAF - in what way?

Is there some selection bias in the respondents? Is there some bias in the respondents' responses? Is it something else?
There are three pages here expressing problems with phrasing, aggregation etc. There's even a thread on good causes and bad stats over there with a link to a great blog post.
Yes, you have been able to point out that there is general uncertainty around people's behaviour and emotional states, and also that we can't disaggregate results by specific behaviours or particular levels of severity.

For example, you posted that we don't know how many times a given behaviour occurred. That's true - but irrelevant to the issue of how many women have experienced it at least once. There is no ambiguity between 'Have experienced' and 'Have not experienced'. The descriptions of the behaviours are clear enough, though they could include additional behaviours.

You have also posted that we don't know in exactly what way behaviours were unwanted - was it unwanted like being nagged about household chores, or unwanted like being violently assaulted? Again, we don't know. But there is no ambiguity in the question "how frequently would you say that these acts were unwanted?" nor in the responses "every time / most of the time / some of the time / none of the time".

Taken together, we can AFAICT draw an unambiguous conclusion about the proportion of respondents who had experienced the behaviours as described, and the proportion of those respondents who described the behaviours as unwanted. There really are no ambiguities there.

Where you appear to differ from everybody else is in the conclusions you draw from the presence of uncertainty in our understanding of a system. Most people are able to see through the uncertainty to the bigger picture - that there is a solid conclusion to be drawn. Whereas for you, the presence of some uncertainty puts you off the whole endeavour, and you opine that because we can't have greater certainty we can't conclude anything at all.
Bolded bit quoted for truth.

Now there could be methodological flaws, but apart from some hypothetical selection bias, or some hypothetical bias in steering people towards a particular set of questions, I struggle to see how they would alter the least-extreme interpretation of the data as a lower bound.
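To illustrate the lower-bound logic with a quick sketch (all the numbers here are made up for illustration - they are not the survey's figures):

```python
# Hypothetical figures, not taken from the BBC/ComRes survey.
respondents = 2000
experienced = 1200         # said they had experienced at least one listed behaviour
unwanted_every_time = 300  # of those, said the acts were unwanted "every time"

# Read every ambiguous answer ("some of the time", etc.) as charitably as
# possible; the "every time" group still cannot be explained away, so it
# gives a conservative lower bound on prevalence.
lower_bound = unwanted_every_time / respondents
print(f"at least {lower_bound:.0%} of respondents reported wholly unwanted acts")
```

However the ambiguous categories are interpreted, the figure can only rise from that floor.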
Have you considered stupidity as an explanation

Bird on a Fire
Princess POW
Posts: 10137
Joined: Fri Oct 11, 2019 5:05 pm
Location: Portugal

Re: Survey data discussion split from Male normalisation of sexual violence thread

Post by Bird on a Fire » Fri Dec 06, 2019 4:57 pm

jimbob wrote:
Fri Dec 06, 2019 4:51 pm
I struggle to see how that alters the least-extreme interpretation of the data as a lower bound.
That's a good way of putting it, thanks.

It honestly isn't my intention to mischaracterise anybody. I'm going to have to triple-check the thread to see what criticisms have been levelled against the first two questions.
We have the right to a clean, healthy, sustainable environment.

username
Clardic Fug
Posts: 212
Joined: Mon Nov 11, 2019 6:51 pm
Location: The Good Place

Re: Survey data discussion split from Male normalisation of sexual violence thread

Post by username » Fri Dec 06, 2019 5:07 pm

jimbob wrote:
Fri Dec 06, 2019 4:51 pm
Bird on a Fire wrote:
Fri Dec 06, 2019 3:51 pm
username wrote:
Fri Dec 06, 2019 3:39 pm

There are three pages here expressing problems with phrasing, aggregation etc. There's even a thread on good causes and bad stats over there with a link to a great blog post.
Yes, you have been able to point out that there is general uncertainty around people's behaviour and emotional states, and also that we can't disaggregate results by specific behaviours or particular levels of severity.

For example, you posted that we don't know how many times a given behaviour occurred. That's true - but irrelevant to the issue of how many women have experienced it at least once. There is no ambiguity between 'Have experienced' and 'Have not experienced'. The descriptions of the behaviours are clear enough, though they could include additional behaviours.

You have also posted that we don't know in exactly what way behaviours were unwanted - was it unwanted like being nagged about household chores, or unwanted like being violently assaulted? Again, we don't know. But there is no ambiguity in the question "how frequently would you say that these acts were unwanted?" nor in the responses "every time / most of the time / some of the time / none of the time".

Taken together, we can AFAICT draw an unambiguous conclusion about the proportion of respondents who had experienced the behaviours as described, and the proportion of those respondents who described the behaviours as unwanted. There really are no ambiguities there.

Where you appear to differ from everybody else is in the conclusions you draw from the presence of uncertainty in our understanding of a system. Most people are able to see through the uncertainty to the bigger picture - that there is a solid conclusion to be drawn. Whereas for you, the presence of some uncertainty puts you off the whole endeavour, and you opine that because we can't have greater certainty we can't conclude anything at all.
Bolded bit quoted for truth.

Now there could be methodological flaws, but apart from some hypothetical selection bias, or some hypothetical bias in steering people towards a particular set of questions, I struggle to see how they would alter the least-extreme interpretation of the data as a lower bound.
If one subscribes to the notion that 'unwanted' is a completely unambiguous term, then one is likely to put a lot of faith in the least-extreme interpretation as the lower bound.
The half-truths, repeated, authenticated themselves.

Bird on a Fire
Princess POW
Posts: 10137
Joined: Fri Oct 11, 2019 5:05 pm
Location: Portugal

Re: Survey data discussion split from Male normalisation of sexual violence thread

Post by Bird on a Fire » Fri Dec 06, 2019 5:32 pm

It's ambiguous, sure, or at least it covers a range of possibilities.

I don't think many of those possibilities are totally unproblematic, though.
We have the right to a clean, healthy, sustainable environment.

Ken McKenzie
Sindis Poop
Posts: 72
Joined: Tue Nov 12, 2019 12:59 pm

Re: Survey data discussion split from Male normalisation of sexual violence thread

Post by Ken McKenzie » Fri Dec 06, 2019 10:25 pm

username wrote:
Fri Dec 06, 2019 10:27 am
'Pressured' is the first in the list of terms and the most likely to get an ambiguous interpretation*. The more I think about the survey, the more it appears designed to garner big numbers. Professional surveyors know how to manipulate likely responses.
Ok, let's up the ante. You've claimed you aren't ad homming because conveniently there are no professional surveyors reading. Hi, my name is(n't) Ken. I'm a professional surveyor. I know how to manipulate likely responses, but like every professional surveyor I don't do it - and we try very, very hard not to do it even unconsciously - because it is profoundly unethical and it makes our results worthless.

And when you've written a decent survey and actually recruited a sample (oh my God you have never, ever done difficult research until you've recruited a large qual sample, collected the responses, coded the responses and then had to weight the sample), deliberately going out of your way to make the data worthless is so far beyond inexplicable as to require new language. Social research is expensive and it is difficult and so rather than be incentivised to cut corners, you're incentivised to do it properly so you only have to do it once.
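For readers unfamiliar with the weighting step mentioned above, it can be sketched as simple post-stratification - note that the group shares below are invented for illustration, and real weighting schemes are considerably more involved:

```python
# Reweight respondents so the sample's age-group shares match known
# population shares. All figures here are invented for illustration.
sample = {"18-34": 500, "35-54": 300, "55+": 200}              # respondents per group
population_share = {"18-34": 0.28, "35-54": 0.34, "55+": 0.38}  # known/assumed shares

n = sum(sample.values())
weights = {g: population_share[g] / (sample[g] / n) for g in sample}

# Each respondent's answers are multiplied by their group's weight, so
# over-represented groups count for less and under-represented ones for more.
for group, w in sorted(weights.items()):
    print(group, round(w, 2))
```

The weighted group totals then sum back to the population's proportions - which is part of why getting the recruitment and coding right in the first place matters so much.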

All surveys have an element of asking the respondents to use their interpretation. A steering group I am part of spent 3 months discussing how respondents might interpret the word 'happy' and how we might then use the results. And this was for a question that had been extensively tested by the ONS. We concluded that the only way to collect this data is to let the respondents be the judge and that this data would tell us something useful no matter how people interpreted it. You never get to tell survey respondents how they feel about things. You ask them.

The researchers will have thought about all of these issues - a lot - and quibbled - a lot - about the wording, coding and interpretation. In this case, we used a commercial survey company to administer the survey (not ComRes) because they could handle the sample size, volume of responses and complex coding required in the timeframe the data was needed. Incidentally, this was done online because it's a great way to contact people. However, like most professional surveys the respondents are not self-selected. You've no reason to believe this sample was self-selected. The Internet is just an easier way to contact people than the phone.

What you're essentially doing here is taking some data you don't like, and ascribing the worst possible motives to both the commissioners and the survey organisation. ComRes are MRS members, an actual accredited professional body with actual professional standards - so in stating that a high-profile, well-known survey organisation have publicly breached MRS rules and the MRS have done nothing, you are essentially stating that the professional body itself is corrupt. There are many reasons to want a big splash on your work - especially when you believe it is important and the public rather than just your own circle of contacts should read it (although I will say I spend rather more time than I'd like arguing about the titles of our press releases).

Incidentally, as far as I can tell the quote you cite pertains to research into how long waiting times for mental health professionals impact mental health - part of the justification for a campaign to recruit more mental health professionals - and is literally nothing whatsoever to do with the research you don't like. How dare ComRes print endorsements! It's like they're trying to get business or something!

When you start suggesting that there's a broad-based conspiracy to produce data you don't like - one that requires the participation of the commissioning organisation, one of the country's largest professional survey organisations, and the professional body that oversees standards in order to function - you're not a sceptic any more. You're a conspiracy theorist.

It simply is not worth it to any of the organisations involved to make sh.t up, and especially not on a topic like this, where a section of the population will not like the results at all and will discredit them by any means necessary. ComRes have absolutely no desire to devalue their professional reputation by conducting shoddy or unethical research. (You have no idea how competitive this marketplace is.) The MRS have absolutely no incentive whatsoever to allow their professional standards to be breached.

This quibbling over the ethics and methodologies of research you don't like is the same method used by anti-vaxxers and climate deniers. You need all the people involved to be corrupt and stupid for it to work. At root, I would like to think that the reason you've got this so wrong is that you think survey work is pretty easy, when in fact it is very difficult and complex. Let's all act as if that's the reason and then we can get back to the topic.

tl;dr survey research is very hard and the researchers will have thought of all this already. The rewards for being corrupt and stupid are far outweighed in this field by the rewards for being professional and reliable.

username
Clardic Fug
Posts: 212
Joined: Mon Nov 11, 2019 6:51 pm
Location: The Good Place

Re: Survey data discussion split from Male normalisation of sexual violence thread

Post by username » Sat Dec 07, 2019 12:12 am

Ken McKenzie wrote:
What you're essentially doing here is taking some data you don't like
Yah see, a lot of what you write makes sense (although I don't see why a reputable survey company would aggregate 'sometimes negative' with 'always negative' responses, for example, unless it is either to build a big number for a story or some kind of error), but it's not data I don't like; it's data I don't trust.

There seems to be a rush to conclude that I disagree with the notion that male sexual violence is problematic. Nothing could be further from the truth. Keenness to see robust data, with reasonable inferences drawn from it (rather than the hyperbolic headline rhetoric seen in the BBC piece), is normal. Maybe an FOI request to the BBC to get the ComRes brief is in order, idk.
The half-truths, repeated, authenticated themselves.

Fishnut
After Pie
Posts: 2447
Joined: Mon Nov 11, 2019 1:15 pm
Location: UK

Re: Survey data discussion split from Male normalisation of sexual violence thread

Post by Fishnut » Sat Dec 07, 2019 10:04 am

username wrote:
Thu Dec 05, 2019 10:09 pm
I'm not surprised nor shocked that such a questionable dataset was used by the BBC to suggest sexual violence was being normalized.
This has gone on long enough. Anyone who has done research in any field knows there are limitations that affect the robustness of the results. I don't think there's any study in existence that couldn't be improved with better resources or methodologies. But if we waited until everything was perfect, no research would ever get done. Time and money are the biggest limitations, but for the social sciences there is also the limitation of humans being annoyingly messy. And when discussing intimate issues such as, I don't know, non-consensual acts during sex, people are weirdly reticent to speak to others about this. God knows why. It certainly isn't because of the massive amount of stigma, victim blaming and downplaying of their experiences that they face.

Researchers do the best they can. This study is just one of a multitude of studies on the violence women face every day. It adds a piece to the puzzle, and that piece fits right into the current understanding of the situation. Do you know how I know this? Partly because I've actually gone and had a look at the literature (see my post here for a very brief summary), but mostly because every single person here has treated this as an issue of male violence against female partners, yet nowhere did the survey ask for the sexuality of the women or the gender of their sexual partners. Everyone has assumed that the survey is talking about heterosexual relationships, and not just because those are the dominant relationship type. It's because male violence against women is normal.

I don't think it's any surprise that the survey was done when it was - it was published a few days after the UN's International Day for the Elimination of Violence Against Women. The UN thinks there's a problem, all the data shows that there's a problem, yet username thinks that nitpicking this study will somehow cause the whole edifice to tumble down. He'll probably accuse me of misunderstanding his intentions or twisting his words, but as a woman reading his comments it is clear that, whether he means to or not, he is trying to minimise our experiences. A search through the other threads in this sub-forum shows that no other subject has caught his attention in the way that this minor news story has. He's quite happy to agree with the idea of Mass Shooting Contagion without any discussion of the validity of that theory. He's quite happy to go with the consensus that prisons are detrimental. Even in the potentially contentious discussion of building cladding he shows restraint. Yet strangely, when issues relating to women come up, he pops up like an even more annoying Clippy with his quibbling and nitpicking, claiming he just wants us to be using robust data. Do gender studies have a unique problem with robust data that only username is able to identify? Or do they just affront his desire for us women to stop complaining and accept that, now that we have equal rights under the law, feminism isn't needed any more?
it's okay to say "I don't know"

jimbob
Light of Blast
Posts: 5276
Joined: Mon Nov 11, 2019 4:04 pm
Location: High Peak/Manchester

Re: Survey data discussion split from Male normalisation of sexual violence thread

Post by jimbob » Sat Dec 07, 2019 5:11 pm

Ken McKenzie wrote:
Fri Dec 06, 2019 10:25 pm

<snip>
All surveys have an element of asking the respondents to use their interpretation. A steering group I am part of spent 3 months discussing how respondents might interpret the word 'happy' and how we might then use the results. And this was for a question that had been extensively tested by the ONS. We concluded that the only way to collect this data is to let the respondents be the judge and that this data would tell us something useful no matter how people interpreted it. You never get to tell survey respondents how they feel about things. You ask them.

The researchers will have thought about all of these issues - a lot - and quibbled - a lot - about the wording, coding and interpretation. In this case, we used a commercial survey company to administer the survey (not ComRes) because they could handle the sample size, volume of responses and complex coding required in the timeframe the data was needed. Incidentally, this was done online because it's a great way to contact people. However, like most professional surveys the respondents are not self-selected. You've no reason to believe this sample was self-selected. The Internet is just an easier way to contact people than the phone.

<snip>


This quibbling over the ethics and methodologies of research you don't like is the same method used by anti-vaxxers and climate deniers. You need all the people involved to be corrupt and stupid for it to work. At root, I would like to think that the reason you've got this so wrong is that you think survey work is pretty easy, when in fact it is very difficult and complex. Let's all act as if that's the reason and then we can get back to the topic.

tl;dr survey research is very hard and the researchers will have thought of all this already. The rewards for being corrupt and stupid are far outweighed in this field by the rewards for being professional and reliable.
Thanks.

I was wondering if you or Warmunch would weigh in with actual expertise, pointing out that most of the methodological issues are better known to professional social scientists than to physical scientists looking at headlines - or even at the actual results.
Have you considered stupidity as an explanation

snoozeofreason
Snowbonk
Posts: 485
Joined: Fri Nov 15, 2019 1:22 pm

Re: Survey data discussion split from Male normalisation of sexual violence thread

Post by snoozeofreason » Sun Dec 08, 2019 1:54 pm

Fishnut wrote:
Sat Dec 07, 2019 10:04 am
Anyone who has done research in any field knows there are limitations that affect the robustness of the results.
This is a very important point (speaking as someone who used to work and teach in a field with similar limitations). I think one of the reasons why there is so much heat in discussions like this is that it is a point that tends to get mentioned when figures are questioned, rather than when they are initially presented. Ideally the bar chart in that BBC article would have had a caveat to the effect that while it does indicate that we have something to worry about, we should be wary of using the numbers in the chart to quantify how much we should worry about it. I know that's never going to happen, but discussion of the issue would be much more productive if it did. It's not just that it is harder to have a heated argument about a figure if you have been told from the outset not to invest too much significance in its numerical value. It also makes it harder to draw doubtful conclusions from such figures (for example if you are comparing them with similar figures taken at a different time or place).
In six days the Lord made the heavens and the earth, the sea, and all that is in them. The human body was knocked up pretty late on the Friday afternoon, with a deadline looming. How well do you expect it to work?

Rex Piat
Ghastly Pink
Posts: 4
Joined: Fri Nov 29, 2019 3:31 pm

Re: Survey data discussion split from Male normalisation of sexual violence thread

Post by Rex Piat » Sun Dec 08, 2019 5:15 pm

I'd seen the thread had been split and pitted; I'm now just catching up with this resurrection. The question remains as to what the facility of the BBC/Comres survey is; this is irrespective of the quality of the Savanta/Comres work, although that work can only be understood in terms of the commissioner's (the BBC producer/editor) priorities. The fact that something may be measured doesn't make the measurement appropriate to the sought data.

The structure of the survey is not uncommon, it is of a kind used by campaigners, particularly consumer advocates to identify poor commercial practices, poor service or inadequate or dangerous products. A key aspect is that the consumer (the responder) is the sole source of data and the service provider, manufacturer etc is not an equal partner but a potential moral, civil or criminal offender.

Unsurprisingly this type of survey is very much favoured by media organisations, either commissioned directly, featured as the work of campaigners or merely accrued for clickbating. The (claimed) testimonial evidence of the customer can be used to identify enemies, friends and promote moral outrage and/or panic, all of which makes good copy/screen time.

The BBC/Comres survey was in effect a consumer report in which responders were given an opportunity to voice a complaint about service failings, moral deficiencies of the provider, delivery incompetence and/or legal (including criminal) deviations (!!!) from the (unspoken) contract with potential impacts on health and well being for the consumer. An important limitation on media use of such surveys is that a right of reply is required, at industry level this may be provided by Trade Associations or similar, otherwise individual offending businesses are given an opportunity to respond (refute, statement from lawyers, abject apology etc). In the particular case of course no right of reply is feasible so a key element of media balance was missing. Comres are not responsible here, but their survey is necessarily deficient in the balancing data.

The question then becomes what was the survey intended to do ? On the face of it, given that there was no possibility of right of reply, it was a deliberately imbalanced enterprise designed to create good copy and manipulate outrage. Again the responsibility lies with the commissioner, but the survey in itself remains deficient.

What might have been an alternative ? If there was a genuine interest in establishing insight into current sexual practices where unsought, unacceptable or harmful aggression may have been involved then some reference to a century of sexology studies might have been a starting point. To have validity a survey would need to gauge (given this is about heterosexual interaction) the experiences of both females and males by posing the same questions to age and gender matched participants. The question structure would need to encompass both passive and active interactions by both females and males, rather than the abysmal structure of the BBC/Comres questionnaire which only allows women to have been passive recipients of actions by males.*

Additionally a questionnaire would need to distinguish between long and short term (including one off encounters) relationships, levels of pre and post sex communication, the role of intoxication in sexual behaviours, the permissibility within those relationships of different behaviours over time, and critically a means of grading the qualities of specific actions – achievable by using rating or intensity scales. Only at this level of detail would the ambiguities of the BBC/Comres survey be avoided and interpretable data yielded, producing something that might be of value beyond simply exciting confirmation bias.

*I'm amazed that any thinking person would find a survey predicated on women being passive receivers of male actions in any way acceptable, or that the characterisation of heterosexuality in terms of purely passive/active gender roles would be thought to tell us anything of value.

User avatar
Stephanie
Stummy Beige
Posts: 2896
Joined: Wed Sep 25, 2019 4:38 pm
Location: clinging tenaciously to your buttocks

Re: Survey data discussion split from Male normalisation of sexual violence thread

Post by Stephanie » Sun Dec 08, 2019 9:47 pm

Rex Piat wrote:
Sun Dec 08, 2019 5:15 pm
The BBC/Comres survey was in effect a consumer report in which responders were given an opportunity to voice a complaint about service failings, moral deficiencies of the provider, delivery incompetence and/or legal (including criminal) deviations (!!!) from the (unspoken) contract with potential impacts on health and well being for the consumer. An important limitation on media use of such surveys is that a right of reply is required, at industry level this may be provided by Trade Associations or similar, otherwise individual offending businesses are given an opportunity to respond (refute, statement from lawyers, abject apology etc). In the particular case of course no right of reply is feasible so a key element of media balance was missing. Comres are not responsible here, but their survey is necessarily deficient in the balancing data.
"consumer report"? "Service failings"? "delivery incompetence"?

honestly, if you want to critique the survey, actually do it and don't write such utter bobbins.
"I got a flu virus named after me 'cause I kissed a bat on a dare."

User avatar
Fishnut
After Pie
Posts: 2447
Joined: Mon Nov 11, 2019 1:15 pm
Location: UK

Re: Survey data discussion split from Male normalisation of sexual violence thread

Post by Fishnut » Sun Dec 08, 2019 10:18 pm

snoozeofreason wrote:
Sun Dec 08, 2019 1:54 pm
Fishnut wrote:
Sat Dec 07, 2019 10:04 am
Anyone who has done research in any field knows there are limitations that affect the robustness of the results.
This is a very important point (speaking as someone who used to work and teach in a field with similar limitations). I think one of the reasons why there is so much heat in discussions like this is that it is a point that tends to get mentioned when figures are questioned, rather than when they are initially presented. Ideally the bar chart in that BBC article would have had a caveat to the effect that while it does indicate that we have something to worry about, we should be wary of using the numbers in the chart to quantify how much we should worry about it. I know that's never going to happen, but discussion of the issue would be much more productive if it did. It's not just that it is harder to have a heated argument about a figure if you have been told from the outset not to invest too much significance in its numerical value. It also makes it harder to draw doubtful conclusions from such figures (for example if you are comparing them with similar figures taken at a different time or place).
What figure would you accept as low enough not to need worrying about? Personally I think that anything above 0% of non-consensual acts during sex, or indeed during life, is unacceptable and needs to be dealt with, but I'd love to hear your cut-off.

The idea that the numerical value doesn't have significance is b.llsh.t. Of course it does. But no figure is gospel, that's just how science works. I work in marine ecology and my data are messy as f.ck. They are still meaningful, but if someone were to go and do the same study using different fish they'd get slightly different data. The conclusions will still be the same (unless I've really f.cked up) but the precise values will be different because it's a different sample. That's basic statistics. That's why we try and make sure our sample sizes are meaningful and unbiased. The survey data have been weighted using accepted methods to ensure the sample isn't biased. They have a large sample size. Sure, if they did the survey again they'd get slightly different figures, but so what? This survey found that 8% of women experienced non-consensual slapping, choking, gagging or spitting every time they had sex. If that was 6% or 10%, so what? What exactly is this nitpicking supposed to achieve?
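The point about sampling variability can be sketched with a quick simulation (the 8% figure is the survey's; the sample size of 1,000 and the number of repeats are assumptions for illustration). Each simulated re-run of the survey draws a fresh sample, and the observed percentage wobbles around the true rate without coming close to changing the conclusion:

```python
import random

random.seed(42)
TRUE_PREVALENCE = 0.08   # the survey's 8% figure, treated here as the true rate
SAMPLE_SIZE = 1000       # hypothetical sample size for illustration

def run_survey(n, p):
    """Simulate one survey: the fraction of n respondents reporting the experience."""
    return sum(1 for _ in range(n) if random.random() < p) / n

# 20 simulated re-runs of the same survey on the same population
estimates = [run_survey(SAMPLE_SIZE, TRUE_PREVALENCE) for _ in range(20)]
print([round(e * 100, 1) for e in estimates])
# every repeat lands near 8%, typically within a couple of percentage points
```

The exact numbers differ between repeats, but none of them would support a different conclusion - which is the "so what?" above.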
Rex Piat wrote:
Sun Dec 08, 2019 5:15 pm
Again the responsibility lies with the commissioner, but the survey in itself remains deficient.
What deficiencies are there that nullify the conclusion that women are experiencing non-consensual violent and degrading acts during sex?
Rex Piat wrote:
Sun Dec 08, 2019 5:15 pm
If there was a genuine interest in establishing insight into current sexual practices where unsought, unacceptable or harmful aggression may have been involved then some reference to a century of sexology studies might have been a starting point.
How do you know this hasn't been done? Just because the news article doesn't reference the extensive literature on the subject doesn't mean those writing it aren't aware of it.
Rex Piat wrote:
Sun Dec 08, 2019 5:15 pm
To have validity a survey would need to gauge (given this is about heterosexual interaction) the experiences of both females and males by posing the same questions to age and gender matched participants. The question structure would need to encompass both passive and active interactions by both females and males, rather than the abysmal structure of the BBC/Comres questionnaire which only allows women to have been passive recipients of actions by males.
Why? You are like the reviewer who gets a paper and tells them they should have studied something else instead. This was a survey about women experiencing violence during sex. Sure it would be interesting to know about men's experiences but that's a different question and not the one being asked here. I don't even know what you mean about passive and active interactions - I'd assume that sex was an active interaction and if one party is being passive then it suggests they are not really into the proceedings and maybe they should cease.
Rex Piat wrote:
Sun Dec 08, 2019 5:15 pm
Additionally a questionnaire would need to distinguish between long and short term (including one off encounters) relationships, levels of pre and post sex communication, the role of intoxication in sexual behaviours, the permissibility within those relationships of different behaviours over time, and critically a means of grading the qualities of specific actions – achievable by using rating or intensity scales. Only at this level of detail would the ambiguities of the BBC/Comres survey be avoided and interpretable data yielded, producing something that might be of value beyond simply exciting confirmation bias.
Again, why? These are "further study" questions. They would add detail and would help to determine where to focus any campaigns or attempts to reduce this violence but wouldn't change the conclusions of this piece.
Rex Piat wrote:
Sun Dec 08, 2019 5:15 pm
*I'm amazed that any thinking person would find a survey predicated on women being passive receivers of male actions in any way acceptable, or that the characterisation of heterosexuality in terms of purely passive/active gender roles would be thought to tell us anything of value.
Er, what? The survey is predicated on the fact that women are known to experience violence at the hands of men far more than men do at the hands of women. Tjaden & Thoennes (2000) (behind paywall) did a telephone survey of 16,000 people in the US and found that,
married/cohabiting women reported significantly more intimate perpetrated rape, physical assault, and stalking than did married/cohabiting men, whether the time period considered was the respondent's lifetime or the 12 months preceding the survey. Women also reported more frequent and longer lasting victimization, fear of bodily injury, time lost from work, injuries, and use of medical, mental health, and justice system services.
Walby et al. (2017) in their book "The Concept and Measurement of Violence Against Women and Men" point out that,
Lethal violence is gendered. Globally, 95% of perpetrators of intentional homicide are male. Every year, intimate partners or family members perpetrate nearly 64,000 intentional homicides; two thirds of victims are female. Half the intentional homicides of women are perpetrated by an intimate partner or other family members, compared to 6% of intentional homicides of men.
The World Health Organisation's first World Report on Violence and Health in 2002 notes that,
Although women can be violent in relationships with men, and violence is also sometimes found in same-sex partnerships, the overwhelming burden of partner violence is borne by women at the hands of men.
and goes on to dedicate an entire chapter of the report to intimate partner violence against women by men. The survey is not insulting by suggesting that women are passive receivers of men's actions, the survey is reflecting the awful reality that far too many women are still being treated horribly by the men who are closest to them.

So I ask you, what critiques do you have that nullify the conclusions of the survey? Show me how this survey contradicts the established literature on this topic or why it shouldn't be taken as what the rest of us see it as - another piece of the puzzle that adds to the overwhelming consensus that far too many men treat women like sh.t. Go to Google Scholar and do some f.cking reading on the subject and come back with references to back up your claims. And if you can't do that then shut the f.ck up because this tedious whataboutery is getting old.
it's okay to say "I don't know"

User avatar
discovolante
Stummy Beige
Posts: 4084
Joined: Fri Oct 11, 2019 5:10 pm

Re: Survey data discussion split from Male normalisation of sexual violence thread

Post by discovolante » Sun Dec 08, 2019 10:31 pm

Unfortunately, I don't have Fishnut's research skillz but I have to say, my jaw slowly dropped in horror as I read through your last post, Rex Piat. I am not quite sure why you are dedicating such a significant proportion of time picking holes in things that have (now) been addressed by Fishnut and complaining that a survey hasn't addressed issues that it doesn't even claim to be trying to address. It's not even a discussion of the data that *is* there.
To defy the laws of tradition is a crusade only of the brave.

User avatar
Woodchopper
Princess POW
Posts: 7057
Joined: Sat Oct 12, 2019 9:05 am

Re: Survey data discussion split from Male normalisation of sexual violence thread

Post by Woodchopper » Sun Dec 08, 2019 10:37 pm

Rex Piat wrote:
Sun Dec 08, 2019 5:15 pm
The question remains as to what the facility of the BBC/Comres survey is; this is irrespective of the quality of the Savanta/Comres work, although that work can only be understood in terms of the commissioner's (the BBC producer/editor) priorities. The fact that something may be measured doesn't make the measurement appropriate to the sought data.

The structure of the survey is not uncommon, it is of a kind used by campaigners, particularly consumer advocates to identify poor commercial practices, poor service or inadequate or dangerous products. A key aspect is that the consumer (the responder) is the sole source of data and the service provider, manufacturer etc is not an equal partner but a potential moral, civil or criminal offender.

Unsurprisingly this type of survey is very much favoured by media organisations, either commissioned directly, featured as the work of campaigners or merely accrued for clickbating. The (claimed) testimonial evidence of the customer can be used to identify enemies, friends and promote moral outrage and/or panic, all of which makes good copy/screen time.

The BBC/Comres survey was in effect a consumer report in which responders were given an opportunity to voice a complaint about service failings, moral deficiencies of the provider, delivery incompetence and/or legal (including criminal) deviations (!!!) from the (unspoken) contract with potential impacts on health and well being for the consumer.
This is all complete nonsense.

Such surveys have been around since the 1930s and the methodology is well established. They are used by governments, academics, political parties, companies, and others. As Ken McKenzie mentioned, it doesn't matter who commissioned the survey: if it's done properly, the people doing the survey will use those well-established methods (depending upon what information needs to be found out).

As Ken mentioned, ComRes is widely used in academic research, by people citing its surveys or commissioning them directly. See here for examples.
Rex Piat wrote:
Sun Dec 08, 2019 5:15 pm
An important limitation on media use of such surveys is that a right of reply is required, at industry level this may be provided by Trade Associations or similar, otherwise individual offending businesses are given an opportunity to respond (refute, statement from lawyers, abject apology etc). In the particular case of course no right of reply is feasible so a key element of media balance was missing. Comres are not responsible here, but their survey is necessarily deficient in the balancing data.
Which is irrelevant.

A right of reply only exists concerning a named entity, such as a company or individual - for example, if a survey showed that a particular company had the worst customer service record.

There is no need at all for a right of reply concerning responses that are anonymised (e.g. the readers of the survey have no idea of the identity of the people being surveyed or of the men who choked, bit or slapped them).

Rex Piat wrote:
Sun Dec 08, 2019 5:15 pm
The question then becomes what was the survey intended to do ?
On the face of it, given that there was no possibility of right of reply, it was a deliberately imbalanced enterprise designed to create good copy and manipulate outrage. Again the responsibility lies with the commissioner, but the survey in itself remains deficient.
Nonsense again. There is no need for a right of reply in an anonymised survey.
Rex Piat wrote:
Sun Dec 08, 2019 5:15 pm
What might have been an alternative ? If there was a genuine interest in establishing insight into current sexual practices where unsought, unacceptable or harmful aggression may have been involved then some reference to a century of sexology studies might have been a starting point. To have validity a survey would need to gauge (given this is about heterosexual interaction) the experiences of both females and males by posing the same questions to age and gender matched participants. The question structure would need to encompass both passive and active interactions by both females and males, rather than the abysmal structure of the BBC/Comres questionnaire which only allows women to have been passive recipients of actions by males.*
Nonsense again. To start with, given the subject matter, criminology would be an equally valid field.

Second, assessing the prevalence of specific forms of victimization among defined sub-groups within a population is a very well established academic practice, and such a survey is an appropriate method to do so.

Rex Piat wrote:
Sun Dec 08, 2019 5:15 pm
Additionally a questionnaire would need to distinguish between long and short term (including one off encounters) relationships, levels of pre and post sex communication, the role of intoxication in sexual behaviours, the permissibility within those relationships of different behaviours over time, and critically a means of grading the qualities of specific actions – achievable by using rating or intensity scales.
Those would be interesting subjects for follow-up studies. But they aren't necessary to draw conclusions from the survey under discussion.
Rex Piat wrote:
Sun Dec 08, 2019 5:15 pm
Only at this level of detail would the ambiguities of the BBC/Comres survey be avoided and interpretable data yielded, producing something that might be of value beyond simply exciting confirmation bias.
Complete nonsense again. Questions at that level of specificity are already widely used in order to study criminal victimization.
Rex Piat wrote:
Sun Dec 08, 2019 5:15 pm
*I'm amazed that any thinking person would find a survey predicated on women being passive receivers of male actions in any way acceptable, or that the characterisation of heterosexuality in terms of purely passive/active gender roles would be thought to tell us anything of value.
Odd, as the survey in question didn't show that.

User avatar
Woodchopper
Princess POW
Posts: 7057
Joined: Sat Oct 12, 2019 9:05 am

Re: Survey data discussion split from Male normalisation of sexual violence thread

Post by Woodchopper » Sun Dec 08, 2019 10:43 pm

Catching up with the thread.

Thanks to Fishnut and Ken McKenzie for some excellent posts.

User avatar
Woodchopper
Princess POW
Posts: 7057
Joined: Sat Oct 12, 2019 9:05 am

Re: Survey data discussion split from Male normalisation of sexual violence thread

Post by Woodchopper » Sun Dec 08, 2019 10:52 pm

snoozeofreason wrote:
Sun Dec 08, 2019 1:54 pm
Fishnut wrote:
Sat Dec 07, 2019 10:04 am
Anyone who has done research in any field knows there are limitations that affect the robustness of the results.
This is a very important point (speaking as someone who used to work and teach in a field with similar limitations). I think one of the reasons why there is so much heat in discussions like this is that it is a point that tends to get mentioned when figures are questioned, rather than when they are initially presented. Ideally the bar chart in that BBC article would have had a caveat to the effect that while it does indicate that we have something to worry about, we should be wary of using the numbers in the chart to quantify how much we should worry about it. I know that's never going to happen, but discussion of the issue would be much more productive if it did. It's not just that it is harder to have a heated argument about a figure if you have been told from the outset not to invest too much significance in its numerical value. It also makes it harder to draw doubtful conclusions from such figures (for example if you are comparing them with similar figures taken at a different time or place).
A survey like that will usually have an error margin of plus or minus three percentage points.

As Fishnut writes, even if it overestimates the prevalence of, say, unwanted choking, by those three percentage points it's still a concerning finding. But it's also possible that the survey underestimates the extent of the problem.

It's possible that some form of systematic bias could further skew the results. But IMHO if it existed that would be most likely to involve people not being willing to answer questions about distressing experiences.
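The plus-or-minus three points comes from the standard formula for a simple random sample: the 95% margin of error on a proportion p with n respondents is 1.96 × sqrt(p(1−p)/n), which peaks at p = 0.5. A quick sketch (the sample size of 1,000 is an assumption; the actual survey's n may differ):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# worst case (p = 0.5) with a hypothetical 1,000 respondents:
print(round(margin_of_error(0.5, 1000) * 100, 1))   # -> 3.1 percentage points
# smaller observed proportions, like the 8% figure, have tighter margins:
print(round(margin_of_error(0.08, 1000) * 100, 1))  # -> 1.7 percentage points
```

Weighting and non-random recruitment widen this in practice, so "about three points" is a rule of thumb rather than an exact bound.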

Post Reply