
Re: Survey data discussion split from Male normalisation of sexual violence thread

Posted: Sun Dec 08, 2019 11:10 pm
by Squeak
Woodchopper wrote:
Sun Dec 08, 2019 10:43 pm
Catching up with the thread.

Thanks to Fishnut and Ken McKenzie for some excellent posts.
This. Holy hell. That was a seriously impressive set of whataboutery, even by the standards I expect to see in public(ish) discussions of attempts to quantify rates of violence against women. The numbers coming out of the social sciences are always rubbery, just like the ones coming out of ecology, because they're studying really complex things. We're not looking for five-sigma certainty, and these complaints seem to indicate either a severe (if unwitting) or a wilful ignorance of the field.

I'm trying to get my head around who would be the appropriate person to have that "right of reply" for the unnamed (#notall?) men whose alleged actions were reported by the anonymised respondents.
A key aspect is that the consumer (the respondent) is the sole source of data, and the service provider, manufacturer etc. is not an equal partner but a potential moral, civil or criminal offender.
Who should be the arbiter of whether a woman did or did not want a thing to be done to her?

Re: Survey data discussion split from Male normalisation of sexual violence thread

Posted: Sun Dec 08, 2019 11:43 pm
by jimbob
Woodchopper wrote:
Sun Dec 08, 2019 10:43 pm
Catching up with the thread.

Thanks to Fishnut and Ken McKenzie for some excellent posts.
Yup, and you. Some serious rebuttals, although Ken McKenzie was even psychic enough to rebut the future post, which you highlighted.

I get a bit tired of some people's automatic dismissal of social science based on hypothesized methodological flaws.

And I'm just an engineer, so it's not my field.

Re: Survey data discussion split from Male normalisation of sexual violence thread

Posted: Mon Dec 09, 2019 12:26 am
by snoozeofreason
Woodchopper wrote:
Sun Dec 08, 2019 10:52 pm
snoozeofreason wrote:
Sun Dec 08, 2019 1:54 pm
Fishnut wrote:
Sat Dec 07, 2019 10:04 am
Anyone who has done research in any field knows there are limitations that affect the robustness of the results.
This is a very important point (speaking as someone who used to work and teach in a field with similar limitations). I think one of the reasons why there is so much heat in discussions like this is that it is a point that tends to get mentioned when figures are questioned, rather than when they are initially presented. Ideally the bar chart in that BBC article would have had a caveat to the effect that while it does indicate that we have something to worry about, we should be wary of using the numbers in the chart to quantify how much we should worry about it. I know that's never going to happen, but discussion of the issue would be much more productive if it did. It's not just that it is harder to have a heated argument about a figure if you have been told from the outset not to invest too much significance in its numerical value. It also makes it harder to draw doubtful conclusions from such figures (for example if you are comparing them with similar figures taken at a different time or place).
A survey like that will usually have an error margin of plus or minus three percentage points.

As Fishnut writes, even if it overestimates the prevalence of, say, unwanted choking, by those three percentage points it's still a concerning finding. But it's also possible that the survey underestimates the extent of the problem.

It's possible that some form of systematic bias could further skew the results. But IMHO if it existed that would be most likely to involve people not being willing to answer questions about distressing experiences.
I don't disagree with you there, but I think that when Fishnut and other upthread talk about the limitations on the robustness of data in such fields, they aren't necessarily talking about the sort of thing that can be neatly expressed in terms of error margins. I certainly wasn't. As Fishnut points out there is an element of human messiness, and of allowing people to put their own interpretations on questions, that doesn't get completely captured by that sort of neat quantitative analysis.
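For anyone wondering where a figure like "plus or minus three percentage points" comes from, here is a minimal sketch in Python. The sample size of 1,000 and the 8% prevalence are illustrative assumptions, not details of the survey's actual design; the calculation is just the standard 95% margin of error for a sampled proportion.

Code: Select all

import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p estimated from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

n = 1000                                # assumed sample size, illustration only
worst_case = margin_of_error(0.5, n)    # p = 0.5 gives the largest margin
at_eight = margin_of_error(0.08, n)     # e.g. the 8% figure discussed upthread

print(f"worst-case margin: +/- {worst_case:.1%}")  # about +/- 3.1 points
print(f"margin at 8%:      +/- {at_eight:.1%}")    # about +/- 1.7 points

The worst-case margin depends almost entirely on the sample size, which is why polls of around a thousand people are routinely reported with a three-point margin regardless of the question asked.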

Re: Survey data discussion split from Male normalisation of sexual violence thread

Posted: Mon Dec 09, 2019 1:26 am
by snoozeofreason
Fishnut wrote:
Sun Dec 08, 2019 10:18 pm
snoozeofreason wrote:
Sun Dec 08, 2019 1:54 pm
Fishnut wrote:
Sat Dec 07, 2019 10:04 am
Anyone who has done research in any field knows there are limitations that affect the robustness of the results.
This is a very important point (speaking as someone who used to work and teach in a field with similar limitations). I think one of the reasons why there is so much heat in discussions like this is that it is a point that tends to get mentioned when figures are questioned, rather than when they are initially presented. Ideally the bar chart in that BBC article would have had a caveat to the effect that while it does indicate that we have something to worry about, we should be wary of using the numbers in the chart to quantify how much we should worry about it. I know that's never going to happen, but discussion of the issue would be much more productive if it did. It's not just that it is harder to have a heated argument about a figure if you have been told from the outset not to invest too much significance in its numerical value. It also makes it harder to draw doubtful conclusions from such figures (for example if you are comparing them with similar figures taken at a different time or place).
What figure would you accept as something not worth worrying about? Personally I think that anything above 0% of non-consensual acts during sex, or indeed during life, is unacceptable and needs to be dealt with, but I'd love to hear your cut-off.

The idea that the numerical value doesn't have significance is b.llsh.t. Of course it does. But no figure is gospel, that's just how science works. I work in marine ecology and my data are messy as f.ck. They are still meaningful but if someone were to go and do the same study using different fish they'd get slightly different data. The conclusions will still be the same (unless I've really f.cked up) but the precise values will be different because it's a different sample. That's basic statistics. That's why we try and make sure our sample sizes are meaningful and unbiased. The survey data has been weighted using accepted methods to ensure the sample isn't biased. They have a large sample size. Sure, if they did the survey again they'd get slightly different figures but so what? This survey found that 8% of women experienced non-consensual slapping, choking, gagging or spitting every time they had sex. If that were 6% or 10%, so what? What exactly is this nitpicking supposed to achieve?
Sorry, when I responded to woodchopper, I didn't notice that you had also responded to my post. I think we are talking at cross-purposes, and I have no doubt that this is my fault for not expressing myself clearly. I was trying to agree with you, and I seem to have ended up in an argument with you. I wasn't trying to suggest that there was some cut-off below which we would have nothing to worry about. If anything, I was trying to say that there wasn't. And I wasn't trying to suggest that there was no significance to the numerical value. I did say that we shouldn't invest too much significance in the numerical value, but what I meant by that was pretty much the same as what you mean (or at least I think you mean) when you talk about the messiness of the data.

I think I might be better off bowing out of this discussion. And by the way, the survey of the evidence you gave on the other thread is awesome.
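As a footnote to Fishnut's point that repeating the same study on a different sample gives slightly different numbers but the same conclusion, here is a minimal simulation sketch. The "true" prevalence of 8% and the sample size of 1,000 are invented for illustration; the point is only that the headline figure wobbles by a point or so between runs.

Code: Select all

import random

random.seed(1)
TRUE_PREVALENCE = 0.08   # assumed rate in the population, illustration only
N = 1000                 # assumed respondents per simulated survey

def run_survey():
    """Draw one simulated sample and return the estimated prevalence."""
    hits = sum(random.random() < TRUE_PREVALENCE for _ in range(N))
    return hits / N

estimates = [run_survey() for _ in range(5)]
print([f"{e:.1%}" for e in estimates])  # five estimates scattered around 8%

Each simulated "survey" lands somewhere near 8% without ever hitting it exactly, which is the basic statistics point being made: different samples, slightly different figures, same conclusion.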

Re: Survey data discussion split from Male normalisation of sexual violence thread

Posted: Mon Dec 09, 2019 5:44 am
by Woodchopper
snoozeofreason wrote:
Mon Dec 09, 2019 12:26 am
Woodchopper wrote:
Sun Dec 08, 2019 10:52 pm
snoozeofreason wrote:
Sun Dec 08, 2019 1:54 pm


This is a very important point (speaking as someone who used to work and teach in a field with similar limitations). I think one of the reasons why there is so much heat in discussions like this is that it is a point that tends to get mentioned when figures are questioned, rather than when they are initially presented. Ideally the bar chart in that BBC article would have had a caveat to the effect that while it does indicate that we have something to worry about, we should be wary of using the numbers in the chart to quantify how much we should worry about it. I know that's never going to happen, but discussion of the issue would be much more productive if it did. It's not just that it is harder to have a heated argument about a figure if you have been told from the outset not to invest too much significance in its numerical value. It also makes it harder to draw doubtful conclusions from such figures (for example if you are comparing them with similar figures taken at a different time or place).
A survey like that will usually have an error margin of plus or minus three percentage points.

As Fishnut writes, even if it overestimates the prevalence of, say, unwanted choking, by those three percentage points it's still a concerning finding. But it's also possible that the survey underestimates the extent of the problem.

It's possible that some form of systematic bias could further skew the results. But IMHO if it existed that would be most likely to involve people not being willing to answer questions about distressing experiences.
I don't disagree with you there, but I think that when Fishnut and other upthread talk about the limitations on the robustness of data in such fields, they aren't necessarily talking about the sort of thing that can be neatly expressed in terms of error margins. I certainly wasn't. As Fishnut points out there is an element of human messiness, and of allowing people to put their own interpretations on questions, that doesn't get completely captured by that sort of neat quantitative analysis.
snooze, that’s fair enough, and no need for you to avoid the thread.

I agree that there's no need to get hung up on whether the proportion of women reporting unwanted biting, gagging, hair pulling etc. was, say, 37% or 34% or 40%. They would all be concerning.

Re: Survey data discussion split from Male normalisation of sexual violence thread

Posted: Mon Dec 09, 2019 8:08 am
by Tessa K
I think I'm a little bit in love with Fishnut.

Re: Survey data discussion split from Male normalisation of sexual violence thread

Posted: Mon Dec 09, 2019 8:10 am
by Stephanie
Tessa K wrote:
Mon Dec 09, 2019 8:08 am
I think I'm a little bit in love with Fishnut.
She is bl..dy awesome

Re: Survey data discussion split from Male normalisation of sexual violence thread

Posted: Mon Dec 09, 2019 8:43 am
by El Pollo Diablo
Squeak wrote:
Sun Dec 08, 2019 11:10 pm
I'm trying to get my head around who would be the appropriate person to have that "right of reply" for the unnamed (#notall?) men whose alleged actions were reported by the anonymised respondents.
I'm trying to get my head around why someone would want those who commit sexual assault to have a right of reply. I'm not coming up with any reasons that speak of rex piat in any edifying way.
Who should be the arbiter of whether a woman did or did not want a thing to be done to her?
Gammons, apparently. Maybe her father would be best.

Re: Survey data discussion split from Male normalisation of sexual violence thread

Posted: Mon Dec 09, 2019 9:43 am
by Ken McKenzie
Fishnut wrote:
Sun Dec 08, 2019 10:18 pm
The idea that the numerical value doesn't have significance is b.llsh.t. Of course it does. But no figure is gospel, that's just how science works. I work in marine ecology and my data are messy as f.ck. They are still meaningful but if someone were to go and do the same study using different fish they'd get slightly different data.
This is the nub of it, put far more elegantly than I could have put it.

Whenever we get data from social research we have the same discussion about the data - 'illustrative or indicative?' This is essentially the distinction between saying '62% of respondents say that Lancashire County Cricket Club are the world's finest sporting outfit' and 'A lot of respondents say that Lancashire County Cricket Club are the world's finest sporting outfit'.

If your aim is to work out what proportion of the sample venerate Lancs CCC, and so are prone to misery and despair between April and September of a given summer, then you want the data to be indicative and to have a proportion. But often, if you want to explore an issue, merely being able to say with a degree of authority that 'a lot of people are sadly infatuated with Lancs' can be very valuable. The difficulty of social research is that even if you get a census sample that meets the strict mathematical criteria required to evade the need to weight (very difficult to do indeed), the next time you take the survey the results will be different, so an excessive focus on the exact figure is not always helpful.

This is a change in mindset that, in my experience, many scientists can struggle with (and many others thrive with to be fair). It does not mean a lack of rigour or seriousness in the field. It means you are no longer dealing with a discipline where there is a 'correct' answer that, if you work hard enough and design the experiments right, you can eventually solve. There is never an immutably correct answer in social research that you can measure and remain confident that it will stay measured. There isn't an equation that will describe it.

Your language and approach have to be different. It is, to be honest, a fascinating challenge that I think will appeal to a lot of scientists, and we could do with more good quants in the field.
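Since weighting comes up both in Ken McKenzie's post above and in Fishnut's earlier one, here is a minimal sketch of what it does in practice. The groups, shares and responses below are invented; the actual survey's weighting variables (age, region and so on) aren't described in this thread.

Code: Select all

# If one group is over-represented in the raw sample, weight each respondent
# by (population share) / (sample share) before computing the headline figure.
population_share = {"under_40": 0.45, "over_40": 0.55}   # hypothetical targets

# Hypothetical raw sample: (group, answered_yes)
sample = ([("under_40", True)] * 120 + [("under_40", False)] * 480
          + [("over_40", True)] * 30 + [("over_40", False)] * 370)

sample_share = {g: sum(1 for s, _ in sample if s == g) / len(sample)
                for g in population_share}
weights = {g: population_share[g] / sample_share[g] for g in population_share}

weighted_yes = sum(weights[g] for g, yes in sample if yes)
weighted_total = sum(weights[g] for g, _ in sample)

print(f"raw estimate:      {sum(yes for _, yes in sample) / len(sample):.1%}")  # 15.0%
print(f"weighted estimate: {weighted_yes / weighted_total:.1%}")                # 13.1%

The weighted figure is the one a survey house would report, which is roughly what Fishnut means by the data having been "weighted using accepted methods to ensure the sample isn't biased".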

Re: Survey data discussion split from Male normalisation of sexual violence thread

Posted: Mon Dec 09, 2019 10:22 am
by snoozeofreason
Ken McKenzie wrote:
Mon Dec 09, 2019 9:43 am
This is a change in mindset that, in my experience, many scientists can struggle with (and many others thrive with to be fair). It does not mean a lack of rigour or seriousness in the field. It means you are no longer dealing with a discipline where there is a 'correct' answer that, if you work hard enough and design the experiments right, you can eventually solve. There is never an immutably correct answer in social research that you can measure and remain confident that it will stay measured. There isn't an equation that will describe it.
That's more or less what I was trying, and evidently failing, to express earlier. I suspect that you have phrased it much better.

Re: Survey data discussion split from Male normalisation of sexual violence thread

Posted: Mon Dec 09, 2019 11:01 am
by Fishnut
Stephanie wrote:
Mon Dec 09, 2019 8:10 am
Tessa K wrote:
Mon Dec 09, 2019 8:08 am
I think I'm a little bit in love with Fishnut.
She is bl..dy awesome
That's very kind of you both! It's made my day :D

Re: Survey data discussion split from Male normalisation of sexual violence thread

Posted: Mon Dec 09, 2019 11:49 am
by Ken McKenzie
snoozeofreason wrote:
Mon Dec 09, 2019 10:22 am
Ken McKenzie wrote:
Mon Dec 09, 2019 9:43 am
This is a change in mindset that, in my experience, many scientists can struggle with (and many others thrive with to be fair). It does not mean a lack of rigour or seriousness in the field. It means you are no longer dealing with a discipline where there is a 'correct' answer that, if you work hard enough and design the experiments right, you can eventually solve. There is never an immutably correct answer in social research that you can measure and remain confident that it will stay measured. There isn't an equation that will describe it.
That's more or less what I was trying, and evidently failing, to express earlier. I suspect that you have phrased it much better.
No, you made a good point, and made it well, about how data can be imprecise and still useful. I'm looking at something slightly different (but very related).

Re: Survey data discussion split from Male normalisation of sexual violence thread

Posted: Mon Dec 09, 2019 12:22 pm
by Tessa K
El Pollo Diablo wrote:
Mon Dec 09, 2019 8:43 am
Squeak wrote:
Sun Dec 08, 2019 11:10 pm
I'm trying to get my head around who would be the appropriate person to have that "right of reply" for the unnamed (#notall?) men whose alleged actions were reported by the anonymised respondents.
I'm trying to get my head around why someone would want those who commit sexual assault to have a right of reply. I'm not coming up with any reasons that speak of rex piat in any edifying way.
Who should be the arbiter of whether a woman did or did not want a thing to be done to her?
Gammons, apparently. Maybe her father would be best.
They do have a right of reply. In court. In front of a judge and jury. Where they belong.