Stats problem - flood events

Discussions about serious topics, for serious people
plodder
Stummy Beige
Posts: 2981
Joined: Mon Nov 11, 2019 1:50 pm

Stats problem - flood events

Post by plodder » Mon Feb 24, 2020 10:12 am

Hi all

Inspired by this tweet:

https://twitter.com/johncurtinEA/status ... 7937047554

What are the baseline odds of 25% of gauges recording record flows in 10 years of a (say) 60 year series? Maybe define "record" as a 1% likelihood annual event?

Obviously you'd expect plenty of sites to have record flows, especially if you waited long enough. I'm struggling to remember my stats in order to get a sense for what "normal" looks like.

dyqik
Princess POW
Posts: 7575
Joined: Wed Sep 25, 2019 4:19 pm
Location: Masshole

Re: Stats problem - flood events

Post by dyqik » Mon Feb 24, 2020 10:46 am

Depends on how much correlation you expect between gauges.

You can do the wrong but simple thing of calculating as if all the gauges are independent, but that's obviously wrong where gauges share catchments or where their catchments share weather.

Gfamily
Light of Blast
Posts: 5233
Joined: Mon Nov 11, 2019 1:00 pm
Location: NW England

Re: Stats problem - flood events

Post by Gfamily » Mon Feb 24, 2020 11:07 am

dyqik wrote:
Mon Feb 24, 2020 10:46 am
Depends on how much correlation you expect between gauges.

You can do the wrong but simple thing of calculating as if all the gauges are independent, but that's obviously wrong where gauges share catchments or where their catchments share weather.
Or where gauges share river basin flood management practices.
There's a lot of noise about the EU banning dredging (or at least making it difficult), but it's hard to know whether there are grounds for that being an issue.
My avatar was a scientific result that was later found to be 'mistaken' - I rarely claim to be 100% correct
ETA 5/8/20: I've been advised that the result was correct, it was the initial interpretation that needed to be withdrawn
Meta? I'd say so!

plodder
Stummy Beige
Posts: 2981
Joined: Mon Nov 11, 2019 1:50 pm

Re: Stats problem - flood events

Post by plodder » Mon Feb 24, 2020 12:23 pm

Let's not talk about dredging (google CIWEM dredging if you want a technical perspective) and let's assume that the gauges are all independent, because catchment response and rainfall patterns are complex.

Bird on a Fire
Princess POW
Posts: 10137
Joined: Fri Oct 11, 2019 5:05 pm
Location: Portugal

Re: Stats problem - flood events

Post by Bird on a Fire » Mon Feb 24, 2020 12:32 pm

Gfamily wrote:
Mon Feb 24, 2020 11:07 am
Or where gauges share river basin flood management practices.
There's a lot of noise about EU banning dredging (or making it difficult at least), but it's hard to know whether there's grounds for that being an issue.
The EU hasn't banned dredging, or made it particularly difficult outside of conservation areas. Even within conservation areas, dredging is allowed in cases of overriding public interest, which would, for example, cover protecting public safety by preventing flooding.

The UK government has, however, gutted the Environment Agency, the body responsible for predicting and preventing flooding, who have been pretty clear that dredging isn't the most important issue in every single previous flooding event, and I expect the same holds here.
We have the right to a clean, healthy, sustainable environment.

plodder
Stummy Beige
Posts: 2981
Joined: Mon Nov 11, 2019 1:50 pm

Re: Stats problem - flood events

Post by plodder » Mon Feb 24, 2020 12:33 pm

No it hasn't, and google CIWEM dredging if you want a technical perspective.

Gfamily
Light of Blast
Posts: 5233
Joined: Mon Nov 11, 2019 1:00 pm
Location: NW England

Re: Stats problem - flood events

Post by Gfamily » Mon Feb 24, 2020 12:37 pm

plodder wrote:
Mon Feb 24, 2020 12:33 pm
No it hasn't, and google CIWEM dredging if you want a technical perspective.
Thanks - first look suggests it's authoritative and useful.

Bird on a Fire
Princess POW
Posts: 10137
Joined: Fri Oct 11, 2019 5:05 pm
Location: Portugal

Re: Stats problem - flood events

Post by Bird on a Fire » Mon Feb 24, 2020 12:39 pm

plodder wrote:
Mon Feb 24, 2020 12:23 pm
Let's not talk about dredging (google CIWEM dredging if you want a technical perspective) and let's assume that the gauges are all independent, because catchment response and rainfall patterns are complex.
So to be clear on the parameters:
- gauges vary independently
- flooding events vary independently
- a 'record event' has a 1% probability in a given year
- 25% of gauges have recorded a 'record event' in (the most recent?) 10 years of a 60-year dataset

I would probably work this out with stochastic simulations rather than trying to derive it analytically. Rainfall events are presumably Poisson-distributed, but flooding might be negative binomial, as extreme events depend on the failure of previously-existing infrastructure?

In fact, given all the complications in understanding the underlying statistical processes, a sensible starting point might be to resample some empirical data - shuffle the records across years to break up any temporal trends and then see how frequently a pattern at least as extreme as that of interest emerges.

If a suitable dataset is available, the code to do this is pretty straightforward....
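Something like this sketch, say, where obs is a hypothetical gauges-by-years matrix of observed annual maxima (the name and shape are my assumption):

```r
# resampling sketch: shuffle each gauge's years to break temporal structure,
# then ask how often the shuffled data look at least as "recent-record-heavy"
# as the observed data. 'obs' is a hypothetical gauges x years matrix.
prop_recent_records <- function(m, last = 10) {
  rec <- apply(m, 1, which.max)   # year index of each gauge's all-time record
  mean(rec > ncol(m) - last)      # share of records in the final 'last' years
}

perm_test <- function(obs, nrep = 1000, last = 10) {
  observed <- prop_recent_records(obs, last)
  null <- replicate(nrep, {
    shuffled <- t(apply(obs, 1, sample))  # permute years within each gauge
    prop_recent_records(shuffled, last)
  })
  mean(null >= observed)          # permutation p-value
}
```

With real gauge data in obs, perm_test(obs) estimates the chance of seeing a pattern at least as extreme as the observed one under the no-trend null.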

plodder
Stummy Beige
Posts: 2981
Joined: Mon Nov 11, 2019 1:50 pm

Re: Stats problem - flood events

Post by plodder » Mon Feb 24, 2020 12:46 pm

forget about failure of infrastructure, these are just rare (record) river levels. They tend to use the Gumbel distribution IIRC.
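(For anyone as rusty as me: the Gumbel quantile function is simple enough to write down directly - this mirrors evd::qgumbel, with illustrative location/scale defaults:)

```r
# Gumbel quantile function, written out directly (equivalent to evd::qgumbel)
qgumbel_manual <- function(p, loc = 0, scale = 1) loc - scale * log(-log(p))

# the "1% annual exceedance" level on a standard Gumbel (loc = 0, scale = 1)
qgumbel_manual(0.99)  # about 4.60
```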

Bird on a Fire
Princess POW
Posts: 10137
Joined: Fri Oct 11, 2019 5:05 pm
Location: Portugal

Re: Stats problem - flood events

Post by Bird on a Fire » Mon Feb 24, 2020 1:04 pm

plodder wrote:
Mon Feb 24, 2020 12:46 pm
forget about failure of infrastructure, these are just rare (record) river levels. They tend to use Gumbel distribution IIRC.
That's handy, thanks - haven't worked with extreme value distributions before, but they *seem* straightforward enough. Will see if I can knock something up.

Bird on a Fire
Princess POW
Posts: 10137
Joined: Fri Oct 11, 2019 5:05 pm
Location: Portugal

Re: Stats problem - flood events

Post by Bird on a Fire » Mon Feb 24, 2020 1:46 pm

Ok I've had a first go at this.

The code simulates an annual maximum for 400 gauges over 60 years, then finds what proportion of them had their all-time record in the last decade.

This exercise is repeated 10,000 times and I plot the distribution of simulated proportions, with a red dotted line to indicate the 25% threshold reported in the original tweet.

R code (which I've tried to make readable):

Code: Select all

# extreme flooding simulation for plodder on scrutable.science

library(evd) # package for extreme value distributions

# define function
extreme_flood_sim <- function(gauges=400, years=60) {
  # generate matrix of random data from the Gumbel distribution (location=0, scale=1)
  rmat <- matrix( rgumbel(n=gauges*years), nrow=gauges)
  
  # find which value breaks the all-time record
  record <- apply(rmat, 1, which.max)
  
  # what proportion of these are in the last decade?
  prop <-  sum(record >(years-10)) / gauges
  
  # return this value
  return(prop)
}

# run function to generate distribution of results
Nrep <- 10000 # number of iterations
res <- rep(NA, Nrep) # empty vector to store results

for(i in 1:Nrep) {
  res[i] <- extreme_flood_sim()
}

# plot histogram
hist(res, xlab='Proportion',
     main="Gauges with all-time record in last 10 years")
abline(v=0.25, col='red', lty=2) # add dotted line to indicate 25%

# how many simulations return a value over 0.25?
sum(res>=0.25)

# distribution
mean(res)
fivenum(res) # min, 1st quartile, median, 3rd quartile, max
Results:
Under the simulated scenario, a mean of 16.7% of gauges had their all-time record in the last decade (Fig 1).
Fig 1: Histogram of simulation results (floodresults.png)
The range of 10,000 simulations was 10.75-24.5%, with 95% of values falling in the interval 13.75-19.75%, and half between 15.5-18%.

TLDR: the simulated scenario does not seem likely to result in the reported observations.

Caveats:
These results appear to be insensitive to the scale and location parameters of the Gumbel distribution, so I just used the default values of the evd R package (scale=1, location=0). Can't think of a reason why it would be sensitive (Gumbel is, after all, an extreme-value distribution!) but I don't know much about it.

Conclusion: It is very unlikely that a consistent process monitored for 60 years would result in 25% of gauges experiencing extreme events in a given decade. It therefore seems reasonable to conclude that something about the underlying process that generates river-gauge levels has produced an increase in extreme measurements in recent years. Candidate processes include precipitation, land-use changes, river management or indeed the measurement process itself. A more thorough investigation would include weather, land use and the various relevant spatial dependencies.
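For comparison, the independent case can also be cross-checked analytically: under iid, each gauge's record is equally likely to land in any year, so the count of gauges with a record in the last decade is Binomial(400, 1/6). A sketch:

```r
# analytic cross-check of the independent case: each gauge's all-time record
# is equally likely to fall in any of the 60 years, so P(record in the last
# decade) = 10/60, and the number of "recent-record" gauges out of 400 is
# Binomial(400, 1/6)
n_gauges <- 400
p_recent <- 10 / 60

# probability that at least 25% of gauges (i.e. >= 100) broke their record
# in the last decade, under independence
pbinom(0.25 * n_gauges - 1, size = n_gauges, prob = p_recent, lower.tail = FALSE)
```

This comes out vanishingly small, consistent with the simulated histogram never reaching 25%.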

dyqik
Princess POW
Posts: 7575
Joined: Wed Sep 25, 2019 4:19 pm
Location: Masshole

Re: Stats problem - flood events

Post by dyqik » Mon Feb 24, 2020 1:49 pm

Depending on what exactly is meant by the wording, you may want to then work out the probabilities for any contiguous ten year period in the previous 60 years.

I'd also guess (and this really is a guess) that it wouldn't take much correlation between gauges for the peak of that distribution to move up into the 20-25% range.

Bird on a Fire
Princess POW
Posts: 10137
Joined: Fri Oct 11, 2019 5:05 pm
Location: Portugal

Re: Stats problem - flood events

Post by Bird on a Fire » Mon Feb 24, 2020 2:04 pm

dyqik wrote:
Mon Feb 24, 2020 1:49 pm
Depending on what exactly is meant by the wording, you may want to then work out the probabilities for any contiguous ten year period in the previous 60 years.
As the simulated "flood levels" are iid any selection of 10 years should give identical results (unless I'm having a total brain fart).
dyqik wrote:
Mon Feb 24, 2020 1:49 pm
I'd also guess (and this really is a guess), that it wouldn't take much correlation between gauges for the peak of that distribution to move up in to the 20-25% range.
At first I thought that was quite likely, but having thought about it more I'm not sure.

Correlation between gauges will increase the probability that they are peaking at the same time as each other, which would probably flatten the distribution somewhat to encompass 25% and make it less improbable. But I don't see how it could shift 25% of extremes into a particular 16.7% of time?


The issue I'm most uncertain about is the length of timeseries for each gauge. Obviously the peak of the distribution is just 1/6, so if we only had an average of 40 years per gauge rather than 60 then the 25% pattern would suddenly become expected. I'm hoping there's an easily-obtained spreadsheet somewhere of when each gauge was functioning, but that might be hoping for too much from an underfunded public infrastructure body.
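That's quick to sanity-check: under iid annual maxima the expected share of records in the last decade is simply 10/years regardless of the distribution (which.max only depends on ranks, which is presumably also why the results were insensitive to the Gumbel parameters). A sketch with hypothetical 40-year records:

```r
# under iid annual maxima, the expected share of all-time records in the last
# decade is simply 10/years; the distribution doesn't matter, so plain rnorm
# will do (which.max only depends on ranks)
prop_last_decade <- function(gauges = 400, years = 60, last = 10) {
  rec <- apply(matrix(rnorm(gauges * years), nrow = gauges), 1, which.max)
  mean(rec > years - last)
}

set.seed(42)
mean(replicate(2000, prop_last_decade(years = 40)))  # close to 10/40 = 0.25
```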

dyqik
Princess POW
Posts: 7575
Joined: Wed Sep 25, 2019 4:19 pm
Location: Masshole

Re: Stats problem - flood events

Post by dyqik » Mon Feb 24, 2020 3:29 pm

Bird on a Fire wrote:
Mon Feb 24, 2020 2:04 pm
dyqik wrote:
Mon Feb 24, 2020 1:49 pm
Depending on what exactly is meant by the wording, you may want to then work out the probabilities for any contiguous ten year period in the previous 60 years.
As the simulated "flood levels" are iid any selection of 10 years should give identical results (unless I'm having a total brain fart).
There are 51 contiguous decades in 60 years, and you could say "previous ten years" at the end of _any_ of them. The probabilities are the same for each of them independently, but there are 51 to choose from. So the probability of there being at least one ten-year period in 60 years in which the criterion is reached is a lot higher.

Also, the end of each earlier 10 year period is the final 10 years of a shorter period "since records began" - the probability of 400 gauges having had their records in the past 10 years is 1.0 at the end of the first decade. ;)

Whether that's the right measure depends on the exact wording of the stat, and what would be meaningful to the implications claimed. I don't think it is directly relevant in this case though.
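That sliding-window version is easy to sketch under the same iid assumptions as before: for each window end t, take the share of gauges whose record-so-far (years 1..t) falls in years t-9..t.

```r
# share of gauges whose record "since records began" falls in the most recent
# decade, evaluated at every possible window end t = 10..60 (iid assumption)
window_props <- function(gauges = 400, years = 60, last = 10) {
  m <- matrix(rnorm(gauges * years), nrow = gauges)
  sapply(last:years, function(t) {
    rec <- apply(m[, 1:t, drop = FALSE], 1, which.max)
    mean(rec > t - last)
  })
}

p <- window_props()
p[1]    # always 1: at the end of the first decade every record is "recent"
max(p)  # looking across all 51 windows inflates the most extreme value seen
```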

dyqik
Princess POW
Posts: 7575
Joined: Wed Sep 25, 2019 4:19 pm
Location: Masshole

Re: Stats problem - flood events

Post by dyqik » Mon Feb 24, 2020 3:50 pm

Bird on a Fire wrote:
Mon Feb 24, 2020 2:04 pm
dyqik wrote:
Mon Feb 24, 2020 1:49 pm
I'd also guess (and this really is a guess), that it wouldn't take much correlation between gauges for the peak of that distribution to move up in to the 20-25% range.
At first I thought that was quite likely, but having thought about it more I'm not sure.

Correlation between gauges will increase the probability that they are peaking at the same time as each other, which would probably flatten the distribution somewhat to encompass 25% and make it less improbable. But I don't see how it could shift 25% of extremes into a particular 16.7% of time?
For weak correlations with some correlation length between the sampling points, the correlations effectively reduce the number of independent measurement points. So for a correlation length of 4, we go from 400 gauges to 100 independent gauges (in the weak-correlation limit, correlation length << N, it doesn't matter whether that's a 1% correlation across all 400 gauges, a 10% correlation within groups of 40, or a 100% correlation within groups of 4).

See, e.g. https://arxiv.org/pdf/1406.6768.pdf
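As a crude sketch of the fully-correlated-groups limit (my numbers, not from the paper): replace 400 independent gauges with 100 groups of 4 moving in lockstep. The mean proportion stays at 1/6, but the spread roughly doubles:

```r
# fully-correlated-groups limit: 400 gauges in lockstep groups of 4 behave
# like 100 independent gauges; the mean proportion stays at 1/6 but the
# spread widens, so 25% becomes less of an outlier
sim_prop <- function(n, years = 60, last = 10) {
  rec <- apply(matrix(rnorm(n * years), nrow = n), 1, which.max)
  mean(rec > years - last)
}

set.seed(7)
indep  <- replicate(2000, sim_prop(400))  # 400 independent gauges
groups <- replicate(2000, sim_prop(100))  # effective N of 100
c(sd(indep), sd(groups))  # the grouped case is roughly twice as spread out
```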

dyqik
Princess POW
Posts: 7575
Joined: Wed Sep 25, 2019 4:19 pm
Location: Masshole

Re: Stats problem - flood events

Post by dyqik » Mon Feb 24, 2020 4:37 pm

So for a correlation length of 4, you get N=100, and this histogram:
Floods_corr_4.png
and for a correlation length of 8, you get N=50, and this:
Floods_corr_8.png

Bird on a Fire
Princess POW
Posts: 10137
Joined: Fri Oct 11, 2019 5:05 pm
Location: Portugal

Re: Stats problem - flood events

Post by Bird on a Fire » Mon Feb 24, 2020 5:23 pm

Very interesting! Thanks dyqik. So I expect the non-independence of gauges (which probably shouldn't just be handwaved away) makes this sort of pattern less unlikely - though at the scale of the UK I don't know if correlation alone would make it actually likely.

Of course, gauges are really more of a proxy for some parameter of wider interest, like 'river levels' or something. I don't know enough about how gauges are located to say how we get from one to the other, but doing it properly would need to account for spatiotemporal autocorrelation.

dyqik
Princess POW
Posts: 7575
Joined: Wed Sep 25, 2019 4:19 pm
Location: Masshole

Re: Stats problem - flood events

Post by dyqik » Mon Feb 24, 2020 6:48 pm

Bird on a Fire wrote:
Mon Feb 24, 2020 5:23 pm
Very interesting! Thanks dyqik. So I expect the non-independence of gauges (which probably shouldn't just be handwaved away) makes this sort of pattern less unlikely - though at the scale of the UK I don't know if correlation alone would make it actually likely.

Of course, gauges are really more of a proxy for some parameter of wider interest, like 'river levels' or something. I don't know enough about how gauges are located to be able to say how we get from one to the other, but doing it properly would be able to account for spatiotemporal autocorrelation.
My first guess would be that something like 25%-50% of the land area of the country sees significant correlation between rainfall event duration and intensity - a guess not entirely based on visual inspection of http://earth.nullschool.net/ ;) Turning that into correlations between river levels, and hence gauges, is left as an exercise for the student.

Going to a correlation length much greater than 8-ish probably puts you into strong correlation territory, where you probably need the actual distributions and correlations.

plodder
Stummy Beige
Posts: 2981
Joined: Mon Nov 11, 2019 1:50 pm

Re: Stats problem - flood events

Post by plodder » Mon Feb 24, 2020 7:06 pm

These river gauges are spread all over the country. They're not linked in any sort of linear fashion.

(thanks for the input btw!!)

KAJ
Fuzzable
Posts: 310
Joined: Thu Nov 14, 2019 5:05 pm
Location: UK

Re: Stats problem - flood events

Post by KAJ » Mon Feb 24, 2020 7:37 pm

plodder wrote:
Mon Feb 24, 2020 7:06 pm
These river gauges are spread all over the country. They're not linked in any sort of linear fashion.
I live by the River Avon and monitor gauges (here) frequently - my house has been flooded. Within 5 miles of me there are three gauges actually on the Avon and four more on its tributaries.

I don't know what you mean by "not linked in any sort of linear fashion" but my experience tells me these gauges are very highly correlated with each other, and quite highly correlated with those on the Severn.

I spent a few decades building stochastic models. I wouldn't pay any regard to models of the kind discussed in this thread which did not explicitly address correlations.

Bird on a Fire
Princess POW
Posts: 10137
Joined: Fri Oct 11, 2019 5:05 pm
Location: Portugal

Re: Stats problem - flood events

Post by Bird on a Fire » Mon Feb 24, 2020 11:45 pm

KAJ wrote:
Mon Feb 24, 2020 7:37 pm
plodder wrote:
Mon Feb 24, 2020 7:06 pm
These river gauges are spread all over the country. They're not linked in any sort of linear fashion.
I live by the River Avon and monitor gauges (here) frequently - my house has been flooded. Within 5 miles of me there are three gauges actually on the Avon and four more on its tributaries.

I don't know what you mean by "not linked in any sort of linear fashion" but my experience tells me these gauges are very highly correlated with each other, and quite highly correlated with those on the Severn.

I spent a few decades building stochastic models. I wouldn't pay any regard to models of the kind discussed in this thread which did not explicitly address correlations.
It's fascinating to hear a river engineer* and a statistics boi* disagree on whether or not this is important, in exactly the opposite direction to the one I would have guessed.

*I don't know your exact job descriptions so I'm not even trying here

plodder
Stummy Beige
Posts: 2981
Joined: Mon Nov 11, 2019 1:50 pm

Re: Stats problem - flood events

Post by plodder » Tue Feb 25, 2020 10:08 am

KAJ really sorry to hear you've been flooded, and I hope there's not too much damage.

The problem posed in the OP is not the same as the problem of modelling a specific river catchment. For the problem in the OP I think it's sensible to treat it like a pure stats exercise. River catchment modelling is far more sophisticated - but there's a reason I asked the question in the OP.

Flood defences are designed to accommodate a design flood, for example a flood with a 1% likelihood of occurring in any given year (often slightly incorrectly referred to as a 1:100 year flood). However, to define this event you need a baseline - i.e. a flood measured against the last hundred years of records*. If the next hundred years is going to be very different from the last hundred, or if the hundred years we're halfway through is going to be different from each of them, then the stats get potentially very tricky indeed, and it becomes much harder to produce a reliable design (which is already pretty difficult).

So knowing how likely the last decade of record flows are compared to the 60 year total record gives a bit of an indication as to whether we're heading into completely unknown territory or not. This also applies to droughts, by the way, which have a big impact on water resource planning. There are huge investment decisions riding on this stuff, with all the associated impacts they represent, so it's important to get them right.


*in cases where there aren't 100 years of records, e.g. everywhere, stats are extrapolated, datasets are combined and models are validated against real-life events etc.


eta - the detailed river catchment models will assume that a 1% flood event is a good design standard to model. I'm asking "is our confidence in what makes a 1% event changing?" - a question that applies to all river models.
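As an aside on the 1:100-year shorthand (standard arithmetic, not specific to any catchment): an event with a 1% annual probability has roughly a 63% chance of occurring at least once in a century, which is why the "1-in-100-years" phrasing misleads.

```r
# chance that an event with annual probability p occurs at least once in n years
p_at_least_once <- function(n, p = 0.01) 1 - (1 - p)^n

p_at_least_once(100)  # about 0.63: a "1-in-100-year" flood is no certainty
p_at_least_once(30)   # about 0.26 over, say, a 30-year design horizon
```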

KAJ
Fuzzable
Posts: 310
Joined: Thu Nov 14, 2019 5:05 pm
Location: UK

Re: Stats problem - flood events

Post by KAJ » Tue Feb 25, 2020 10:58 am

Hi Plodder,

Thanks for the concern, my floods were not in the last handful of years.

I think I understand the motivation for the modelling and don't disagree that it can be approached as a statistical exercise.

However, the OP said "What are the baseline odds of 25% of gauges recording record flows in 10 years of a (say) 60 year series?"
If the gauges are highly correlated (I think they are) those odds are very different (probability much higher) than if they are uncorrelated. I don't think you can validly neglect the correlations. I have no strong prior opinion on how to handle them in these circumstances, I can imagine a number of defensible approaches.

Best...,

KAJ

plodder
Stummy Beige
Posts: 2981
Joined: Mon Nov 11, 2019 1:50 pm

Re: Stats problem - flood events

Post by plodder » Tue Feb 25, 2020 11:13 am

take a look at the tweet - it's 400 gauges nationally, and there's quite a spread of records. So whilst I agree that a proper engineer would do it properly, they'd also look at the data series themselves to find out whether the records were broken by 1mm or 100mm, the duration of the time series, interconnectivity etc.

Perhaps you'll agree that if the gauges have all been connected through the whole time series (assuming they're all the same age etc - again not true), then comparing the last 10 years to the previous 50 will still give an indication of baseline change?

dyqik
Princess POW
Posts: 7575
Joined: Wed Sep 25, 2019 4:19 pm
Location: Masshole

Re: Stats problem - flood events

Post by dyqik » Tue Feb 25, 2020 11:17 am

As soon as you try to assign meaning to the number of gauges that have reached 1% p.a. extreme events in some time frame, you do have to ask about correlations.

However, the uncorrelated independent case is one extreme of the range of assumptions, so you should ask if it's unusual even in that case, especially as it's easy to answer that question.

That test is a one-way kind of question, though. It can tell you that a stat like 1/6th of all river gauges showing records in the final decade of a 60-year record is what you'd expect in a steady state (intuitively that's kind of an obvious-looking answer, but not necessarily right), but it can't tell you that 1/4 (or 1/12) of gauges showing that is a meaningful indicator of change.
