VALID has four easy-to-understand, traffic-light coloured risk ratings, and this is where they sit in the Tolerability of Risk Framework (ToR).
The Tolerability of Risk Framework is an internationally recognised approach to making risk management decisions where the risk is imposed on the public.
The ToR triangle gets fatter and redder where more attention and resources should be allocated to managing the risk. It gets thinner and greener where less attention and resources should be allocated.
Where ToR is amber, the risk is Tolerable if it's 'as low as reasonably practicable' (ALARP) - the point where the cost of further risk reduction would be grossly disproportionate to the value of the risk reduction.
VALID has applied ToR to tree risk but has removed the numberwang because:
1) Tree risk has too much uncertainty to credibly measure to single-figure accuracy with risks like 1/4, 1/300, 1/20,000, or 1/500,000,000.
2) Risk outputs as probabilities create friction in communication because many people struggle with numbers. Research shows that about 25-33% can't rank 1:10, 1:1000, and 1:100 risks from highest to lowest.
3) The risk assessor and duty holder are spared the complexity of numerical cost-benefit analysis in the amber ALARP zone.
Recently, we caught a podcast where a tree was declared 'safe' if it's less than 30% hollow. We think they meant 70% hollow. Either way, this isn't right for several reasons.
We've posted about this before, but as long as this kind of mistake is being broadcast we think it's worth repeating so the message gradually gets home.
The heart of the confusion is the t/R = 0.3 fallacy. t/R = 0.3 is when a residual wall thickness (t) is 30% of the stem radius (R). It's often cited as a failure threshold. It's not. The Why t/R Ratios Aren't Very Helpful pdf explains why in detail.
In short, one reason is a geometric property called section modulus. Wind load and material properties remaining equal, if you double the diameter you increase the load-bearing capacity of a tree by 8 times.
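That eight-times figure follows from the section modulus of a solid circular section, Z = πd³/32, which scales with the cube of the diameter. A quick check (a simplified sketch that ignores everything except the geometry):

```python
import math

def section_modulus_solid(d):
    """Section modulus of a solid circular cross-section: Z = pi * d^3 / 32."""
    return math.pi * d ** 3 / 32

# Doubling the diameter multiplies Z, and with it the load-bearing capacity,
# by 2^3 = 8 (wind load and material properties held equal).
ratio = section_modulus_solid(2.0) / section_modulus_solid(1.0)
print(ratio)  # → 8.0
```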
To add to the confusion, t/R 0.3 is often referred to as 70% hollow. In fact, a 0.3 t/R ratio is only about 50% hollow. The 70% figure is the hollow's share of the radius, which is one dimension; how hollow the tree is depends on the cross-sectional area, which is two dimensions (0.7² ≈ 0.49).
This graph from Paul Muir shows the effect of central hollowing on:
A = Cross-Sectional Area
Z = Section Modulus

At t/R = 0.3:
A = 49% loss of cross-sectional area
Z = 24% reduction in load-bearing capacity
To make matters worse, a tree with a t/R ratio of 0.3 can have a very high likelihood of failure, or it can have a very low likelihood of failure.
If all that wasn't enough, where decay is of concern the cross-sectional area of a tree is seldom a circle.
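The 49% and 24% figures can be reproduced from the geometry of a centrally hollow circular section (a simplified sketch that assumes a circular cross-section with a concentric hollow, which real decay columns rarely are):

```python
R = 1.0                  # stem radius (normalised)
t_over_R = 0.3           # residual wall thickness as a fraction of the radius
r = R * (1 - t_over_R)   # hollow radius: 70% of the stem radius

# Hollow area as a fraction of the solid area: (r/R)^2 - one dimension, squared
area_loss = (r / R) ** 2
# For a concentric hollow, Z_solid = pi*R^3/4 and Z_hollow = pi*(R^4 - r^4)/(4R),
# so the reduction in load-bearing capacity is (r/R)^4
z_loss = (r / R) ** 4

print(round(area_loss * 100))   # → 49 (% loss of cross-sectional area)
print(round(z_loss * 100, 1))   # → 24.0 (% reduction in load-bearing capacity)
```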
"The implications of recent English legal judgments, inquest verdicts, and ash dieback disease for the defensibility of tree risk management regimes"
We've had several requests for a better quality image that's part of a discussion about this article on the UKTC (attachments on this group have to be below 180kb). Click the image to enlarge it.
You can download Jeremy Barrell's tree risk management article here.
Since then, we've had further calls to set out the points in this big canvas with a step-by-step guide to make it easy to follow.
We're genuinely surprised the article has been peer-reviewed, let alone published in a journal. It's not research. Some obvious key points of fact don't make much sense, even within the questionable logic of its own risk ecosystem. We've sketched them out in the above image so you can see the whole picture, and described them below. We're baffled how they weren't picked up during the peer review.
So, we've got a Tree Risk Matrix with High Risk, Medium Risk, and Low Risk outputs, and no Likelihood of Occupancy input:

High × High = High Risk
High × Low = Medium Risk
Low × High = Medium Risk
Low × Low = Low Risk

Then Likelihood of Occupancy is brought in, and:

High × Low × High = Low/Acceptable Risk

Somehow, we've gone from a Tree Risk Matrix world where:

High × Low = Medium Risk

to one where:

High × Low (= Medium Risk) × High = Low/Acceptable Risk
And that's before we consider the really important stuff, like what do High, Medium, and Low actually mean, and how do you go about measuring them?
Unless clearly defined, words like High, Medium, and Low are what Philip Tetlock calls 'vague verbiage'. They're illusions of communication. Or tree risk 'bafflegab', as we call it. Further still, you can't reasonably model tree risk by applying mathematical rules to vague words and then multiplying (or adding) them, or by painting ill-defined words with traffic light colours.
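One way to see the problem with vague verbiage is to attach plausible numeric ranges to the labels (the ranges below are entirely hypothetical; another assessor could reasonably pick very different ones) and watch the categories dissolve:

```python
# Hypothetical probability ranges one assessor might attach to the labels.
label_ranges = {
    "High": (0.1, 1.0),
    "Low": (0.0001, 0.01),
}

# The product 'High x Low' can land anywhere across three orders of magnitude...
lo = label_ranges["High"][0] * label_ranges["Low"][0]
hi = label_ranges["High"][1] * label_ranges["Low"][1]
print(lo, hi)

# ...and its upper end sits inside the 'Low' range itself, so 'High x Low'
# can't reliably be told apart from plain 'Low'.
assert label_ranges["Low"][0] <= hi <= label_ranges["Low"][1]
```

Multiplying (or adding) labels only looks like mathematics; without agreed definitions, the same words produce wildly different risks.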
Exploring the low occupancy = acceptable risk statement further.
As we don't know what a duty holder will think low occupancy means, and the article offers no guidance about what low occupancy means, how do we know the risk is low enough to be acceptable, no matter how high the likelihood of failure or how high the consequences?
That low occupancy has no clear definition or meaning in Jeremy's Tree Risk Management Frameworks should be worrying for a duty holder.
In VALID, low occupancy is clearly defined and there's no ambiguity. We don't burden the duty holder with trying to second guess what we mean by low occupancy. The reason why low occupancy = Acceptable Risk should be worrying for a duty holder following Jeremy's advice is that in VALID we have several scenarios where low occupancy has risks that are Not Acceptable or Not Tolerable.
Infrequent or very low use is a higher level of occupancy than low
To make matters worse, in Jeremy Barrell's 1:10,000 Time Bomb piece he describes this footpath (below) as having infrequent or very low use. He outlines that every year the path is walked by a person with a working knowledge of trees who gives the trees a quick visual check. Because these trees are being checked annually, in Jeremy's tree risk management vocabulary infrequent use or very low use is a HIGHER level of occupancy than low occupancy - remember, trees in low occupancy don't need checking at all.
Clearly, any duty holders following the guidance in Jeremy Barrell's Tree Risk Management Frameworks could quite reasonably classify the infrequent or very low use of this footpath as low occupancy and not check the trees.
This could be a substantial vulnerability for duty holders because in his 1:10,000 time bomb presentation, Jeremy makes a case for a claim being made against them if a small diameter deadwood branch from an Ash tree falls and causes significant head injuries to someone walking along this path. Even though he describes the risk as being at the lower end of his risk spectrum, the duty holder is expected to have removed the deadwood because it wouldn't have cost that much to do it.
These are just some of the more obvious concerns we have with Jeremy's take on tree risk management in his article.
There are some more insights into the critical problems with a binary take on High v Low Occupancy in the Jeremy Barrell | Tree Risk Management - Likelihood of Occupancy post.
Passive Assessment | The invisible gorilla in the room
There's a famous psychological experiment called the invisible gorilla. In it, you're asked to watch a short video of six people passing a basketball. Three of them are wearing white shirts and three of them black shirts. You're asked to count how many passes are made by the white shirts. Most people get the number of passes right. Because they're focused on this, what half the people don't see is a gorilla walk amongst the players, stop, face the camera, thump their chest, and walk off.
To half the people, this very obvious gorilla is invisible.
I recently found one of my invisible gorillas. Whilst putting a flowchart together for VALID's Tree Risk-Benefit Management Strategies, I realised my invisible gorilla was Passive Assessment.
Passive Assessment, and not Active Assessment, is a duty holder's most valuable tree risk-benefit management asset because:
This tree risk assessment review article by Peter Gray, from the Summer 2020 issue of Arboriculture Australia's 'The Bark', might be of interest to you.
1) The 'mathematics professor' and the risk model
The 'mathematics professor' isn't a mathematics professor. His name's Willy Aspinall and he's the Cabot Professor in Natural Hazards & Risk Science at the University of Bristol. He's a 'risk professor' who we worked with when developing VALID's risk model that does the hard work behind the scenes in the App. He's driven the model to breaking point and this is what he has to say about it:
“We have stress tested VALID and didn’t find any gross, critical sensitivities. In short, the mathematical basis of your approach is sufficiently robust and dependable for any practical purpose.”
2) Risk overvaluation? - Death by numberwang
"The risk of harm* for incidents involving motor vehicles (not motor cycles) appears to be high. There is little evidence of people being killed from cars running into fallen trees but this still apparently has a significant input to the calculated risk of harm."
This gets a bit detailed. In short, the risk isn't too high in VALID's risk model when it comes to vehicles.
There are a couple of points here. First, VALID's risk model doesn't try to measure a death. Death is too narrow and precise a consequence for a risk that has this much uncertainty. Here's the long answer, and we shared a version of this with Peter after reading his thoughtful article.
The essence of the conundrum with traffic is that a death seldom occurs unless a tree or a large branch hits the cab space of a moving vehicle. So, how do you go about modelling the consequences when they're usually a vehicle driving into a tree during its stopping distance, rather than a tree part hitting the relatively small cab space?
The answer's quite complicated because it's a combination of risk modelling from published traffic accident data, the Abbreviated Injury Scale, Occam's Razor (have the least assumptions), running what's called sensitivity analysis, and ease of use.
Perhaps most importantly, 'a difference is only a difference if it makes a difference'. We don't have the data to confirm this because it doesn't yet exist, but we suspect VALID's risk model is over-valuing the consequences in some parts of traffic because of the safety measures that cars have to protect the occupants during accidents; the model's erring on the side of caution for consequences. However, does this matter? Does it make a difference to the actual risk output?
To test this, in the model, we can drop the consequences one or two orders of magnitude in each scenario, with sensitivity analysis, and see how much of a difference that makes to the risk. Doing this, it's clear red risks aren't turning green. That means the duty holder is still going to do something to reduce the risk even if the consequences are overvalued.
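A toy version of that sensitivity check might look like this (every value and the threshold are hypothetical, purely to show the method, not VALID's actual model):

```python
# Toy sensitivity analysis: does dropping the consequence value by one or two
# orders of magnitude change the colour of the risk output?

def risk_colour(likelihood, consequence, red_threshold=1e-6):
    """Two-colour toy output: 'red' means the duty holder should act."""
    return "red" if likelihood * consequence >= red_threshold else "green"

likelihood = 0.01    # hypothetical likelihood of failure and impact
consequence = 0.5    # hypothetical consequence value

for factor in (1, 10, 100):
    colour = risk_colour(likelihood, consequence / factor)
    print(f"consequences / {factor}: {colour}")

# In this toy case the output stays red even with the consequences two orders
# of magnitude lower, so the overvaluation doesn't change the decision.
```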
Cranking up the numberwang to try and model trees and traffic more accurately is not only fraught with mathematical problems and increased uncertainty, but it's not going to make a difference to the decision-making of the assessor or duty holder. It would also add another layer of complexity to the risk model when it comes to decision-making where you combine traffic and people in high-use occupancy. So why try to do it?
An analogy we've used is that we're looking at a big risk picture, and in one part of the canvas that's dealing with traffic and consequences it's a bit blurry. We could get down on our hands and knees with a magnifying glass and spend ages trying to make that consequences part a bit sharper. But when we've finished and taken a step back to admire our work, the risk picture isn't noticeably different.
We've grown up being told when we assess tree risk we should look out for tree 'defects'. The problem with this approach is what are commonly labelled as defects often aren't defects at all. Hollows, cavities, decay colonies, and deadwood, are natural features of older trees that are usually valuable habitat benefits. It’s seldom these natural features are risks that are not Acceptable or Tolerable. So, why are we labelling them defects before we carry out a risk assessment?
Those of you who know the origin story of VALID might remember the D-word dilemma. Vitality, Anatomy, Load, Identity are all neutral. On the other hand, because Defect means something that's a shortcoming, an imperfection, or a flaw it's not neutral. Defect is pejorative.
Defect is also a begging-the-question problem in decision-making because, usually, you can only work out whether a feature is a defect after you've evaluated it and the risk, not before.
Last year, the word ‘defect’ was removed from all of VALID’s Tree Risk-Benefit Management Strategies. Obvious Tree Defects was replaced by Obvious Tree Risk Features. Now, Defect is finally going to be removed as the D-word in VALID.
Occasionally, you'll come across an Arborist who claims to have anecdotal evidence about tree risk which they insist is the truth of the matter.
We've not given anecdotal evidence much weight. Not since we met a bloke down the pub who told us it's not worth the paper it's not written on.
That said, having just read this article and its compelling evidence, perhaps we should update our priors.
Can we rely on an expert witness telling us what the courts are expecting when it comes to tree risk?
In short, the answer is no because they’re an expert to the court, and not an expert for the court.
A competent arboricultural expert witness knows their limitations. Namely, their role is limited to being an expert to the court. They’re stepping way outside their field of expertise if they claim a Judge’s wisdom about how the law will evaluate tree risk-related evidence in the next case.
Claims by Arborists that they’re experts for the court should ring alarm bells. In a similar way to a Judge who, with no arboricultural training or qualifications, claims they could carry out an advanced tree risk assessment with a Static Load Test on a tree that has extensive root decay because they’ve seen it done.
In the UK, we’ve had several tree risk-related Judgments where the Judge has spotted an expert straying out of their lane and dipping into their legal dressing-up box. Most recently, in Colar v Highways England, the Judge spends a remarkable amount of the Judgment tearing strips off the defendant's expert. This commentary, by Gordon Exall in 'Civil Litigation Brief', covers the issues in detail.
Perhaps, what’s of much greater concern is when a Judge is not aware that the evidence an expert gives them is critically short on expertise. Highly questionable expert evidence appears to have been pivotal in two landmark Judgments in the UK, Poll v Bartholomew (2006) and Cavanagh v Witley Parish Council (2017).
This article explores the gulf between reasonable, proportionate, and reasonably practicable tree risk assessment and management, and expert evidence in these cases.
Recently, we had a couple of enquiries asking for a copy of this article. It reviews qualitative and quantitative approaches to tree risk assessment and looks at how we could do better.
It's over two years old now and was written at the time VALID was entering the home stretch. Though VALID has evolved further, much of the article is still relevant today.
This makes for an interesting tree risk assessment case study.
An ISA TRAQ, QTRA, and VALID tree risk assessment were carried out on the same Pine trees in Western Springs, Auckland | NZ.
It involves around 200 Pinus radiata. From a risk of branch or tree failure perspective, the trees of particular interest are those that could fall onto a footpath or property.
The reports are linked.
ISA TRAQ | August 2019
QTRA | December 2019
Random tree part or tree onto footpath
1/400,000 (Size Range 4)
1/500,000 (Size Range 3)
1/1,000,000 (Size Range 2)
<1/1,000,000 (Size Range 1)
1 Not Acceptable
50 Not Tolerable
You can keep up with all latest Tree Risk News by subscribing.
You'll get an occasional short dispatch from the front line, so you can stay on top of what's important.
You'll also be first to know when new events are coming up.
© VALID is a not-for-profit organisation