Sunday, March 6, 2011

Air Quality in the Barnett Shale - Part 24: How confident in that 10.8 ppbv are you?

It has taken me a bit of time to get through the ERG "Interim Ambient Air Monitoring" Report as well as to re-read the Ft. Worth League of Neighborhoods (FWLN) "Recommendations for policy Changes for Gas Drilling Near Schools" report.

There is a lot of data to take in, along with the methodology and conclusions to examine.  I also try not to take things at face value, and I check the sources of the information (which is why I cite and link my sources).

So here is the issue: Holy lots-o-dots, Batman!  There's a bat-boat load of oil & gas related activity in the Ft. Worth area! (1)


So it is only natural that the public in and around these sites would question if their health and safety is being impacted.  And, when it comes to their children, as in "ensuring the safety of the 80,000 children who attend FWISD schools (2)" the need to be assured becomes elevated.  So what to do...what to do.

I am critical of bad science because it can cause people to do things that are more detrimental to their health than the thing they were trying to avoid.  The issue in all of this should not be carbon disulfide.  Carbon disulfide is a non-issue, yet it is dominating the discussion and taking away from the real dialog, which should be: What can we do to make oil & gas activity in and around homes and schools safer and reduce its impact on the environment?

The discussion should be: if you want to extract here, then you need to adopt the "Environmentally Friendly Drilling Systems Program" I spoke about in a previous post.  Instead we have a "team of scientists and experts – Dr. Ramon Alvarez, Dr. Melanie Sattler, Dr. David Sterling, and Carl Weimer – who donated their expertise and time to the League to produce this report (2)" describing a situation of concern that does not exist.

Whether a setback is a prudent idea is not what I am being critical of.  The FWLN report bases this "one mile setback" on an air dispersion model's finding that carbon disulfide could be found at concentrations over three times the OSHA permissible exposure limit (PEL) one mile from an O&G facility. (241 mg/m3 = 78 ppm)
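As a sanity check on those units: converting mg/m3 to ppm for a gas uses the molar volume at 25 °C and 1 atm (24.45 L/mol) and the compound's molecular weight (76.14 g/mol for carbon disulfide). A quick sketch in Python:

```python
# Convert a gas-phase concentration from mg/m3 to ppm by volume,
# assuming 25 degrees C and 1 atm (molar volume 24.45 L/mol).
MW_CS2 = 76.14  # g/mol, molecular weight of carbon disulfide

def mg_m3_to_ppm(conc_mg_m3, mol_weight):
    return conc_mg_m3 * 24.45 / mol_weight

# The report's figure: 241 mg/m3 of carbon disulfide
print(round(mg_m3_to_ppm(241, MW_CS2), 1))  # -> 77.4, i.e. roughly 78 ppm
```

So the report's stated equivalence of 241 mg/m3 and 78 ppm checks out; the problem is not the arithmetic but the data behind it.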


Now, reading this and knowing that a "team of scientists and experts" - two of them Texas university professors - signed off on the findings, one might reasonably conclude: Holy exposure, Batman!  The children!

But a closer look at how that number was derived would lead you to a different conclusion: Holy poor science, Batman!  The data for carbon disulfide is invalid!

I have tried to present as objective and scientifically based an argument as I can as to why this is so, in over 24 posts on this topic.  If I have not made my case, then either you don't believe me, don't want to believe me, don't want to consider it, or have some other reason to ignore my conclusion as to why the methodology and findings in the Town of DISH, Texas and the FWLN reports are wrong.

These blog posts are open to anyone who wants to show where my premise, conclusion, or understanding of the science is incorrect, faulty, or wrong.  If I can dish it out (no pun intended), then I had sure better be able to take it.

So it comes down to this.  The FWLN report includes laboratory analysis for carbon disulfide for one single sample:


With the carbon disulfide concentration as:


So the air dispersion model generating Plot 1:


...was based on the analytical report shown above:


And if for no other reason than that, the results of the model and the conclusion for Plot 1 are invalid.  One sample cannot be used to test the hypothesis that there is a relationship between the ambient air concentration at one location (the SUMMA canister at 300 McNaughton Ln.) and the expected ambient air concentration at another point (one mile out in Plot 1).

One sample is not representative of the air at that location.  Here is what the EPA has to say about samples:
Representative: “a sample of a universe or whole (e.g., waste pile, lagoon, ground water) which can be expected to exhibit the average properties of the universe or whole."
Inferences about the population are made from samples selected from the population. For example, the sample mean (or average) is a consistent estimator of the population mean. In general, estimates made from samples tend to more closely approximate the true population parameter as the number of samples increases. The precision of these inferences depends on the theoretical sampling distribution of the statistic that would occur if the sampling process were repeated over and over using the same sampling design and number of samples.
This then leads to:
[a]fter a sample of a certain size, shape, and orientation is obtained in the field (as the primary sample), it is handled, transported, and prepared for analysis. At each stage, changes can occur in the sample (such as the gain or loss of constituents, changes in the particle size distribution, etc.). These changes accumulate as errors throughout the sampling process such that measurements made on relatively small analytical samples (often less than 1 gram) may no longer “represent” the population of interest.  Because sampling and analysis results may be relied upon to make decisions about a waste or media, it is important to understand the sources of the errors introduced at each stage of sampling and to take steps to minimize or control those errors. In doing so, samples will be sufficiently “representative” of the population from which they are obtained.
When scientists make statements regarding their observations, the concepts of precision and bias come into play:
  • Precision is a measurement of the closeness of agreement between repeated measurements. 
  • Bias is the systematic or consistent over- or underestimation of the true value.
Precision is the ability to get the same - or a very close - result each and every time you collect or analyze a sample.  Bias, on the other hand, results from a number of problems inherent in sampling and analysis. 
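To make the distinction concrete, here is a small simulated example (all of the numbers are hypothetical): a method with a systematic low bias and some random scatter. Averaging replicates beats down the scatter - the precision error - but leaves the bias untouched; only calibration against a known standard would reveal it.

```python
import random
random.seed(1)

true_value = 10.8   # ppbv, hypothetical true ambient concentration
bias = -1.5         # systematic underestimation, e.g. loss of volatiles
noise_sd = 0.4      # random scatter (the precision of the method)

# Thirty replicate measurements from the same biased, imprecise method
readings = [true_value + bias + random.gauss(0, noise_sd) for _ in range(30)]
mean_reading = sum(readings) / len(readings)

# The average converges near true_value + bias, not near true_value:
print(round(mean_reading, 2))   # close to 9.3, not 10.8
```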
Sampling Bias: 
  • Bias can be introduced in the field and the laboratory through the improper selection and use of devices for sampling and subsampling. Bias related to sampling tools can be minimized by ensuring all of the material of interest for the study is accessible by the sampling tool.
  • Bias can be introduced through improper design of the sampling plan. Improper sampling design can cause parts of the population of interest to be over- or under-sampled, thereby causing the estimated values to be systematically shifted away from the true values. Bias related to sampling design can be minimized by ensuring the sampling protocol is impartial so there is an equal chance for each part of the waste to be included in the sample over both the spatial and temporal boundaries defined for the study. 
  • Bias can be introduced in sampling due to the loss or addition of contaminants during sampling and sample handling. This bias can be controlled using sampling devices made of materials that do not sorb or leach constituents of concern, and by use of careful decontamination and sample handling procedures. For example, agitation or homogenization of samples can cause a loss of volatile constituents, thereby indicating a concentration of volatiles lower than the true value. Proper decontamination of sampling equipment between sample locations or the use of disposable devices, and the use of appropriate sample containers and preservation methods, can minimize this bias.
Analytical Bias:
  • Analytical (or measurement) bias is a systematic error caused by instrument contamination, calibration drift, or by numerous other causes, such as extraction inefficiency by the solvent, matrix effect, and losses during shipping and handling.
Statistical Bias:
  • When the assumptions made about the sampling distribution are not consistent with the underlying population distribution, or
  • When the statistical estimator itself is biased.

Because bias is always in play, the number of samples collected (replicates) and the dates and areas sampled (representativeness) are increased to minimize the impact of these issues.
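The effect of increasing the number of samples can be sketched with a quick simulation (the concentrations here are made up for illustration): as n grows, the sample mean settles toward the true value, which is exactly the EPA's point about representativeness.

```python
import random
random.seed(42)

TRUE_MEAN = 10.8  # ppbv, hypothetical true ambient concentration
SCATTER = 2.0     # ppbv, standard deviation of individual measurements

def sample_mean(n):
    """Average of n simulated measurements scattered around TRUE_MEAN."""
    return sum(random.gauss(TRUE_MEAN, SCATTER) for _ in range(n)) / n

# A single sample can land anywhere in the scatter;
# larger n pulls the average toward 10.8.
for n in (1, 5, 25, 500):
    print(n, round(sample_mean(n), 2))
```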

So how confident in the number 10.8 ppbv is the report's "team of scientists and experts"? How sure are they that 10.8 ppbv represents the true value of the air at that sample point?  I mean, look at all the potential errors that could have impacted it.  Shouldn't at least a duplicate sample have been collected and analyzed?
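That concern can be shown in one line: with a single result there is no way to even estimate the measurement's variability. Python's statistics module makes the point (the second value in the duplicate pair below is invented for illustration):

```python
import statistics

single = [10.8]           # the lone carbon disulfide result, ppbv
duplicate = [10.8, 12.1]  # hypothetical duplicate pair; second value invented

# With n = 1, a sample standard deviation is mathematically undefined:
try:
    statistics.stdev(single)
except statistics.StatisticsError:
    print("n = 1: variability cannot be estimated at all")

# With even one duplicate, the spread becomes estimable:
print(round(statistics.stdev(duplicate), 2))  # -> 0.92 ppbv
```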

It was that one single value that the model used to determine a concentration of 78 ppm in the plume:
Plume extends 1 mile from the source in this graphic. Full extent of plume was in excess of 2 miles. Plot 1 multiples were up to 1000 times the short term health benchmark for carbon disulfide.
How confident would (could) any reputable scientist be if one - and only one - sample was used in their published research?

For that reason alone, Plot 1 is not valid.

Now, couple that with this:


Notice that "N" in the very last column?  That's a note from the laboratory:


What the lab is saying is that all the statistical checks they do - for precision & accuracy - were not performed on this sample.  So how confident are they in the number 10.8 ppbv?

So there you have it.  Plot 1 was developed using one sample and analyzed on an instrument that was not calibrated for carbon disulfide.

Holy bias Batman!  Plot 1 values are not statistically valid!


Yes, Robin, that's what I have been trying to say all along.  The whole enchilada is an example of bad methodology, sampling, and analysis - and the results and conclusions are invalid.


Next post: Air Quality in the Barnett Shale - Part 25: What a statistically valid report looks like.


