December 1, 2009

# Climate Conspiracy Appendix A

**Appendix A:** A scientific discussion of the DCPS paper.

[Appendix B here]

As brief background, we state here what DCPS addressed and how our analysis was designed to answer the question posed. The basic question we addressed dealt with the tropical *relationship* between the surface and the upper air temperature trends. We posed this question: IF the observations and the models had the same surface temperature trend, do observations and models have the same upper air temperature trends? In other words, do models and observations show the same surface-to-upper-air *relationship*? The answer, as we demonstrated, was no, and significantly so.

The conditional "if" is critical here. Without a common surface trend between models and observations, it would be inappropriate to compare their upper air trends, because it is the

*relationship* between surface and upper air trends with which we were concerned. In other words, since we were comparing the absolute magnitudes of trends in the upper air (between models and observations), the absolute magnitude of the surface trends needed to be the same before anything could be said about the relationship of the two. As it turns out, there is a fairly fixed relationship between the surface and upper air temperatures, so the "if" is critical.

Very few model surface trends were near the observational value, but fortunately, the

*average* of the 22 models did indeed produce a surface trend very near that which was observed. Given this result, we calculated the mean of the 22 models' temperature trends at all levels for the comparison study. In essence, the "average" was a way to normalize the models to be comparable with observations for the question we addressed. It also had the advantage of being analogous to the IPCC "best estimate" methodology. We showed that when the different observational datasets were compared with the models' average upper air trends, there were significant differences.

One of the key points of contention with S08 is the magnitude of the error bars on the model output (Santer et al., or S08, Fig. 6). DCPS utilized the standard error of the mean (i.e., the standard deviation of the model trends divided by the square root of the number of models in the sample minus 1). In other words, once the surface trend is set in a model, there is very high confidence (small error bars) in what the upper air trends are. Thus, a model with a surface trend of, say, +0.13 °C/decade will produce upper air trends for each layer above within a very tight range, because of the consistency of the surface vs. upper air relationship.
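As an illustration of the error-bar arithmetic described above, the following sketch computes the standard error of the mean exactly as the text defines it (standard deviation divided by the square root of n − 1). The trend values are made up for the example; they are not the actual DCPS model trends.

```python
import math

# Hypothetical per-model trend values (°C/decade), for illustration
# only -- these are NOT the actual DCPS model trends.
model_trends = [0.08, 0.11, 0.13, 0.16, 0.19, 0.22]

n = len(model_trends)
mean_trend = sum(model_trends) / n

# Sample standard deviation of the model trends.
variance = sum((t - mean_trend) ** 2 for t in model_trends) / (n - 1)
std_dev = math.sqrt(variance)

# Standard error of the mean as described in the text:
# standard deviation divided by the square root of (n - 1).
sem = std_dev / math.sqrt(n - 1)

print(f"mean = {mean_trend:.3f}, sd = {std_dev:.3f}, SE = {sem:.3f}")
```

The point of the calculation is that the standard error shrinks as the number of models grows, which is why an uncertainty estimate built this way is much narrower than the raw spread of the individual model trends.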

Given the conditional requirement and our method of error calculation, even S08 confirmed our result (so our arithmetic was fine). However, they strongly objected to the narrowness of our error bars. Their view was that models should be allowed a very wide range of possible trends (roughly the range from the coolest model to the warmest), no matter what their associated surface trends might be.
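The disagreement over error bars can be made concrete with a toy calculation contrasting the two uncertainty measures; the trend values below are hypothetical, not taken from either paper.

```python
import math

# Hypothetical upper air trends (°C/decade) for an ensemble of models;
# illustrative values only, not from DCPS or S08.
trends = [0.05, 0.10, 0.14, 0.18, 0.24, 0.31]

n = len(trends)
mean = sum(trends) / n
sd = math.sqrt(sum((t - mean) ** 2 for t in trends) / (n - 1))

# DCPS-style uncertainty: standard error of the mean -> narrow bars.
sem = sd / math.sqrt(n - 1)

# S08-style uncertainty: roughly the coolest-to-warmest model spread
# -> much wider bars.
full_range = max(trends) - min(trends)

print(f"standard error of the mean: ±{sem:.3f}")
print(f"coolest-to-warmest spread:  {full_range:.3f}")
```

Even with these made-up numbers, the spread-based interval is several times wider than the standard-error interval, which is the crux of the dispute: a wide enough interval will rarely exclude any observational value.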

Another way to think of this is that S08 compared observational upper air trends against the upper air trends of the entire spread of model results, which themselves were associated with a wide range of surface trends. The models' surface trends ranged from +0.03 to +0.31 °C/decade, while the three observational surface datasets all showed values very close to +0.125 °C/decade. Why would we want to compare the upper air trends from models associated with surface trends as low as +0.03 or as high as +0.31 °C/decade with observations associated with a surface trend of +0.125 °C/decade? This would be comparing apples to oranges. This is why we claim S08 set out a false premise and should not have been published without at least the opportunity for a direct response to show their fundamental misunderstanding of DCPS. While the models had a range of trends that might be interpreted as a range associated with natural variability, the key here is that the

*relationship* between surface and upper air in the models has very little variability over multi-decadal time periods. To DCPS, using such a wide range of model trends invalidated the intent of the basic question we posed, as it ignored the fundamental condition "IF the models and observations had the same surface trends ..."
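The conditional "IF the models and observations had the same surface trends" can be sketched as a simple screening step before any upper air comparison; the trend pairs and the matching tolerance below are illustrative assumptions, with only the +0.125 °C/decade observed surface value taken from the text.

```python
# Hypothetical (surface trend, upper air trend) pairs in °C/decade for
# a model ensemble; all values are illustrative assumptions.
models = [(0.03, 0.05), (0.10, 0.14), (0.13, 0.18), (0.20, 0.28), (0.31, 0.43)]

OBS_SURFACE = 0.125  # observed surface trend quoted in the text
TOLERANCE = 0.03     # arbitrary matching window, chosen for the example

# Keep only models whose surface trend is close to the observed value
# before looking at their upper air trends -- apples to apples.
matched = [upper for surface, upper in models
           if abs(surface - OBS_SURFACE) <= TOLERANCE]

print(matched)
```

Only the models whose surface trends sit near the observed value survive the screen, so the subsequent upper air comparison is conditioned on a common surface trend rather than on the ensemble's full spread.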
