
Mike Haseler: The importance of supra-national institutions for nurturing paradigm shifts in scientific development

Wednesday, October 24, 2012 13:00

(Before It's News)

My thanks to Mike Haseler, head of the Scottish Climate & Energy Forum and co-participant at the Royal Society workshop on Handling Problems With Uncertainty in Weather and Climate Prediction, which we attended earlier this month. He has written this paper in response to the event and has kindly given permission for its publication here.

Climate changes: The importance of supra-national institutions in nurturing the paradigm shifts of scientific development.
BY MIKE HASELER BSC MBA

 Scottish Climate & Energy Forum, 7 Poplar Drive, Lenzie, UK

Those present at the Royal Society meeting in October 2012 were left in little doubt about the importance of climate and weather prediction and its power to save lives. Whilst numerical modelling provides this invaluable information on daily to seasonal/regional forecasts, the meeting revealed a new paradigm emerging regarding longer-term forecasts. This paper shows that the learning curve suggests current methods could take 24,000 years to reach the maturity needed to serve as a basis for public policy. We examine whether problems in communicating probabilistic forecasts may indicate the lack of a “mental model” or shared understanding in numerical modelling, and whether more scientific structure might improve both the communication and the utility of weather and climate projections. Although climate is uncertain and numerical predictions immature, there is high confidence that climate will continue to vary, that this will have profound impacts, and that, for example, doubling CO2 is likely to add to the natural variation. So the message to policy makers, as the Kyoto Commitment comes to an end, should be that whatever the cause of climate changes, we should continue to fund life-saving climate research.

Introduction

NULLIUS IN VERBA (take nobody’s word for it) is the motto of arguably the world’s most prestigious scientific institution: the Royal Society. The motto, being in Latin, symbolises not only continuity with the past but also change. Scientific authority comes not from words but from the evidence, and in the light of new evidence scientific thinking must change. The Society has been father to many scientific changes, from the microscope to Newton, and has looked on with fatherly oversight as many generations of those involved in the day-to-day work of scientific development have metaphorically learned to crawl, then walk, then run.

So, it should be no surprise that the Royal Society has been instrumental in hosting this key inter-disciplinary outreach meeting, which marks the recognition of a new paradigm. Whilst those present were left in little doubt about the importance of climate and weather prediction and its power to save lives, from flood forecasts in Bangladesh (Webster 2012) to seasonal forecasts (Cornforth 2012) and malaria forecasts in Africa (Morse 2012) to daily traffic conditions in the UK (Mylne 2012), perhaps more was learnt from what was not said. No one strongly objected when the American Professor Judith Curry (2012) articulated what appears to be the new consensus amongst climate experts: that whilst short-term weather/climate models are providing important life-saving information, longer-term climate models still leave a lot to be desired and are probably not presently fit for purpose as a tool for detailed policy making. Instead a new paradigm is emerging: whilst the future climate is uncertain, we are certain that climate changes. So whilst we can rely less on one projection of long-term climate change, we must instead prepare for a range of possible scenarios.

Fig 1: Variability of observed global mean temperature as a function of time-scale (°C² yr⁻¹), from figure 9.7 of IPCC (2007).

Butterfly Effect

There can be few climatic concepts so pertinent to the policy arena as the well-known “butterfly effect”. The phrase refers to the idea that a butterfly’s wings might create tiny changes in the atmosphere that ultimately alter the path of a tornado, or delay, accelerate or even prevent the occurrence of a tornado in another location.

The effect was described by Lorenz (1963) when, like modern climate and weather modellers, he was running a numerical computer model and entered the decimal .506 instead of the full .506127. The result was a completely different weather scenario.

Because the butterfly effect is so prevalent in climate, there can be dramatically different forecasts for relatively small changes in initial assumptions. So, to test the range of possible outcomes and the likelihood of each, the modern practice is to rerun the models using an ensemble of different assumptions. These no longer give “a forecast”; instead there is a range of forecasts and scenarios. This technique has proven invaluable in improving short-range forecasts, which can be run repeatedly day after day and are now felt to encompass most of the short-term atmospheric variability. The result is that daily forecasts are now so good that they pass the “Palmer test”, a test suggested by Prof Palmer in reference to the Turing test (1950, see Appendix B), whereby the description given by a weather forecast becomes so complete and full of features that the observer cannot discern it from a similar-scale plot of the actual weather that occurs.
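To make the point concrete, here is a minimal Python sketch (not any operational forecasting code) of the sensitivity Lorenz described, together with a toy ensemble. The equations and constants are the standard Lorenz 1963 system; the step size, perturbation size and number of members are illustrative assumptions.

import numpy as np

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the standard Lorenz (1963) system."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def run(x0, steps=6000):
    """Integrate forward from an initial state that differs only in x0."""
    state = np.array([x0, 1.0, 1.05])
    for _ in range(steps):
        state = lorenz_step(state)
    return state

print(run(0.506127))   # "full precision" starting value
print(run(0.506))      # rounded starting value -> a very different end state

# A toy ensemble: many runs from slightly perturbed starts, with the spread of
# the end states used as a crude measure of forecast uncertainty.
members = np.array([run(0.506127 + 1e-4 * np.random.randn()) for _ in range(20)])
print(members.std(axis=0))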

However, as Professor Tim Palmer FRS (2012), organiser of the Royal Society meeting, highlighted, Lorenz was also interested in the way different events occur over different time-scales:

“It is found that each scale of motion possesses an intrinsic finite range of predictability, provided that the total energy of the system does not fall off too rapidly with decreasing wave length. With the chosen values of the constants, “cumulus-scale” motions can be predicted about one hour, “synoptic-scale” motions a few days, and the largest scales a few weeks in advance.” Lorenz (1969)

These different scales of forecast, from the day-to-day forecasts of the Meteorological Office that the public know so well, to the week-to-week and relatively recent month-to-month forecasts needed by industry and government, all share common physical laws, but those laws result in vastly different scales of uncertainty and therefore of predictability.
Scales of Variability

As fig 1 shows, global mean temperature variation in the instrument record increases rapidly as longer periods are considered.

Fig 2: Schematic learning curve showing how the error decreases with more experience as the trial size increases. P∞ = 0.3

This shows that global climate has vastly different scales of activity. For any change seen over a period of one decade, much greater changes are expected over longer periods of centuries, but much smaller changes are expected on the year-to-year scale. This strongly suggests that whilst the same physical laws apply, as Lorenz implies, their effect is very different when considering different time-scales.

Whilst the causation of this change in scale is not discernible from fig 1, it does suggest an upper limit to natural variation, one which increases strongly as we approach the century-to-century scale of forecast and has a strongly decreasing effect on shorter-range forecasts. Lorenz makes clear that whilst the same laws apply, “each scale of motion” possesses an intrinsic finite range of “predictability”, and the knowledge and skill drawn from day-to-day or even month-to-month forecasting are likely to reflect entirely different physical manifestations of those laws than those present at the decade-to-decade and century-to-century scales. So climate models must be based on data of the appropriate scale: month-to-month models on month-to-month changes, year-to-year models on year-to-year changes, decade-to-decade models on decade-to-decade changes and century-to-century models on century-to-century changes.

The Learning Curve
Several authors (Guyon 1997, Cortes et al 1993) have proposed and justified theoretical and experimental learning curves of the form:

E(l) ≈ P∞ + λ / l^h

where l is the number of training examples, P∞ is the asymptotic error, and λ and h can be determined experimentally by curve fitting. This gives rise to a curve similar to that shown in fig 2, with an initial phase of high errors and rapid improvement when few trials have taken place, followed by periods of reducing errors where progress is less easy. Often there is a limit to improvement due to some constraint, which may be fixed, but which may be bypassed by a change of methodology or by technological changes (such as the development of computers) which allow fundamental shifts in the learning process, resulting in a new learning curve with a new, lower long-term constraint.
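As a concrete illustration, the short Python sketch below fits a curve of this assumed power-law form to some made-up error measurements; the data, the function name and the particular form E(l) = P∞ + λ/l^h are illustrative assumptions rather than anything taken from Guyon or Cortes et al.

import numpy as np
from scipy.optimize import curve_fit

def learning_curve(l, p_inf, lam, h):
    """Assumed power-law learning curve: error remaining after l trials."""
    return p_inf + lam / l**h

# Made-up error rates after increasing numbers of forecast trials.
trials = np.array([10, 30, 100, 300, 1000], dtype=float)
errors = np.array([0.62, 0.48, 0.40, 0.35, 0.33])

(p_inf, lam, h), _ = curve_fit(learning_curve, trials, errors, p0=[0.3, 1.0, 0.5])
print(f"P_inf ~ {p_inf:.2f}, lambda ~ {lam:.2f}, h ~ {h:.2f}")
print("predicted error after 10,000 trials:", learning_curve(1e4, p_inf, lam, h))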

The present state of forecasting
Whilst there was no actual statement of the current state, table 1 broadly encompasses the various statements made by the speakers. Based on table 1 and what we know of the learning curve (that it takes a similar number of trials to raise the performance of a similar forecast to the same level), it is possible to make a prediction regarding the likely time to reach a particular skill level. Numerical forecasting has been in use since the 1970s (Lynch 2008); however, even if we use the much shorter time since ensemble forecasting began to be used around 1990 (Molteni et al 1996), we still find that the time to reach a particular skill level, as shown on the left-hand axis of fig 3, is exceedingly long.

Fig 3: Lines relating the time horizon of the forecast (bottom axis) with the total time to reach various levels of skill (vertical).

Based on this graph, it will take till 2235AD for the yearly forecast to be as good as the current monthly forecast. It will take 2400 years for the decadal forecast and 24,000 years for the century forecast to be as good as today’s monthly forecast. One is tempted to suggest it would be easier to invent time travel than provide an accurate forecast for the next century – but even time travel would have its own learning curve!
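The arithmetic behind these dates can be sketched in a few lines of Python. The assumption that skill depends mainly on the number of completed forecast trials, and the choice of 1990 as a starting point, are simplifications of the argument above, so the resulting dates only roughly match the figures quoted in the text.

HORIZON_YEARS = {"yearly": 1, "decadal": 10, "century": 100}

def year_to_match_monthly_skill(horizon, start=1990, now=2012):
    """Year by which a forecast of the given horizon has accumulated as many
    completed trials as the monthly forecast accumulated between start and now."""
    monthly_trials = (now - start) * 12                      # one monthly trial per month
    return start + monthly_trials * HORIZON_YEARS[horizon]   # one trial per horizon period

for horizon in ("yearly", "decadal", "century"):
    print(horizon, year_to_match_monthly_skill(horizon))
# yearly ~2254AD, decadal ~4630AD, century ~28390AD: the same order of magnitude
# as the ~2235AD, 2,400-year and 24,000-year figures quoted above.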

It is important to emphasise that the main reason for making this assertion is precisely the same reason some suggest that long-term forecasts can be accurate: that they use the same basic methodology and general laws of physics but simply apply them at a different scale. In reality, our understanding of the nature of the learning curve strongly suggests that, far from long-term climate models being reliable, if we approach them in the same way as short-term forecasts we can be almost certain that we will not be able to use them with any confidence within our lifetimes. Indeed, it is only if we fundamentally change our approach that we have any reason to suppose we can do better than the limit implied by the learning curve.

Table 1: The present state of forecasting

The prediction is that it is probably probability

One key message that came out of the meeting at the Royal Society was a concern about “communication”. There were several examples of researchers having problems communicating the benefit of probability forecasts to “middle managers”. Many others spoke about communicating probabilistic weather forecasts to the public and, though not addressed in the meeting, there have been wider calls to improve the communication of climate research to the public (Bett 2012, Watts 2011, DECC 2010).

Several presenters suggested that the problem lay with the public or “middle managers”, who they felt did not understand probability. But as Liz Stephens (2012) ably demonstrated in her presentation on the BBC “weather game”, the public are very capable of using and understanding probabilities. However, whilst the public understand probability, some research suggests that one problem may be that probability itself can be an ill-defined concept:

we randomly surveyed pedestrians in five metropolises located in countries that have had different degrees of exposure to probabilistic forecasts––Amsterdam, Athens, Berlin, Milan, and New York. They were asked what a “30% chance of rain tomorrow” means both in a multiple-choice and a free-response format. Only in New York did a majority of them supply the standard meteorological interpretation, namely, that when the weather conditions are like today, in 3 out of 10 cases there will be (at least a trace of) rain the next day. In each of the European cities, this alternative was judged as the least appropriate. The preferred interpretation in Europe was that it will rain tomorrow “30% of the time,” followed by “in 30% of the area.”  (Gigerenzer et al 2005)

Similar problems in communicating probability are found in other areas such as medicine. More information is not always thought helpful, yet medical research shows that where patients have more knowledge they are more satisfied with decisions (Holmes-Rovner et al 1996, Whelan et al 2003). But presenting more information does not necessarily result in the patients having more knowledge (Beardsley et al 2007), and likewise, presenting probabilistic information (as above) where its meaning is not understood does not necessarily result in more knowledge of the actual probability. Those wishing to present probabilistic forecasts should also be aware of the finding of Politi et al (2011) that communicating uncertainty can lead to less decision satisfaction amongst patients. Perhaps it is the nature of probability forecasts that is problematic?

NASA research shows that communication and coordination are critical factors in aircraft safety and highlights the need for crew members to share a common “mental model”:

NASA researchers analyzed the causes of jet transport accidents and incidents between 1968 and 1976 (Cooper, White, & Lauber, 1980; Murphy, 1980 as cited in Cooper et al.) and concluded that pilot error was more likely to reflect failures in team communication and coordination than deficiencies in technical proficiency. In fact, human factors issues related to interpersonal communication have been implicated in approximately 70% to 80% of all accidents over the past 20 years. Correspondingly, over 70% of the first 28,000 reports made to NASA’s Aviation Safety Reporting System (which allows pilots to confidentially report aviation incidents) were found to be related to communication problems (Connell, 1995). Communication is critical in order for cockpit crewmembers to share a “mental model,” or common understanding of the nature of events relevant to the safety and efficiency of the flight. (Sexton & Helmreich 2003)

These findings support the theory that high crew performance results when captains use language to build shared mental models for problem situations. Orasanu  (1991) & Gaba et al (1995) also highlight the need for shared mental models in the operating theatre (but the situation was complicated as flight-crew have similar training whereas operating theatres have three distinct groups: surgeons, anaesthetists and nurses). Stanton et al (2001) refer more generally to “situational awareness” than mental models, but again relate this to improved performance in safety-critical areas. (see also Cook et al 2007)

Others (Eom et al 2006, Swan 2001) have found that module structure is important to satisfaction with online courses. So structure, whether it is described as a shared mental model or as situational awareness, is key not only to understanding but also to satisfaction. When weather forecasters present information without the public sharing this mental model, is this why forecasting can appear like a black art?

Fig 4: Old style forecast showing isobars and fronts
presented by Michael Fish

Fig 5: Modern forecast map. The diagonal line of cloud at bottom left is an unmarked front.

Although weather information is indispensable for a number of economic players in the public, there is a widespread misconception (often involuntarily spread by mass media) that meteorologists are like magicians making prophecies. It is not by chance that many TV shows and national or international papers present weather forecasts next to horoscopes (Raimondi 2009)

So, communication may be the issue, but perhaps the root cause is that the use of ensemble weather forecasts, whilst vastly improving the predictions, results in outcomes which are inherently probabilistic, lack deterministic structure and can appear “irrational”. In other words, the lack of deterministic structure in the model makes it difficult to present the material in the simple conceptual framework that allows a shared mental model, which appears necessary for ideas to be easily and efficiently transferred. One example of such a conceptual framework is the weather system represented by isobars and fronts (fig 4). It does not indicate the intensity of rainfall nor the temperature, but the model enables some simple “rule of thumb” projections by the user, indicating the likely progress of the weather over time and the changes in wind, temperature and likely occurrence of rain.

With ensemble forecasting, the detail of features has greatly improved (fig 5), but at the expense of the “mental model”. And because numerical models focus on detail without an overall mental model of behaviour, it is not easy for the average user to understand the “flow” of the weather without considerable effort “interpolating” all the mass of detail between the views. So, the viewer is reliant on the broadcaster running forward each and every projection of interest, temperature, wind, fog, rain, something which is not possible in a limited time. The result can be dissatisfaction with what is a highly accurate forecast which fails to be useful.

From the personal experience of the author, it has often proven impossible to assess the timing of widely spread events in various geographic locations, as is needed when planning a long journey. The only way to simplify the map in order to make the forecast meaningful was to create a mental model of fronts, but for a while (at least on the BBC) it was left up to the viewer to post-construct and interpret the structure of fronts from the weather map. (However, please note: it is also known that some people prefer information to be presented as a narrative, others symbolically; some want simple information, others like to understand the complexities.)

Towards a New Paradigm

Man that is in honour, and understandeth not, is like the beasts that perish. (Psalm 49:20)

Just as weather changes, so climate has always changed, and indeed so have science and scientific thinking. None of these is set in stone. This fact was often forgotten as the focus of climate modelling concentrated on producing “THE consensus” prediction of THE GLOBAL future. Even simple analysis shows such predictions were immature and not appropriate as a basis for public policy. Even if the scale of human effects is significant, it is not out of proportion to recorded historical changes through which the overall world population has continued to grow.

Fig 6: Various ensemble forecasts showing the huge range of predicted Thames Basin flow compared to the single projection used by UK government (New et al 2007).

In contrast, the climate record is full of regional catastrophes. For example, in the 1690s up to a quarter of the population of Scotland died from cold (Cullen 2010). This occurred during the Maunder Minimum, within the longer period known as the Little Ice Age. Whether or not the Maunder Minimum or the Little Ice Age directly contributed, they form the overall climatic framework within which a series of exceptional events in the 1690s proved catastrophic for the people of Scotland, and arguably contributed significantly to the political changes that led to the end of Scottish independence. Recent climate research seems to have forgotten that cold, as well as heat, is a real threat. Such regional and exceptional climatic events continue to be a real threat today, particularly in the developing world. Even in the UK there is evidence that weather/climate is a killer, with around 25,000 extra deaths in the winter months (Age Concern 2010a) and 8,000 more for each 1°C colder the average winter temperature (Age Concern 2010b). This shows how climate is still a killer even in the most developed nations. The scale of these deaths is not dissimilar to that reported by Dr Andy Morse (2012) for exceptional heat events in India, and indeed even in India there are also many deaths from cold, for which comparable research has not been completed. So, whether from cold or heat, weather and climate are intrinsically important to human health. As such, climate impacts research deserves funding irrespective of whether the cause is known to be natural or man-made. Arguably the only group that has significantly benefited from “man-made” climate change has been the commercial interests in renewables, whose lobbying has diverted huge public subsidies to themselves. This has enriched a few in the developed nations at the expense of both the developed nations’ poor (who pay disproportionately for energy) and the developing world, which has suffered as the focus has been diverted from life-saving work tackling healthcare problems arising from climate catastrophe, whatever the cause.
The present “regime” is focussed on a single “consensus” global figure but, as fig 6 shows, such thinking, whilst giving the “most likely” or perhaps “most anticipated” outcome, can wholly understate the vast range of possible scenarios. In the case of the projections for Thames river flow shown in fig 6, there is a real possibility of increases as well as decreases much larger than the “consensus” figure would suggest. The result is that impact assessments tend to look at only one scenario: one side of the coin. As such they ignore the full range of scenarios, which show that both increasing and decreasing values are possible for almost all climate variables over the time-scale of centuries. Indeed, given information such as that shown in fig 6, all that can be said with much certainty is that the “consensus” scenario is very unlikely to be the one that actually occurs.
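The point can be illustrated with a few lines of Python using entirely synthetic numbers (nothing here is taken from New et al 2007): a central estimate on its own hides the fact that the plausible range spans both large increases and large decreases.

import numpy as np

rng = np.random.default_rng(0)
# Pretend ensemble of percentage changes in river flow across model/parameter choices.
ensemble = rng.normal(loc=-5.0, scale=20.0, size=500)

consensus = np.median(ensemble)                  # the single "consensus" figure
low, high = np.percentile(ensemble, [5, 95])     # the range an impact study should also see
print(f"central estimate: {consensus:+.0f}%")
print(f"5-95% range:      {low:+.0f}% to {high:+.0f}%")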

Back to Science

Numerical modelling is used in economics, politics, marketing etc. So whilst it is a useful tool for scientists, it is not science (Chiara 1996 p.217). Numerical modelling is not a replacement for the hundreds of years of learning embodied in institutions like the Royal Society.

Fig 7: Share price of four of the main wind turbine manufacturing companies. The vertical scale of the share prices was adjusted by eye to provide a comparison of performance 2010-2013. The vertical scale has no offset.

There is a strong scientific basis to suggest that CO2 is a greenhouse warming gas and that doubling the level of CO2 in the atmosphere will lead to greenhouse warming of around 1°C (Curry 2010, Rahmstorf 2008), with others suggesting a range from 0.62°C (Harde 2011) to 1.2°C (Bony et al 2006). But even if the exact figure is still uncertain, we can be confident in this warming because it is based on verified empirical measurements resting on hundreds of years of scientific knowledge. Why then is there almost nothing in the IPCC report on the single most important figure in modern climatic research: the level of temperature rise expected from the greenhouse effect of CO2? Just as probabilistic weather forecasting has moved away from the “mental models” of frontal systems that used to be so key in communicating weather and weather uncertainty, so climate predictions are now largely projections of past trends without the detailed understanding or verification that is the necessary bedrock of science.
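For readers who want to see where the roughly 1°C figure comes from, here is a rough worked example in Python using standard textbook numbers (the logarithmic forcing fit and the Planck response); none of it is taken from the paper itself.

import math

delta_F = 5.35 * math.log(2)       # radiative forcing for doubled CO2, ~3.7 W/m^2
sigma = 5.67e-8                    # Stefan-Boltzmann constant, W m^-2 K^-4
T_e = 255.0                        # Earth's effective emission temperature, K

planck_response = 4 * sigma * T_e**3     # ~3.8 W/m^2 of extra emission per K of warming
print(round(delta_F / planck_response, 2), "degC of no-feedback warming")   # ~1.0 degC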

In contrast to this robust science for CO2 greenhouse warming, the climate models also include massive “feedbacks”, which add up to 500% to the CO2 effect in order to make the models fit past data. Their use is far from explicit and very opaque to the policy makers who use these models. The state of knowledge of these feedbacks is very immature and certainly not scientifically validated (Collins et al 2006). Indeed there is strong evidence that the feedbacks are far smaller than those used in the climate models (Spencer & Braswell 2011, Lindzen & Choi 2011, Allan 2011, Asten 2012).
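The standard arithmetic by which such feedbacks amplify the no-feedback warming can be sketched as follows; this is my framing of the usual gain formula, not a description of how any particular climate model works.

dT0 = 1.0                      # no-feedback warming for doubled CO2, degC (as above)
for f in (0.0, 0.5, 0.8):      # illustrative net feedback fractions
    print(f"feedback fraction {f:.1f} -> total warming {dT0 / (1 - f):.1f} degC")
# Modest feedback fractions already multiply the 1 degC several-fold, which is
# why the assumed feedbacks dominate the headline projections.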

Given the known learning curve for numerical modelling and the lack of clarity in the model structure, climate models incorporating large-scale hidden feedbacks appear to be unfit for use as a basis for policy decision-making. However, that is not true of the CO2 greenhouse warming: there is agreement on the 1°C rise even amongst climate “sceptics” (Haseler 2012b). But that is not the same as saying “climate change is limited to 1°C”. We know from thousands of years of proxy climate records that the climate is inherently changeable (Stine 1996) and that indeed the only real certainty is that climate will change whatever we do.

So, if we ignore the contentious political argument over causality, we find agreement amongst knowledgeable commentators that there is a real possibility of significant climate change over the next century. There is no learning curve attached to such an assertion. Or, more accurately, unlike numerically based predictions, which require hundreds of iterations and so will take thousands of years to become useful, we are now so far down the learning curve of human science, thanks to institutions like the Royal Society, that we can be very confident in the accuracy of this prediction.

Agreement does seem to be coalescing around the idea that whilst the certainty of man-made effects of CO2 may have been overstated, we should continue to research the potential range of climate scenarios and understand the potential risks, particularly where those risks are having a direct impact today or are reasonably short-term enough to give confidence in detailed predictions.

Political Environment

The Royal Society has always had an important role nurturing science (Royal Society 2011) and public debate on science. Indeed, it has also had a direct role funding research and, for example, was instrumental in the development of routine air temperature and pressure measurements under its secretary James Jurin (Matthew & Harrison 2004). But much of its work has been through subtle coordination of action and advising government, through its role communicating with the public and the scientific community about science. That is to be encouraged. But whilst the recent meetings are key to engendering the development of climate science, they do so only within the wider economic and political environment.

There has been no global political consensus on CO2 reduction since Copenhagen in 2009 (Haseler 2012a), with the result that there was no agreement on the necessary amendment to continue the Kyoto commitment by the 3rd October deadline. As fig 7 seems to show, the stock markets appear to have been anticipating the end of the Kyoto commitment on 31st December 2012.

Arguably, the end of Kyoto could be taken as a signal by some politicians that there should be an end to climate research. However, as the meeting ably demonstrated, much climate and climate-impacts research not only has nothing to do with long-term climatic trends but is already having measurable benefits. Politicians, the public and perhaps most importantly the media need to be educated about the difference between the “one golf club” scenario represented by “global warming” (a long-term, unverifiable single scenario with very contentious assertions) and the entirely reasonable, shorter-term focus provided by a range of regional scenarios. These shorter-term scenarios would not only include the effects of greenhouse gases, but might also include the climatic effects of changing land use as well as the real and potential threat of a new Maunder-type minimum, as suggested by the recent drop in solar activity.

CONCLUSION

In an uncertain world of climate predictions, one prediction is certain: the Kyoto commitment will end on the 31st December 2012 and with it the icon of global political consensus to act on climate change. This is a serious threat to future funding of everyone doing climate research. It is notable that not one speaker mentioned this “elephant”, particularly when some speakers still framed their climate research as being part of this global political consensus to act on climate change.

Poor communication seems to be a recurrent theme for probabilistic forecasters, whether involved in day-to-day forecasts at the Met Office and BBC or in longer-term climate forecasts. A strong candidate for the cause of this difficulty is that numerical projections fail to provide the kind of mental model that easily allows their results to be shared with others. However, this also suggests there may be a potentially dangerous lack of understanding of the “mental model” even within the scientific community. Institutions like the Royal Society are needed to encourage those involved to improve the scientific basis of their models and to encourage more intellectual validation of the science by vigorous critique, rather than relying on numerical validation which, as this paper shows, would take thousands of years to validate the longer-term climate predictions.

As an outsider, more familiar with the public debate on climate, it was surprising to find that so little of the research presented at the Royal Society meeting was contingent on there being “man-made” climate change. Except for some pseudo-commercial work related to renewables, it would all deserve funding even if mankind were not responsible for recent climate change. Indeed, one message coming out of the meeting was the lack of funding in key areas. This “valley of death” in funding means that research with a high degree of promise to save many lives in developing nations is woefully underfunded and under-resourced, and only seems to continue through the goodwill of the individual researchers involved. The public, politicians and particularly the media need to be made aware of this life-saving research, and support for it must continue irrespective of the political consensus on CO2. The focus on man-made warming has not helped those in developing nations, where the problem is any kind of climate change or extreme weather event.

So, hidden from policy makers, at the heart of CO2 warming projections lies a high degree of consensus over the direct warming effect of CO2, which almost all knowledgeable commentators agree would cause a 1°C warming on top of natural variation. However, the credibility of these projections is undermined by the immaturity of numerical projections, with the result that policy makers may come to see all projections as lacking credibility. So, policy makers need to be made aware that:

  • there is a high degree of certainty that climate varies and the scale of this variation is known,
  • there is not one scenario of climate change but given the immature state we must consider many,
  • we know doubling CO2 will result in about 1°C of warming in addition to natural variation,
  • there is growing confidence in monthly and seasonal forecasts at a regional level which have a proven potential to save lives.

So, we should not only continue our efforts to forecast climate/weather at the present frontier of monthly and seasonal forecasts, which now show so much promise, but arguably they need more funding to increase their effectiveness as a tool for decision makers. The uncertainty of the future does not mean that climatic crisis in the future is uncertain. So whilst we can no longer rely on one projection of long-term climate change, and must instead expect a period of many potential scenarios, we must still prepare to take action.

Whether man-made or natural, climate change continues to be a serious risk, and research on potential impacts, whether man-made or natural in their origin, deserves funding. This is the message policy makers need to hear.

Appendix A

Skill level of forecast:

Forecast   High        Medium      Low
Daily      Current     -           -
Weekly     2132AD      Current     -
Monthly    2600AD      2078AD      Current
Yearly     9300AD      3000AD      2235AD
Decadal    75,000AD    12,000AD    4400AD
Century    730,000AD   110,000AD   26,000AD

Table 2: Date by which each forecast (rows) might reach the stated skill level (columns). All figures except the “Current” entries are approximate.

Appendix B. Notes

Alan Turing, formerly resident at Chicheley Hall where the meeting was held, opened his 1950 paper “Computing Machinery and Intelligence” with the words: “I propose to consider the question, ‘Can machines think?’”. He proposed a test in which, instead of a man and a woman playing a party game where they respond to written messages, a computer takes on one of the roles.

Acknowledgements

Mike Haseler acknowledges financial support from members of the Scottish Climate & Energy Forum and particularly his family. Thanks must also go to the Royal Society for providing the background information on which this paper is based and to Mrs Elizabeth Berry who provided accommodation to the author at short notice so he could attend the Royal Society meeting.

References

Age Concern, 2010a. Excess winter deaths. Available at: http://www.ageuk.org.uk/pagefiles/2013/excess_winter_deaths_report_oct10.pdf [Accessed October 10, 2012].

Age Concern, 2010b. Tragedy of 23,100 extra winter deaths of older people makes new emergency plan imperative, says Age UK. AgeUK. Available at: http://www.ageuk.org.uk/latest-press/archive/tragedy-of-excess-winter-deaths-makes-emergency-plan-imperative/ [Accessed October 10, 2012].

Allan, R.P., 2011. Combining satellite data and models to estimate cloud radiative effect at the surface and in the atmosphere. Meteorological Applications, 18(3), pp.324–333.

IPCC, 2007. IPCC Fourth Assessment Report: Climate Change 2007 (AR4). Available at: http://www.ipcc.ch/publications_and_data/ar4/syr/en/contents.html [Accessed October 9, 2012].

Asten, M.W., 2012. Estimate of climate sensitivity from carbonate microfossils dated near the Eocene-Oligocene global cooling. Climate of the Past Discussions, 8(5), pp.4923–4939.

Beardsley, E., Jefford, Michael & Mileshkin, Linda, 2007. Longer Consent Forms for Clinical Trials Compromise Patient Understanding: So Why Are They Lengthening? Journal of Clinical Oncology, 25(9), pp.e13–e14.

Bett, R., 2012. Discussion – “Climate communication” – what do you think? Bishop Hill. Available at: http://bishophill.squarespace.com/discussion/post/1856783 [Accessed October 9, 2012].

Bony, S. et al., 2006. How Well Do We Understand and Evaluate Climate Change Feedback Processes? Journal of Climate, 19(15), pp.3445–3482.

Chiara, M.L.D. et al., 1996. Structures and Norms in Science: Volume Two of the Tenth International Congress of Logic, Methodology and Philosophy of Science, Florence, August 1995, Springer.

Collins, W.D. et al., 2006. Radiative forcing by well-mixed greenhouse gases: Estimates from climate models in the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4). Journal of Geophysical Research, 111(D14), p.D14317.

Cook, M.J., Noyes, J.M. & Masakowski, Y., 2007. Decision Making in Complex Environments, Ashgate Publishing, Ltd.

Cornforth, R., 2012. Weathering the drought: Building resilience in the face of uncertainty. Available at: http://royalsociety.org/events/2012/uncertainty-weather-climate/.

Cortes, C. et al., 1993. Learning Curves: Asymptotic Values and Rate of Convergence. In NIPS’93. pp. 327–334.

Cullen, D.K., 2010. Famine in Scotland – the “ill Years” of the 1690s, Edinburgh University Press.

Curry, J., 2012. Climate models: fit for what purpose? Available at: http://royalsociety.org/events/2012/uncertainty-weather-climate/.

Curry, J., 2010. CO2 no-feedback sensitivity. Climate Etc. Available at: http://judithcurry.com/2010/12/11/co2-no-feedback-sensitivity/ [Accessed October 10, 2012].

DECC, 2010. COMMUNICATING CLIMATE SCIENCE: WHERE DO WE GO FROM HERE? Available at: https://s3.amazonaws.com/s3.documentcloud.org/documents/425698/microsoft-word-01dec639-communicating-climate.pdf [Accessed October 10, 2012].

Eom, S.B., Wen, H.J. & Ashill, N., 2006. The Determinants of Students’ Perceived Learning Outcomes and Satisfaction in University Online Education: An Empirical Investigation*. Decision Sciences Journal of Innovative Education, 4(2), pp.215–235.

Gaba, D.M., Howard, S.K. & Small, S.D., 1995. Situation awareness in anesthesiology. Human factors, 37(1), pp.20–31.

Gigerenzer, G. et al., 2005. A 30% chance of rain tomorrow: how does the public understand probabilistic weather forecasts? Risk analysis. Available at: http://library.mpib-berlin.mpg.de/ft/gg/GG_30_Chance_2005.pdf [Accessed October 8, 2012].

Guyon, I., 1997. A Scaling Law for the Validation-Set Training-Set Size Ratio, Berkely: AT & T Bell Laboratories. Available at: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.33.1337&rep=rep1&type=pdf [Accessed October 7, 2012].

Haseler, M., 2012a. End of Kyoto – A Perfect Storm for Scotland. Kyoto, Sunspots & Gales – The end of a dream for independent Scottish power? Available at: http://scef.org.uk/attachments/article/106/End%20of%20Kyoto%20-%20A%20Perfect%20Storm%20for%20Scotland.pdf [Accessed October 11, 2012].

Haseler, M., 2012b. The Sceptic View (Rev. 0.5). ScottishSceptic. Available at: http://scottishsceptic.wordpress.com/2012/05/06/the-sceptic-view-rev-0-5/ [Accessed October 11, 2012].

Hastie, T., Tibshirani, R. & Friedman, J., 2009. The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Second Edition 2nd ed. 2009. Corr. 3rd printing 5th Printing., Springer. Available at: http://www.stanford.edu/~hastie/local.ftp/Springer/ESLII_print5.pdf.

Harde, H., 2011. How much CO2 really contributes to global warming? Spectroscopic studies and modelling of the influence of H2O, CO2 and CH4 on our climate. In EGU General Assembly 2011. Vienna. Available at: http://meetingorganizer.copernicus.org/EGU2011/EGU2011-4505-1.pdf [Accessed October 10, 2012].

Holmes-Rovner, M. et al., 1996. Patient Satisfaction with Health Care Decisions: The Satisfaction with Decision Scale. Medical Decision Making, 16(1), pp.58–64.

Jefford, M. et al., 2010. Satisfaction with the decision to participate in cancer clinical trials is high, but understanding is a problem. Supportive Care in Cancer, 19(3), pp.371–379.

Kearns, M., 1996. A Bound on the Error of Cross Validation Using the Approximation and Estimation Rates, with Consequences for the Training-Test Split. In Neural Computation. Morgan Kaufmann, pp. 183–189. Available at: http://www.cis.upenn.edu/~mkearns/papers/cv.pdf.

Lindzen, R. & Choi, Y.-S., 2011. On the observational determination of climate sensitivity and its implications. Asia-Pacific Journal of Atmospheric Sciences, 47(4), pp.377–390.

Lorenz, E.N., 1963. Deterministic Nonperiodic Flow. Journal of the Atmospheric Sciences, 20(2), pp.130–141.

Lorenz, E.N., 1969. The predictability of a flow which possesses many scales of motion. Tellus, 21(3), pp.289–307.

Lynch, P., 2008. The origins of computer weather prediction and climate modeling. J. Comput. Phys., 227(7), pp.3431–3444.

Matthew, H.C.G. & Harrison, B. eds., 2004. James Jurin. In Oxford Dictionary of National Biography: In Association with the British Academy: In Association with the British Academy. From the Earliest Times to the Year 2000. OUP Oxford. Available at: http://dx.doi.org/10.1093/ref:odnb/1517.

Molteni, F. et al., 1996. The ECMWF Ensemble Prediction System: Methodology and validation. Quarterly Journal of the Royal Meteorological Society, 122(529), pp.73–119.

Morse, A., 2012. Climate forecasting and health. Available at: http://royalsociety.org/events/2012/uncertainty-weather-climate/.

Mylne, K., 2012. Ensemble prediction of weather and its impacts. Available at: http://royalsociety.org/events/2012/uncertainty-weather-climate/.

New, M. et al., 2007. Challenges in using probabilistic climate change information for impact assessments: an example from the water sector. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 365(1857), pp.2117–2131.

Orasanu, J., 1991. Individual differences in airline captains’ personalities, communication strategies, and crew performance. Available at: http://ntrs.nasa.gov/search.jsp?R=19930043178 [Accessed October 9, 2012].

Palmer, T., 2012. Uncertainty in weather and climate prediction: Some introductory remarks. Available at: http://royalsociety.org/events/2012/uncertainty-weather-climate/.

Politi, M.C. et al., 2011. Communicating uncertainty can lead to less decision satisfaction: A necessary cost of involving patients in shared decision making? Health expectations : an international journal of public participation in health care and health policy, 14(1), pp.84–91.

Rahmstorf, S., 2008. Anthropogenic Climate Change: Revisiting the Facts. In E. Zedillo, ed. Global Warming: Looking Beyond Kyoto. Brookings Institution, U.S.

Raimondi, A., 2009. The communicative process of weather forecasts issued in the probabilistic form. journal of Science Communication, 08(01). Available at: http://jcom.sissa.it/archive/08/01/Jcom0801%282009%29A03/Jcom0801%282009%29A03.pdf [Accessed October 8, 2012].

Royal Society, 2011. Royal Society and Wellcome Trust partnership to nurture future world leaders in biomedicine | Royal Society. Available at: http://royalsociety.org/news/Royal-Society-Wellcome-Trust-partnership/ [Accessed October 11, 2012].

Sexton, B.J. & Helmreich, R.L., 2003. Using language in the cockpit: Relationships with workload and performance. In Communication in High Risk Environments. Berlin: Humboldt Universität zu Berlin, pp. 57–73. Available at: http://homepage.psy.utexas.edu/homepage/group/helmreichlab/publications/369.doc [Accessed October 9, 2012].

Spencer, R.W. & Braswell, W.D., 2011. On the Misdiagnosis of Surface Temperature Feedbacks from Variations in Earth’s Radiant Energy Balance. Remote Sensing, 3(8), pp.1603–1613.

Stanton, N.A., Chambers, P.R.G. & Piggott, J., 2001. Situational Awareness and Safety. Safety Science, 39, pp.189–204.

Stephens, L., 2012. An “80% chance of confusion”, or can the public make use of probabilistic weather forecasts. Available at: http://royalsociety.org/events/2012/uncertainty-weather-climate/.

Stine, S., 1996. Climate, 1650-1850. Aspen Bibliography, 2, pp.25–30.

Swan, K., 2001. Virtual interactivity: design factors affecting student satisfaction and perceived learning in asynchronous online courses. Distance Education, 22(2), pp.306–331.

Turing, A.M., 1950. Computing Machinery and Intelligence. Mind, LIX(236), pp.433–460.

Watts, A., 2011. Communicating uncertain climate risks. Watts Up With That? Available at: http://wattsupwiththat.com/2011/03/29/communicating-uncertain-climate-risks/ [Accessed October 10, 2012].

Webster, P., 2012. Sustainability through hazard anticipation and mitigation. Available at: http://royalsociety.org/events/2012/uncertainty-weather-climate/.

Whelan, T. et al., 2003. Helping Patients Make Informed Choices: A Randomized Trial of a Decision Aid for Adjuvant Chemotherapy in Lymph Node-Negative Breast Cancer. Journal of the National Cancer Institute, 95(8), pp.581–587.



