Daring to imagine



In this article, I attempt to explain how my early life as an undergraduate trainee and then research scientist at Cambridge University led me to ask questions about my own trade, science, and about the dominant understandings of ‘science’ in the modern world. I began this in an entirely uneducated way. It was only later, after retraining in the sociological analysis of scientific knowledge in public arenas, that I began haltingly to understand why persisting with those questions led to my being seen as a ‘critic of science’. My critical focus was to exorcise science’s false pretensions to authority in society.1

‘Critic’ does not mean opponent, except of specifics imposed in the name of (an acquiescent) ‘science’. In particular, I want to indicate here why scientific, political and policy actors have mutually defaulted on their responsibilities to distinguish, as a public matter, the several different social roles and identities of science in modern conditions of globalized finance-knowledge capitalism. These include its role in helping to conduct a particular and problematic global politics by serving to conceal it.


One result of this is a lapse into a misconceived imposition of scientifically derived definitions of public meanings. Consider, for example, the presumption that the public meanings of the many technological innovations which inundate and shape society are only about ‘risk’ and ‘security’. Moreover, this is ‘risk’ as artificially defined by institutional science. Such risk, as it is given public meaning, is not found in nature. This leads me, via a more rigorous and scientifically defensible treatment of risk as an intellectual-analytical public problem, to question the potent imagery, or symbolic cultural capital, which is used so indiscriminately by politicians (and scientist-politicians).

I am referring to the imputed essential openness and cosmopolitanism of science. This presumptive cosmopolitanism is a major element of science’s claims to provide a truly global cognitive-therapeutic idiom for a troubled globalizing world. I wish to defend and champion science as a proper spirit of free, but disciplined and reflective, curiosity. However, I challenge the incoherent and false pretensions of the dominant idea of science used in contemporary global and national policy, and argue against the notion that ‘science’ can be looked to for any kind of ethical or moral authority or direction.

In important current public science domains we can see a systematic lack of readiness on the part of what its own practitioners call ‘Science’ to recognize and respect legitimate difference and otherness. I use the central, definitive field of risk to demonstrate this problem. These problems occur because of scientific knowledge’s endemic limitations and contingencies – ignorance as its own ‘epistemic other’ – and also because of its (mis)understanding of the basis of dissent by human others, like publics. This double denial is a negation of a modern cosmopolitan ethic, and is no basis for a democratic global knowledge society. Science as we know it is now confronted by a major challenge: to learn how to inhabit and live with human – and epistemic – difference.


It is richly ironic that just as the final year approached of the European Union’s self-imposed millennial decade surge to become ‘the world’s most competitive knowledge economy’, unregulated and rampant developed-world financiers and speculators sealed this utopian competitive dream’s eminently predictable failure – and also handed it a potential alibi. Knowledge and finance have always been intimately related, since science has always needed, and courted, powerful patrons and financiers.

In recent decades they have combined in the joint inflation of several investment-promise bubbles, where the bursting of one seems to be followed only by the search for the next – never by a search for a more rational and sustainable collective modus vivendi. While the more celebrated aspiration – even claim – of science has been, in Derek Price’s immortal words, to speak ‘truth to power’, there has always been a more recognizable routine role of speaking ‘truth for power’. Indeed, so much has this dependency of knowledge upon finance and power intensified and embedded itself since the mid-20th century that the commercialization of the culture of science barely merits political or scientific attention in the early 21st century.


Along with this global crisis and the urgency to invest competitively in scientific R&D for wealth-generating ends has come the age-old science policy question. Posed originally in the late 1960s by the godfather of the US nuclear programme, Alvin Weinberg, head of the Oak Ridge National Laboratory, it asks where to invest, across the multitudinous babel of scientific special pleadings and promises, especially when few outcomes, positive or negative, are predictable. Weinberg thus posed the question of how we might decide the directions which scientific R&D should take. Although he did not put it this way, Weinberg’s opening also implied a further question: what kinds of collective human imagination might shape those knowledge commitments?

At just this same period, I was training to be a physicist/materials scientist. Having gained a first class Cambridge University degree and then returned to do a materials science PhD, by early 1971 I was writing up. My supervisor asked me whether I wanted to continue with a postdoctoral research fellowship. I hadn’t given my future too much thought, but I had enjoyed research and had done some unusual work: using transmission electron microscope beam energies to anneal electrochemically deposited alloys, and examining the re-crystallization processes with the same electron microscope, along with X-ray diffraction pattern analysis. Add the flexi-time that research then allowed me – to continue playing rugby and cricket as well as spending long weekends climbing in Scotland – and it was an easy choice. So my supervisor invited me to put down a few ideas on paper, to talk them through a few days later.


This was the time of the first Middle East oil crisis, with the oil price inflating rapidly. For me, an obvious focus was to research the design of energy-efficient ‘smart’ materials and fuel-cells, and related questions. It was a novel idea at that time, and I elaborated on this theme for my supervisor. I sat down with him, full of hope and expectation. By the end of our meeting, I was stunned and demoralized. None of my naive but potentially worthwhile thoughts had registered any interest with someone who was a world-leading electrochemist of materials.

That evening, I rang a friend who was a PhD student in the Cavendish lab (Physics), then just across the yard, and went for a post-mortem drink. Pete Chapman, who later became an energy efficiency specialist, was a much more politically aware scientist than a northern country boy like me. After my opening confused lament, he asked: ‘Brian, where do you think most of the money comes from to pay for your department’s research?’ I blurted out a reflex reaction: ‘government, surely’ (meaning the then Science Research Council). Pete responded with a quizzical, ‘Yes, but isn’t the Ministry of Defence also "government"?’

The innocent country boy scientist was being educated about the politics of research funding. Had I wanted to do research on the next generation of materials with useful military properties, there would have been money flowing like water. For the research ideas I was naively touting, it seemed there was nothing. The major funding flows which allowed our materials science research to thrive and lead the world, were mainly from what might (still) be called the military-industrial complex, to borrow US President Eisenhower’s 1950s term. My thoughts were just alien to this deeply entrenched, dominant imaginary, with all of its ‘natural’, routine material effects on funding, research, innovation – and the shape and direction of society’s technological and scientific resources, human and non-human.


I now realize, with nearly forty years of hindsight, how familiar that feeling of being an alien would become, repeated remorselessly in different contexts down the ensuing decades. At that moment, however, this shock to my naive world launched me on a radically different and unexpected life trajectory. Having more or less taken for granted that I would do a postdoctoral research fellowship, and then presumably follow a lifelong scientific research and teaching career, I had accidentally discovered what many, including Chapman and Weinberg, already knew: that there was a mushrooming politics of scientific R&D which remained unannounced, mystified and uncharted.

I then ostensibly left this original agenda aside during the 1970s and 1980s for what looked like the more directly public political and scientific field of technological risks and the public controversies swirling around them. Thus, after retraining in the sociology, history, and philosophy of scientific knowledge, and doing research for an M.Phil in sociology of science (which Roy Porter, my external examiner, said could have been another PhD), I took up the sociology of scientific knowledge, specifically in the growing number and range of public places where scientific knowledge often found itself under controversy. This dealt typically with issues of environmental, health and social risk. By social risk, I mean that risk is always and essentially a relational issue: technological innovations which give rise to risk disrupt, threaten and change the framework of possible social relationships in some way or another.


A key point here was that the technical definition of risk omitted and concealed the further relational point. In situations imbued with risks, citizens know they are inextricably dependent upon expert institutions for their adequate management. This ‘trust’ in such institutional expert agents is not a choice, but more a necessary virtual state. We are more-or-less forced to act as if we trust ‘them’.

Below, I elaborate on how this relational dimension of trust multiplies in light of the endemic inadequacies of even the best scientific risk knowledge to reliably predict consequences. The biggest risks from nuclear power, for example, have always been its ‘soft impacts’: the kind of social relations – the society – which the technology implies, and perhaps to some significant degree requires. This includes, but is not exhausted by, the sheer arrogance of nuclear risk analysis experts about their own scientific rectitude and superiority.

Yes, there are risks of multiple kinds from ionizing radiation which need assessing, but in my view these are not as important as those which arise from the global terror ethos of the whole socio-technical human enterprise of nuclear technology. As the historian of science and technology Spencer Weart documents, terror was not an external interruption into nuclear energy technology. Nuclear energy technology was a secondary afterthought of nuclear weapons, and terror was at the heart of this technology from its very origins. Terror was the technology’s object. It followed the creed of MAD – Mutual Assured Destruction.


Thus, early in my own personal career shift from research science to the sociology of scientific knowledge in public, I found myself on an apparently different research and action trajectory from that addressing the politics of societal choices of scientific R&D. This trajectory was more directly about scientific knowledge in public domains. I attempted to understand public controversies over technological and environmental risk, and the complex intersections between scientific knowledge and public concerns, knowledges and attitudes. Only in recent years have I begun to understand how what was an apparent detour into risk was actually no detour at all.

Paradoxically, this realization was fuelled, at least partly, by applying the fairly unexceptionable general standards of scientific rigour in which I had been trained to the intellectual and political field I had before me, both as an object of study and as a field of personal and intellectual engagement. I outline this logic next, and then return to the wider canvas. Here I show the shocking incapacity of ‘science’ to live up to its own self-proclaimed and supposedly uniquely civilized and civilizing virtues – of cosmopolitanism, self-awareness or self-reflexivity, and enlightenment openness to the unknown and the different.


These are, after all, the putative virtues which have, in the face of copious and often brutal contradiction, kept alive the powerful modern vision of science’s sole and superhuman capacity to hold together and save a splintering world, whose biogeochemical, let alone its social and cultural fabric, is showing signs of chaotic self-destruction. What ingredients for a possible future globally democratic and civilized knowledge society can we rescue from this blind and brutal frenzy?

Risk – from rigour to relationality: Why should risk be seen as always and inalienably social and relational? It is, after all, supposed to be quintessentially non-social. It is the central field of ‘sound science’ for rational policy, whether this be climate change; GMOs for developing countries and the avoidance of starvation; mobile phone safety; or countless other such questions. Sound science for policy, on this view, can and must be science which has kept the social strictly at bay. Let us examine this proposition, canonical as it has been elevated to be.

Risk is about the future, and assessing risks is about assessing the possible future consequences of some action, such as the introduction of a new technology or behaviour. Historians such as François Ewald identify the origins of the idea of risk as an essential cultural component of modernity – the realization that fate did not necessarily rule, and that we could to some extent take responsibility for aspects of the future, which meant acting now in relation to it. Ewald traces the emergence of the language and practice of risk assessment and management to primitive social arrangements which, like insurance, socialized future losses, in the early years of mercantile capitalism in the 15th and 16th century European city states such as Genoa and Venice.

It was from here that the early silk and spice traders with Asia set out, and investors sent ships to ply the risky oceans with slave labour, in order to buy the materials which were to bring such vast profits in European markets. Since losses would ruin any single entrepreneur if a ship was plundered by pirates or wrecked in a storm, they began to be socialized by primitive forms of insurance. This was a social innovation and not a technical one. The accumulation of capital was allowed to escape the bottleneck in which it would otherwise have choked.


Thus emerged the beginnings of what later became the huge, all-encompassing global commercial empires of insurance and reinsurance, covering everything from life, to a sportsman’s knees, to our holiday arrangements and accoutrements – everything except nuclear power, nuclear weapons and GM crops. A defining and crucial feature of this social enterprise is the ability – or the belief in our ability – to predict consequences.

The mid-20th century engineering discipline of risk assessment, or risk analysis and modelling, came with similar aims and expectations. It focused on what could be circumscribed as more precisely defined, deterministic mechanical dimensions – such as a nuclear power plant artificially abstracted from the much messier, far more distributed, more contingent and less controlled field of the full nuclear fuel and waste life-cycle which the plant needed in order to be an actual, functioning technology.

This larger perspective, only hinted at here, allows one to see something which is invariably deleted, both by the professional expert disciplines involved, and by the public policy culture which wrought itself mutually in the image of these risk sciences. This is that risk assessment defines its analytical objects so as to reflect a complex and shifting combination of factors, such as: assumptions as to what counts as harm, which may not correspond with the legitimate evaluative commitments of others in society; beliefs as to what is measurable, thus ‘real’ to the conceptual and observational techniques of those disciplines involved; assumptions as to what secondary or tertiary variables may be useable as surrogates, when the primary ones believed to be salient lie beyond observation themselves; and assumptions as to what and where are the imaginable points of intervention and control in the system of those processes giving rise to those ‘harms’ as defined.


These sorts of assumptions have usually been combined in the murky, obscure and often unstated processes of framing the issue even before risk analysis gets to work. They are also effected, usually informally and unaccountably, during the scientific work itself. In its general policy communication of 2001 on its version of the Precautionary Principle, the EU committed itself to the belief that scientific risk analysis is inherently precautionary – a commitment which seems founded on a woefully naive and false idolatry of science.

The point of this preliminary rehearsal is to note the reduction of what is at origin a question of whether it is sensible and responsible to license some new proposed technology for wholesale use in society at large, to the artificially narrowed question of ‘what are the risks?’ This already excludes any question as to what objectives the technology is supposed to serve, and whether these objectives will realistically be furthered by the innovation. This can be simplified into questions over the social benefits which are promised – to which sectors of society, and under what conditions, assuming they are desirable at all. These in turn give rise to the question of whether there are alternative ways of achieving the same social benefit.

‘Risk’ is thus a discursive framework which already enacts a huge and sprawling politics of exclusion and externalization. Yet this politics is distorted and suppressed as a legitimate and necessary politics – over the social and political issues involved in alternatives, in benefits and their conditions of fulfilment and, if these are achieved, their social distributions – by the wholesale imposition of risk as the only public meaning of ‘the’ issue. Risk, within the meaning imposed by the ‘sound scientific’ institutional cultural framing, is repeatedly and presumptively imposed as the primary or even only focus of public attitudes and concerns.


Science – research, public authority and meaning: I want to underline a deeply problematic cultural character of scientific knowledge as it has come to play its various roles in public life since the mid-20th century. Three of these roles have been: (i) as the perceived crucial resource, and thus object of investment and ownership, in the commercial frenzy of technological innovation and intervention; (ii) in the fragile and anxious scramble for public credulity, trust and authority for the scientific expertise-defined public decisions which have licensed this mounting wave of technological innovation; and crucially, (iii) in the least noticed of its roles, providing by default the public meaning of issues which are inevitably and rightly public issues involving science – not, as some wrongly call them, scientific issues. The last has, in my analysis, created many of those governance difficulties by provoking justified and widespread public alienation from governmental scientific and policy expertise institutions.


This last element is not a familiar issue in the way that the prior two can be assumed to be. By the first I mean science’s capacity, categorically limited though it is, to assist engineers in imagining and creating strategically powerful technological resources, like the fission and fusion nuclear bombs and transgenic agricultural technologies. By the second I mean its important ability to inform policy issues and decisions – such as whether we can reasonably infer from the available evidence (granted that each single stream of it is replete with contingencies, but that these separate streams add together to compose the most credible circumstantial account) that there is indeed human causal responsibility for potentially unmanageable2 climate change. These two roles of science are well-known and widely addressed.

However, the third role I list is not recognized at all. It has been buried by the dominant policy and scientific discourses of those public issues where the scientific role of providing authoritative advisory information to policy – and thus of justifying policy commitments which may be contentious – has been allowed to extend into the role of defining what the public issue actually is, or should be. In this sense, politicians and policy-makers have defaulted on their public responsibility, which is to hear all of the different streams of articulated public concern, including ones which may not be expressed directly or recognizably in familiar policy terms, and to articulate what the meaning of the issue is as a public policy issue.


Without needing or wishing here to take sides in the issue itself, I can use the example of GM crops and foods in Europe to illustrate this. GM crops and foods were assumed by commercial and scientific (including public scientific) actors to be the dominant trajectory for future food and agriculture globally. Investment in R&D, including public R&D, followed suit, as did the process of securing commodity ownership through commercial intellectual property rights and exclusions. Differentiated and grounded plant breeding communities were broken up to reflect this anticipated, more technologically concentrated and standardized trajectory, thus (almost) destroying the basis of alternative innovation pathways in agriculture.

The historical evolution of regulation and policy over innovations, and of how society appraises them case-by-case, has been arbitrary, since the ideological presumption in a capitalist market society is that any innovation which someone defines as in their interest to promote for societal use and dissemination is, by definition, a public good. The logic is that ‘private good equals contribution to public good; public good equals the accretion of all private goods.’ Thus there was never imagined to be any need to ask questions about the social benefit provided, or rather promised, by any innovation. The burden lay on showing any harm which it would impose on someone or their property interests. The sole agenda of regulatory appraisal was ‘risk’ prediction.


With genomics technologies, especially in human domains, this has in recent years been elaborated to include bioethical questions, but there have been no fundamental changes to the historically established framework I describe. Scientists were properly looked to for expert advice on risk questions, and so began the elaboration of countless institutionalized expert advisory committees, ostensibly giving advice to, but in reality exercising authority over, policy and political agents (government ministers). This has been called ‘science’, often with the normative amplification of the demand for ‘sound science’.

It has, however, never operated like the ‘science’ which claims to operate with intellectual freedom from societal pressures and interests. As has long been documented and described, regulatory science is both mandated under political and legal frames and, often further, subjected to non-universal disciplinary cultural limitations. Both of these delimit the risk questions which can be asked, the evidence recognized as valid and salient, and the ways in which those selectively recognized questions can be answered. Thus ‘risk’, as variably defined in such processes, is given a meaning which may not at all correspond with other legitimate definitions of what might be deemed to be ‘at risk’.


Yet, despite these well-recognized partialities constituting the science of risk assessment, politicians and policy actors have effectively handed over agency to define public meanings – as ‘risk’ only – to scientist-advisers. Their own frames of reference have already been fixed and limited (i) by the terms of reference of their risk assessment role, and (ii) by their own diligence in attempting to enact their public servant role, thus imaginatively anticipating the public world to which they are contributing.

Consider, for example, how a government policy commitment (justified by reference to ‘science’) to, say, GMOs as a hegemonic policy and technical trajectory is treated as a democratic given. Yet this public world – its meanings, concerns, issues – has not been defined by democratic agents. In these domains of innovation, technology and science, which increasingly make up the modern international public world, the supposedly representative political agents of democracy have deferred (too far) to science. They have effectively allowed the scientist-policy actors, by default, to take over the democratic political responsibility to define public meanings.

From direct experience, and given the training scientists receive, their capacities do not stretch far enough to play this role properly. Many of them would be the first to affirm this.4 Yet in practice, when faced with defending the ‘advice’ they have given, they repeatedly define public opposition as due to public ignorance of risk, and so reproduce the risk-only public meaning of the issue. Given their scientific training, this scientization is understandable, but that does not make it legitimate, enlightened, or politically constructive.

They have never recognized – and the politicians have never corrected them – that for the public the issue might not be as they, the scientists, have assumed. They have never recognized that the democratic world may be composed of publics with legitimately different meanings, concerns, questions and knowledges from those of ‘science’.


The starkest example of this perversity can also be taken from the European experience of public conflict over GMOs. As has been well-trawled by endless commentary and social scientific work, though not yet adequately understood, public opposition to the governmental, industrial, science policy, and scientific advisory establishment’s commitment to GM crops and foods began to be mobilized from the mid-1990s onwards. It was encouraged by multi-pronged NGO critique of the risk-safety justifications being expressed and of the selective arrangements for arriving at them. As it proliferated, this public opposition was joined by aggressive tabloid media campaigns that went international. These were brushed aside by government, science and industry as being founded only in misunderstanding and rejection of scientifically affirmed factual reason.

This public deficit model explanation echoed identical establishment responses to widespread public opposition to nuclear power in the 1970s, and to various similar opposition to other innovations licensed as ‘safe’ by science. As a fixed conviction for science, government and industry, the issue was risk; thus public opposition could only be imagined as inspired by their disputing the risks as assessed by science. Since the scientists believed they were correct in their risk field, they could argue that this was a conflict due only to public ignorance, misunderstanding and anti-science, egged on by exaggerating NGOs and sensationalist media.


Starting from about 1997, social science research, including my own, showed that – public deficits of knowledge aside – the opposition was largely inspired by the typical public feeling that its central concerns were being misunderstood and ignored, even misrepresented. As the opposition attempted patiently to communicate this dislocation of meaning to policy and scientific experts, and the falsehood, indeed counter-productiveness, of their deficit-model rationalizations of the steadfast opposition, we saw a brief moment of success as the point registered. But we then saw the same basic public deficit explanation reborn in a new form, almost in the same breath!

Thus public deficits of understanding of scientific content (‘Do non-GM plants also contain genes?’) were succeeded by public deficits of understanding of scientific process (‘Does scientific method produce certain knowledge, thus freedom from risk?’). I soon began to realize that the public deficit model explanations of opposition were actually symptomatic: they were expressing something tacit, beyond their direct discursive object. This seemed to be that the authorities acting in the name of science – whether governmental, scientific or industrial – were unable to recognize that their own framework of meaning was limited. It excluded some meanings – for example, about the alignment of GMOs with the attempt effectively to concentrate control of the global food supply in a handful of global mega-corporations.


Difference manifest, science unsighted: However, another public meaning which social scientific research heard spontaneously expressed by international (including US) publics, was much closer to the conventional scientific ‘risk’ definition of the issue. Yet, it was a fundamentally different issue in principle. This was the widespread public concern about the likelihood of unpredicted consequences from releasing irreversibly countless living and reproducing organisms to the environment.

Risk by definition does not include this question, and by definition even the very best scientific risk assessment cannot address these questions. Risk assessment is founded (in a selective way) on posing questions, and predicting possible harms and their probabilities, from what we know. Unanticipated consequences arise from the ignorance which always lurks beneath and within scientific knowledge. The institutional regulatory and risk assessment rule has simply been to exclude this ignorance issue from responsibility. Unknowns do not exist; their presence is denied.

Legally, this is reflected in the common law principle that negligence can only be found in an action which neglected the state of scientific understanding (about likely harms and means of avoidance) at the time of the decision to act. Yet if laboratory scientific knowledge is de facto being translated into large-scale commercial practice ever more rapidly thanks to commercial competition, then the chances of unpredicted consequences occurring are, all else being equal, likely to be greater – as a result of human action, nothing else.


Ordinary citizens across Europe, and in the US, expressed concern about the (possibly longer-term) unpredicted consequences which the haste to commercialize GM science into crops and foods released through the environment might bring about. Their concern was only provoked further by the authorities’ repeated refusal to acknowledge it, and by the imposition upon it of a risk-only meaning. This the authorities did by routinely referring only to risk assessments, as the scientifically revealed, factual answer to public concerns.

That ‘risk’ and ‘risk assessment’ here had socially constructed meanings5 which were themselves under contention was never recognized. Instead, the scientific and policy institutions in effect denied that scientific knowledge in this field was beset with ignorance, and so denied the very condition about which ordinary citizens quoted example after example from evidence-based historical public experience – PCBs, CFCs and stratospheric ozone, thalidomide, to name some – to indicate how their fact-based concerns differed from the ‘risk’ with which the experts insistently shackled them.

What we see here is a twin process in which science-and-policy, mutually constituted as outlined above, showed itself unable to recognize and respect the other, or difference. Unpredicted consequences arising from ignorance – or non-knowledge, as Beck described it – represent and impose surprise. Surprise thus becomes a constitutive element of such issues, involving but not exclusively risk questions, and not merely an ephemeral public-emotional dimension.

Trust in the authorities is a centrally salient question. If people are – against authoritative scientific denial – aware of the likelihood of unpredicted effects, then they are also aware of the chances of surprise consequences. In such circumstances it is only logical to ask: who will be in charge of policy responses to such surprises, and can we trust them to act in the public interest? The track record shows those authorities to have been in a state of denial over a key issue. They have also systematically denied – indeed misrepresented and belittled – key public concerns. They can hardly expect to enjoy much public trust in such conditions.


Ignorance, and the surprise which accompanies it, can be seen as an epistemic other. It is difference manifesting itself as an unknown set of realities, intruding – beyond our control, though not beyond our responsibility – into a world we thought we controlled. The public, as citizens, gave civic support to mobilized opposition to GM promotion and implementation on grounds which the science-obsessed and sycophantic authorities could not recognize as simply ontologically different: different meanings, not just failed understandings from within the same world of meaning. Instead, in a so-called democratic knowledge society, the myths are allowed to continue that opposition to GM crops and foods in Britain and Europe was due solely to public irrationality, media irresponsibility and NGO exaggeration.

Science as torch-bearer for a cosmopolitan, democratic global knowledge society? We are in serious need of some reflection; and some different – scientific and political – imaginations.



1. It is interesting that in his recent book, The Politics of Climate Change (Polity, 2009), Giddens indulges in a gratuitous critique of ‘the green movement’ and its favourite policy principle, that of precaution – which he describes as: ‘don’t interfere with nature’. This caricature is quite unrecognizable. Giddens also notes, this time correctly, that a key position of the Greens is not anti-science but anti-scientism – they are exponents of science, to which the precautionary principle is not opposed, even if Giddens’ fantasy version of it would be. However, Giddens then fails to develop this central science-scientism distinction with respect to climate change.

I hope readers will recognize the same ‘green’ distinction here (it is not at all exclusively green, but much more general) in relation to risk science: what it serves to deny, externalize and conceal, and what it thus serves to impose on what is then, with perverse irony, called ‘knowledge society’.

2. I mean unmanageable here in the sense of human society’s inability to respond to climate changes without major social and human harms of whatever kind. That is, I do not mean to imply that the climate ever was, will be, or can be manageable in the conventional sense of directing its forms and changes.

3. There is a neat – if perverse – illustration of the SSK co-production point here. The deliberate policy destruction of the UK public plant breeding institution in the late 1980s left little or no remaining ground for developing and deploying alternative plant genomic strategies such as marker-assisted selection, which would have required differentiated plant breeders informed by plant genomics but also close to diverse, locally grounded practical agronomic conditions. Thus by the mid-1990s, social-institutional as well as scientific conditions had been structured in correspondence with the view that the only viable techno-scientific future for UK agriculture lay in transgenic (GM) seed technologies.

4. Several times in personal exchanges with leading UK scientist advisers – including a government chief scientist, two ministry chief scientists, the head of a key government safety agency, and the chair of one science advisory committee – I have been regaled with straightforwardly behaviourist, normative accounts of how publics respond to risk-related issues, on personal lifestyle as well as on government commitments such as food regulatory matters.

5. This does not at all mean they were simply made up as ‘facts’ from social interests alone, as science-wars protagonists like to misrepresent this position.