RECENTLY, I asked my seminar students at Harvard Law School whether there were any circumstances under which they would say no to innovation. The class looked stunned and was silent. The student at whom I was looking directly when I put the question quietly but decidedly shook his head.
This happened in the spring of 2009, on a day when the Dow Jones Industrial Average fell by nearly 300 points, and the world was in severe recession as a result, in part, of ingenious but unregulated innovation in the financial market. At the heart of that story were innocent-seeming devices – with bland, opaque monikers like CDOs (collateralized debt obligations) – that many economists saw as a clever way for investors to take advantage of fluctuations in people’s debts, payment schedules, and interest rates.
Mathematical models had created value out of expectation, not for the first or last time: only in this case the calculators had not reckoned with the fact that incentives were skewed in favour of uncontrolled speculation, and that unprecedented masses of debt, sliced and exchanged like hard currency in international markets, could have been incurred by people with not even a scintilla of ability to pay. Only when the markets crashed were the innovations that had fed the preceding boom years shown to be no better than empty promises, the toxic waste of our hyper-consuming societies, needing to be re-internalized at vast public expense.
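The failure mode described above — valuations built on expected defaults that did not reckon with how those defaults correlate — can be illustrated with a toy Monte Carlo sketch. This is a deliberately simplified illustration, not any actual pricing model; every parameter below (pool size, default rates, the 20% attachment point) is invented for the example. A senior claim on a pool of loans looks nearly riskless if defaults are independent, yet a similar average default rate, concentrated in a shared macroeconomic shock, breaches the same claim with regularity:

```python
import random

def tranche_breach_rate(n_loans=100, p_default=0.05, correlated=False,
                        trials=10_000, seed=1):
    """Fraction of simulated years in which a 'senior' tranche is hit.

    The senior tranche absorbs losses only after the first 20% of the
    loan pool defaults (a hypothetical attachment point).
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        if correlated:
            # A single shared shock drives all loans together: with
            # probability 0.1 the economy is stressed and every loan
            # defaults with probability 0.5; otherwise 0.01.
            # Average per-loan default rate: 0.1*0.5 + 0.9*0.01 = 0.059,
            # close to the 0.05 used in the independent case.
            p = 0.5 if rng.random() < 0.1 else 0.01
        else:
            p = p_default  # each loan defaults independently
        defaults = sum(rng.random() < p for _ in range(n_loans))
        if defaults > 0.2 * n_loans:  # senior attachment point breached
            hits += 1
    return hits / trials

independent = tranche_breach_rate(correlated=False)
correlated = tranche_breach_rate(correlated=True)
print(independent, correlated)
```

Under independence, more than 20 defaults out of 100 at a 5% rate is astronomically unlikely, so the senior tranche essentially never takes a loss; with the shared shock, it is breached in roughly one trial in ten, even though the average default rate is nearly the same. The 'value created out of expectation' depended entirely on the independence assumption.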
To be unequivocally sanguine about innovation under these circumstances seems naïve, to say the least. And skepticism deepens when we look at the relationship between innovation and risk-taking in this case. Not all countries and not all banking systems suffered alike from this particular boom-bust cycle. Iceland, a small but proudly independent Nordic country, led the rush to buy the promises and was among the first to feel the pain. In an era of neo-liberalism, Icelandic banks had enthusiastically expanded their international debt holdings, while foreign investors, including many British local authorities, had deposited hundreds of millions of dollars in Iceland to take advantage of the country’s exceptionally high interest rates.
In the process, Icelandic currency had become one of the most overvalued in the world. When the markets crashed, Icelandic banks fell far short of being able to repay their depositors, and the discrepancy between the real and imputed value of the króna became apparent as a classic ‘bubble’. All three of the country’s major banks collapsed, and the idea of nationalization, virtually taboo at one time, came roaring back as the government was forced to restructure them and assume control.
India too was not immune to the global shocks of 2008-2009, but Indian banks avoided the crisis that gripped their counterparts in much of the West. It was a tortoise and hare story in modern clothes: Indian banks that had seemed too stodgy to keep up with high-risk, high-yield global innovation looked in hindsight careful and prudent. On inquiring into this reversal of fortune, a New York Times reporter was told that the answer lay in a complex conservatism that implicated all of Indian society, from families to high-level regulators. One interviewee attributed the staidness of Indian lending habits to the existence of parallel credit systems: ‘Savings are important. Joint families exist. When one son moves out, the family helps them. So you don’t borrow so much from the bank.’ Interestingly, this informant focused on borrowing, not lending, as the salient social practice, and cited engrained cultural habits, not mathematical formulas, as determinative.
Others noted that Y.V. Reddy, Governor of the Reserve Bank of India in the years before the crisis, saw his job as curbing bankers’ in-built greed for short-term, high-risk, money-making schemes. As a result, Indian banks were held to more stringent standards and kept from buying into the innovations that fed the bubble. Their short-term profits suffered, but by late 2008 one of Reddy’s former critics in the banking sector was prepared to say, ‘He saved us.’1 Writing in March 2009, the economic analyst and Nobel laureate Paul Krugman made a similar observation about the US financial system before it was gripped by what he calls ‘the market mystique’: ‘It all sounds primitive by today’s standards. Yet that boring, primitive financial system serviced an economy that doubled living standards over the course of a generation.’2
Politicians, publics, economists, and other social scientists will long ponder what went wrong in the lead-up to the worst financial crisis the world has known since the Great Depression of 1929, and it is not my object to dwell on the specifics of this catastrophe. But these events pose a general challenge to those who study the relations between knowledge and society, in particular, those concerned with innovation and risk and their consequences for social inclusion and social justice. For, among all of its ramifications, the economic crisis of 2008 raises enduring questions about the role of publics in processes of innovation, the power of expertise, the accountability of the state, and, at the broadest level, the responsible governance of science and technology.
Banking technology in this case turned out to be a system out of control. Why it was more or less so in different countries – Iceland and India, for example – may yield fascinating insights. Were the downsides of economic innovation better governed in one country than another? If so, was it because states balanced the needs of citizens and the appetites of financial institutions differently? What does good governance of innovation mean anyway, and by what criteria should we assess whether one framework works better than another?
Such questions have acquired a new edge in this era of globalization. They have been addressed before largely as matters of national politics, against the backdrop of engagements between science, technology and society in individual nation states. Now, as all three of these systems are increasingly on the move, tumbling out of old geopolitical boundaries and legal systems onto the unruly global stage, power and responsibility need to be confronted again. Who innovates in today’s complex, technology-infused, globally networked societies; for whose benefit and at whose cost; whose innovations travel; and how can we ensure that the ability to remake the world through science and technology is subjected to meaningful democratic debate?
This essay, the outgrowth of encounters between Indian and western academics at the turn of the year (see Esha Shah’s contribution in this issue), focuses on the problem of producing innovations that cross frontiers, and the special problems that these raise for responsible governance. It looks beyond the conventional wisdom that sees all innovation as a public good because, after all, scientific and technological developments are designed to increase knowledge and improve the human condition. Instead, I look at the contrast between the close and cooperative relations that have developed between science and the state and the absence of correspondingly open, inclusive mechanisms through which those actors communicate with wider publics.
It is these latter connections that need to be repaired and restored, or reinvented if need be, if scientific knowledge and its technological applications are to be harmonized with the public good on a global scale. This poses a twofold challenge for the institutions of democracy: how, when confronted by the prospect of new, life-changing technologies, should publics be consulted about their preferences; and, when innovation travels across the historically defined boundaries of nation states, how should those on the receiving end be included in governing the changes that will affect them?
Part of the answer is to recognize that science and technology – for all their power to create, preserve, and destroy – are not the only engines of innovation in the world. Other social institutions also innovate, and they may play an invaluable part in realigning the aims of science and technology with those of culturally disparate human societies. Foremost among these is the law. In what follows, I will make a plea for reintegrating the dissecting, disintegrating, and ultimately undemocratic imagination of science with the ordering, integrating, and empowering imagination of the law.
Even in pre-history, technological entrepreneurs allied themselves with state power: the Trojan horse enabled the Greeks to penetrate the impregnable walls of Troy, and the legendary architect Daedalus built the labyrinth for King Minos of Crete to contain the ungovernable Minotaur. But science and technology policy as we know it today emerged only after World War II. Newly decolonized developing nations and their erstwhile colonizers, the leaders of the industrial revolution, followed somewhat different policy pathways, but there was wide agreement on the principle that public investments in science and technology are good in themselves.
Occasional demonstrations of effectiveness were needed, as for all public policies. Publicly supported science could not remain a pastime for the detached intellect or be done merely to satisfy individual curiosity. Justification came most easily when everyone could see what was at stake and agree on research to meet those needs. Science and the state throve especially well in the Hobbesian Cold War environment. On both sides of the Iron Curtain, the Leviathan needed knowledge and technological instruments to ward off chaos. Publicly supported research programs proliferated around life’s basic necessities: food, health, water, energy, transport, communication and pre-eminently, national security.
In the United States, it was not only the atomic bomb that prompted a new obsession with state-sponsored innovation. It was also a cornucopia of other wartime inventions that promised health, prosperity and increased employment: pesticides and antibiotics, radio and aviation, plastics and synthetic fibres, fertilizers and hybrid plant varieties, and of course the atom’s peaceful postwar incarnation, nuclear energy.
Fund science, and it will repay us with inconceivable benefits. That was the theme of a report, evocatively titled Science – The Endless Frontier,3 written by perhaps the best known and most influential of all US presidential science advisers: Vannevar Bush, who coordinated the scientific side of the US war effort from 1941 until its end. Responding to a request from President Franklin Delano Roosevelt, Bush in July 1945, a few months after FDR’s death, delivered to President Harry Truman the blueprint for a new national funding agency for science.
Bush’s report outlined a de facto contract. It built on the successes of the previous years to promise more of the same, provided that national goals and priorities were appropriately directed. The balance of research had tilted, in this knowledgeable observer’s view, too far toward industrially sponsored science. Federal funds should therefore be targeted toward upstream knowledge creation in universities and other centres of basic research, which Bush defined as research ‘performed without thought of practical ends.’ In return, science would reward the nation with endless discoveries and a highly trained workforce.
For this contract to work, science would need autonomy to develop its own priorities and control its own research agendas. All these elements converged into what we may call the central dogmas of postwar US science policy: more basic science means more innovative applications, that is, technologies; more innovation means greater social welfare; and such welfare is best secured through federal support for an autonomous, largely self-regulating system of basic research.
In Bush’s rigorously linear imagination, the idea of governing innovation presented few difficulties. Basic research, the prime beneficiary of public funds, would be largely self-regulating. Merit-based review, relying on the scientific community’s highly developed, communally enforced criteria of excellence, would set priorities for allocating public money. Private initiative, driven by corporate profit motives, would select from the resulting pool of public knowledge the choicest discoveries leading to the greatest good; on certain large collective issues, such as national defence, the state would take over the development from research to technology.
Though Bush’s report did not deal with the delivery of knowledge from lab to market, it was understood that the state would intervene again before new products hit the market. Regulation would then step in to weed out goods and services that did not meet formal, risk-based tests of safety or efficacy. Legal and institutional arrangements transformed this conceptual scheme into administrative reality. In the United States, promotion (labelled ‘policy for science’) and regulation (using ‘science for policy’) of technological innovation were, by the 1970s, often placed in different executive agencies, each equipped with its own issue framings, analytic approaches, lobbies, and micro-politics. Conflicts about the results of innovation were usually left to be resolved by litigation, often late in the game, when advances in research had already entered into commerce as tangible inventions.
By the turn of the twenty-first century, it was clear that neither the linear model nor the accountability structures built upon it could adequately deal with increasingly complex questions of governance. Fifty years after the Bush report, which presented an idealized and simplified picture even in its own time, the modes of production of scientific knowledge and technological innovation have changed almost past recognition. The distinction between basic and applied research is widely dismissed, with many admitting that the legitimacy of public funding depends more on demonstrations of economic and social utility than on peer-confirmed assertions of scientists’ creativity. The strategy of using technology to sell science (Bush’s ploy in 1945) is now the norm for political leaders.
A more forthright, indeed instrumental, policy discourse admits the interplay of political motives, knowledge gains, and material payoffs at all stages in the public funding of research. At the same time, many factors have combined to weaken both scientific self-regulation and national oversight, the linchpins of governance under the old social contract for science. These include the intensifying links between science and corporate interests, the international mobility of the scientific and technical workforce, the appearance of new national centres of innovation, the growing off-shoring of parts of the supply chain by wealthy countries, and the competitive targeting of global markets by innovative manufacturers.
Following an influential group of European science and technology analysts, the resulting more complex account of technoscientific production is often called ‘Mode 2 science’. It has the following characteristics:
* Knowledge is increasingly produced in contexts of application (i.e., all science is to some extent ‘applied’ science).
* Science is increasingly transdisciplinary, that is, it draws on and integrates empirical and theoretical elements from a variety of fields.
* Knowledge is generated in a wider variety of sites than ever before, not just universities and industry, but also in other sorts of research centres, consultancies, and think-tanks.
* Participants in science have grown more aware of the societal implications of their work (i.e., more ‘reflexive’), just as publics have become more conscious of the ways in which science and technology affect their interests and values.4
The rise of Mode 2 thinking can be tracked, in part, through the actions and discourses of policymakers. President Barack Obama’s inaugural promise to restore science to ‘its rightful place’ was at one and the same time a promise to harness science to address big public problems, such as energy and climate change.
Across the western world, states have adopted policies to promote problem-centred research and to make researchers more conscious of their obligations to society, for example, through greater focus on salient social problems, increased outreach across disciplines, collaboration with user communities, and active solicitation of intellectual property rights for ‘basic’ research findings. The proliferation of such hybrid concepts as ‘dual-use technology’, ‘mission-oriented research’, and even ‘advocacy science’ signals that the old dichotomy between basic and applied science – the former disinterested and university-based, the latter attached to industry and driven by profit motives – has been replaced by an altogether more complex mix of incentives, institutional partnerships, lobbies, and driving special interests.
The easy equation of science with progress has also proved hard to sustain. First, while postwar America celebrated science and technology for vanquishing both human and environmental enemies, European responses, conditioned by war’s devastation and more immediate exposure to the machineries of death, remained significantly more ambivalent.5 Second, rapid developments in the life sciences awakened ethical concerns about the human capacity to intervene responsibly in the redesign of nature. Third, technologies both old and new brought unintended consequences that negated claims of unambiguous social advancement; the promissory phrases of the postwar period, like ‘science – the endless frontier’ or ‘electricity too cheap to meter’ failed to yield full dividends.
The Green Revolution alleviated hunger in many parts of the world but aggravated some social disparities and increased environmental degradation. Contradicting assertions of safety, the chemical industry experienced its most lethal disaster with the Bhopal gas leak of 1984, while nuclear power displayed its deadly force with the 1986 reactor explosion in Chernobyl. Chlorofluorocarbons, widely introduced as non-toxic refrigerants and propellants by the 1960s, were found in the 1970s to be eating away the stratospheric ozone layer. And by the 1990s, the family car, possibly the best loved product of industrialization, was implicated in the prospect of catastrophic climate change.
In the early years of the 21st century, then, the assumptions that underpinned the postwar social contract have lost their axiomatic status. The production of new knowledge is increasingly tied to social purposes, often shaped and steered by private interests or by the political concerns of the moment. The power to frame research that is good for society no longer lies exclusively or even primarily within the control of national public institutions, and, caught in the cross-fire of economic and political competition, even science has lost its univocal power to declare what counts as good or sufficient knowledge.
It is clearer than ever that innovation carries unavoidable risks, neither wholly foreseeable nor precisely calculable in advance; and these risks extend beyond harm to health, safety, the environment, and the economy. They include threats to long-established values and cherished forms of life. Human nature, and why it is worth holding sacred, is once again at the centre of conversations, with organized religion sensing opportunities in spaces that the march of science has left dangerously evacuated. These circumstances underscore the need to revisit the principles and practices of accountability by which democratic societies seek to ensure that the knowledge they pay for will serve desirable public ends.
That inquiry need not begin with a blank slate. Throughout their short history, modern nation states have acknowledged the need to manage shocks and surprises in citizens’ lives, including some of the adverse consequences of technological change. Three mechanisms for ensuring accountability deserve special attention: the market, regulation, and ethical deliberation. Each is important to any discussion of democratizing the governance of innovation, because each has an associated model of how to involve publics in the innovative process. More important, each conceives the public itself in terms worth dissecting. At the same time, each also has well-recognized limitations and pitfalls that provide a starting point for further reflection.
The market embodies the ideal of direct democracy. It endorses the view of humans as enlightened, rational choosers, able to act in advancement of their own good. It seeks to combine the liberal ideal of personal free choice with the logic of utilitarianism, that is, the trade-off of risks to the few in return for benefits to the many. In the market framework, aggregated individual preferences determine what will or will not be accepted by society, how much will be produced, how it will be distributed, and at what cost. Organized consumer power serves in the idealized marketplace as a check on capital’s arrogance – whether through active campaigns and boycotts or through passive avoidance of what capital produces.
The ‘grape boycott’ by which the United Farm Workers Union allied with American consumers to secure the rights of California’s migrant workers still stands as an iconic achievement in US labour history. The spectacular failure of brands from Ford’s Edsel motor car in the 1950s to ‘new Coke’ in the 1980s attests to the enduring force of consumer preferences in the reception of products. The massive rejection of genetically modified (GM) crops by consumers in Europe and elsewhere signalled to US producers that they would have to change their production and marketing strategies and step back from their hard line against the labelling of GM foods. More generally, the rise of corporate social responsibility, as both slogan and practice, reflects the private sector’s awareness that consumer preferences extend today to the means of production as well as its ends.
Market-based approaches to managing innovation received an adrenaline boost with the fall of the Iron Curtain and the ensuing euphoria in the West, memorably captured in Francis Fukuyama’s phrase ‘the end of history.’6 Subsequent years heightened the deregulatory fervour that had already set in during the administrations of Margaret Thatcher in the United Kingdom and Ronald Reagan in the United States. It took the seemingly bottomless financial crisis of 2008 to reveal yet again the weaknesses of the market model.
Briefly, the market showed that it has no means to correct poor design choices based on partial or faulty information, nor to compensate for human error or bad intentions, such as ruthless cost-cutting or unbridled acquisitiveness. Markets no less than bureaucracies, moreover, may be subject to lock-ins and tunnel vision based on past practices that were never subjected to adequate scrutiny, as in the case of novel debt-backed financial instruments that led to the stock market collapse of 2008. Consumers, who enjoy immense power to express their preferences once products are put before them, are virtually powerless to intervene upstream in the processes of industrial production, including the all-important phase of design, when decisions are made about what kinds of things will be put in circulation. Accordingly, markets provide little or no democratic control over the pathways that innovation follows; consumer choice sets in only when goods and services are already before us. All these problems, gravely troubling even within nation states, are magnified beyond measure when markets function globally.
To make markets behave responsibly, it takes regulation. This is not a novel insight. Politicians, policy-makers, and economists have all offered arguments for regulation to correct for so-called imperfections in markets. These range from forcing producers to internalize negative externalities, such as the invisible costs of environmental pollution and resource depletion, to creating open information systems to facilitate meaningful consumer decisions. For our purposes, regulation is equally important because it entails its own brand of democratic process that precedes and, in some respects, supplements the controls exercised through consumer choice. If markets emulate direct democracy, then regulation incorporates a deliberative model, based on the ideal of humans as reasoning beings.
The legitimacy of regulation depends on the responsible use of authority by bureaucrats who are accountable to elected officials and the public. That accountability in modern societies increasingly takes the form of reasoned decision-making, with administrative agencies called upon to explain both why and how they want to act. The public display of rationality in turn requires regulators to rely on experts, who offer specialized knowledge and skills, as well as modes of interaction designed to promote impartiality. Regulatory proceedings also offer space for public engagement through notice of proposed state actions, hearings or consultations, and the opportunity to challenge decisions seen as unreasonable or unlawful. At its best, regulation addresses types of innovation that the market never touches, such as state-sponsored development projects that may entail significantly negative environmental or welfare consequences for marginal populations.
Despite its insistence on rational decisions, its sensitivity to hidden externalities, and the opportunities it offers for public reasoning, regulation too poses problems as an instrument for the democratic control of innovation. Like the market, it comes into play only after important design choices are already in place, and like the market it operates only within prior framings that establish which issues are up for contestation and which are not. Regulatory framings are closely aligned with producers’ imaginations, driven by exaggerated dreams of progress and hopes of gain. Fear and doubt tend to take the back seat, and – beyond the confines of market research – little if any attention is paid to the social contexts in which innovations ultimately come to rest.
In the early history of crop biotechnology, for example, risks to human health received more attention than the far-reaching implications of genetic modification that eventually dominated critics’ attention: implications for biodiversity, resistant strains, animal welfare, food security, subsistence farming, and ownership of biological materials. With their minds, as well as their legal mandates, fixed on human ingestion and environmental release as primary and secondary concerns, risk analysts overlooked the longer-term social and political consequences of revolutionary changes in agricultural practice; even the precedent of the Green Revolution sounded no special alarm bells.
Attempts to raise those wider issues, moreover, faced an uphill struggle, because the discursive playing field had already been stacked against dissent. Support for biotechnology was cast as the enlightened view, consistent with progress; opposition, almost by definition, consigned the opponents to the disreputable company of Luddites and science deniers. Particularly in the United States, where the possibilities of gene manipulation introduced a wave of technological enthusiasm, analysts working within the frame of scientific risk assessment found no reasons to put the brakes on innovation. Contrary attitudes were represented as ignorant, misguided, fraudulent or irrational – hence not to be taken seriously for policy purposes.
The third major framework through which democracies govern innovation is ethics. This framework conceives of citizens as holding values that need to be clarified and factored into governance decisions. An influential effort to bring ethics into innovation was the US programme to study the Ethical, Legal and Social Implications (ELSI) of the Human Genome Project. Since its inception, the ELSI model or its near equivalents have been added to the assessment of other pathbreaking technologies, such as nanotechnology and synthetic biology. The same public programme that pays for research typically also pays for associated ethical analysis.
On its face, the concern for ethics addresses and seeks to rectify serious flaws in both market-based and regulatory approaches. Whereas the market privileges efficiency and regulation privileges rationality, ethics seeks directly to access people’s moral values and give them expression in the development of science and technology. Under the rubric of ethics, people can, in theory, voice preferences that are rooted in culture and collective experience rather than in science and economics. Ethical analysis also tends to occur relatively early in processes of innovation and thus may have influence before irreversible public commitments are made. In these respects, ethical analysis counters the deterministic forces that drive technoscientific innovation along seemingly inescapable tracks.
Ethical deliberation as currently practiced, however, has weaknesses that detract from its effectiveness as an instrument of democratization. First, the values foregrounded in such proceedings tend chiefly to centre on individual rights, such as autonomy, privacy and bodily integrity. Questions of public value, such as the potentially unequal social distribution of the benefits of innovation, are not as a rule included in ethical analysis. Nor are inter-subjective values, such as the virtues of continuity, communal stability, or established kinship relations.
Second, as with most forms of regulatory analysis, ethical evaluation has become professionalized, treated as a matter for elucidation by expert ethicists rather than by untutored, and potentially unreasoning, publics. Such professionalization tends to narrow the range of concerns that reach the deliberative table. In particular, there is little or no space for hesitation arising from ambivalence, uncertainty, lack of confidence in ruling expertise, or a simple desire for caution.
Third, as relatively new additions to the apparatus of decision-making, ethics bodies operate in a twilight zone in which normal rules of representation and transparency are not in play. For example, such bodies are often voluntary, constituted at the level of firms or universities, so that neither their membership nor their proceedings are open to public view as in the case of statutory scientific advisory committees. Even the presidential advisory bodies that have gained prominence in many western nations serve at the pleasure of the particular politician who convened them and reflect the biases of the administration. There is generally no mandated tie-in between ethical advice and policy responses.
The problems of the market, regulation and ethics point to deeper contradictions between the role of states as sponsors of scientific and technological development and the role of states as custodians of the public will, especially at a time when the relationship between states and their citizens is itself in flux. The old social contract that equated support for science with the enactment of democratic values presumed that there could be no divergence between these two roles. A state that was hospitable to science and innovation would thereby also serve the public good. Vannevar Bush in 1945, at the end of a war won by massive technological investments, imagined no other possibility. To this day, progress through science and technology remains the creed of political leaders who see economic growth and job creation as two of the most durable stepping stones to success. To these has been added the politically saleable aspect of biopower: state interventions to support, prolong and save lives.
Not surprisingly, President Obama, on 9 March 2009, lifted the Bush-era restrictions on federal funding for stem cell research with just such promises: ‘But scientists believe these tiny cells may have the potential to help us understand, and possibly cure, some of our most devastating diseases and conditions. To regenerate a severed spinal cord and lift someone from a wheelchair. To spur insulin production and spare a child from a lifetime of needles.’ Obama’s rhetoric justifying federal involvement was pure Vannevar Bush: scientific judgments should be left to scientists; policy should be founded on sound science, not ideology; this demarcation of responsibilities would ‘restore science to its rightful place’, as he had declared in his inaugural address, and so hasten progress.
But the evidence of the last few decades tells, as we have seen, an altogether more troubled, and troubling, story. The benefits of innovation are neither automatic nor, on a global scale, evenly distributed; costs range from physical displacement, disruption of livelihoods, destruction of communities, and economic loss to polluting accidents, illness and death. As markets expand, there is little evidence that democratic processes are keeping up with the need for producers and their state sponsors to forge links of accountability toward all those who are potentially affected by innovation. Transboundary technology transfers have opened up a vacuum of caretaking that the sociologist Ulrich Beck calls ‘organized irresponsibility’.
In conceiving high-impact projects, from dams and pipelines to standardized GM crops, states seem more sympathetic to the expansionist voices of science and industry than they are to the skepticism or distrust of their own citizens. Much of the time, answerability to more distant populations is not even on policy agendas, though such ‘others’ may figure in state and corporate imaginations as part of an amorphous, aggregated global market. If advanced technologies operate in some respects as a global constitution, at once enabling and constraining human capabilities throughout the world, it is a constitution that has yet to have its ratifying convention.7
Democracy in science and technology policy is under threat not only from the blurring of geopolitical boundaries, but from a process that we might call the ‘demotion of the demos’. Today, we observe across a wide swath of technically grounded policymaking a loss of faith in the very idea of democratic accountability. Steps in devaluing the public role include, first and foremost, moves in both science and policy to deny the possibility of public reason. Laypeople are seen (and, in a variety of science literacy surveys, shown) as being scientifically uninformed, even illiterate. At the same time, work in social psychology argues that human beings may be constitutionally impaired in their capacity to make rational judgments.
‘Rational’ is defined for this purpose as equivalent to a variety of expert and economist logics, such as the following: preventing high-probability risks before low-probability ones; weighing like-probability events alike, regardless of contexts of occurrence; accepting decisions grounded in what experts deem to be ‘sound science’; not giving in to what experts see as irrational fear. Added to these intellectually depreciating moves is the recasting of value conflicts in the framework of ethics, which carries, as we have seen, anti-democratic consequences in both procedural and substantive terms – procedurally by delegating deliberative responsibility to relatively non-transparent expert bodies; and substantively by privileging individual rights over collective and communal values.
In an era of unbridled innovation and a demotion of the demos, what can be done to stop the slide toward technocratic, top-down governance, played out once again on a global stage as in the glory days of empire? On a conceptual level, it is essential to recognize that science and technology are neither the only nor even the primary instruments that human societies possess in imagining and crafting new forms of life. A reenergizing of the legal imagination, in particular, may invigorate tired debates on science and innovation, addressing the issues of inequality, normativity, and reductionism that so often accompany the introduction of new technologies, and that cannot be fully resolved by the frameworks of market, regulation and ethics as we know them.
I will end this essay with reflections on three aspects of the law – especially of law operating in its constitutional rather than its regulatory mode – that may help restore scientific and technological innovation to the service of democratically determined ends: first, its ability to integrate the normative and the technical; second, its accessibility across cultures and social structures; and third, its construction of humans not as cognitively impaired beings, but as active, rational, and capable agents.
The law’s integrative force derives from its explicit mission to bridge the is and ought. Committed to a bedrock of facts, the law nevertheless addresses humankind’s yearning to establish good ways to live with its ever-changing knowledges and material circumstances. Although the passage of time and the growing complexity of the modern world have rendered legal problems increasingly technical, fundamental questions still arise, and they allow people to articulate their hopes and fears in the face of technology’s seeming inevitability.
The life sciences and biotechnologies have raised a host of such questions: can a state block women’s access to contraceptives; is there a right to abortion; what is the moral status of the human embryo, in or out of the human body; when is prenatal testing contrary to protected human values; when does the death penalty constitute cruel and unusual punishment; are there constraints on a state’s right to carry out a census of its own people; who owns human tissues; can living things be patented; what does privacy mean in medical relationships or with respect to information derived from human DNA? Such questions have arisen before the high courts of many nations and their decisions have not always converged, showing that, through law, people may still have the power to press for culturally disparate visions of progress.
Despite real barriers of cost and expertise, and despite all-too-frequent alliances with prevailing economic ideologies, the law remains significantly more accessible as an imaginative resource than science and technology. Like fire, but unlike today’s ‘high’ technologies, law is truly a common property of humankind. No modern society lacks a legal system, and many developments in technology governance over the last few decades illustrate the central role of the legal imagination in ordering the disorder that accompanies innovation. Technologically disadvantaged societies, as well as critics of the privatization of life, have used the law to insert non-mainstream normative ideas into the governance of new technologies.
Consider, for example, indigenous knowledge rights, creative commons licenses for sharing intellectual property, labelling and tracking of genetically modified foods, prior informed consent for transboundary shipments of hazardous chemicals, compulsory licensing of essential medicines, environmental human rights, and growing legal protections for non-human species – all of which arrived from the peripheries of power to gain recognition in transnational regimes.
In other areas, legal imaginations have proved less effective, though no less subversive. A notable case was the Indian legal community’s failure to impose upon the Union Carbide Corporation the doctrines of ‘interim relief’ and ‘enterprise liability’ invented by Indian lawyers in the aftermath of the 1984 Bhopal disaster.8 Even so, that tragic episode helped lay the basis for corporate social responsibility, which has since developed as a powerful norm for controlling business behaviour.
Last, and perhaps most important, advances in public law in many countries over the last half-century have reinforced a vision of human capability that is very much at odds with the depreciating, deskilling image that permeates much discourse on risk and innovation policy. Instead of the chronically biased, informationally challenged, and technically illiterate public, who cannot understand and will not be guided by science and reason, much of administrative law assumes that people are intrinsically knowledge-able: that is, they can seek out, process, absorb, and rationally act upon information relevant to their safety and well-being. This, of course, is also the image of human capability that the market presupposes, and to that extent the law arguably only reaffirms the principles of liberal individualism. Nonetheless, this view of human nature, a product of the Enlightenment, is a useful corrective to the infantilized, ignorant figure of the Luddite who figures so prominently in dismissive accounts of popular resistance to technological change.
The democratic deficits of regulation, discussed above, do not detract from the law’s pivotal, in effect constitutional, role in constructing a reasoning, competent subject who should not be ignored in making policy. In promoting, indeed enabling, this understanding of citizens, administrative law has taken considerable pains to make the state’s workings transparent and to offer citizens the opportunity to participate meaningfully in technical decision-making. The presumption that citizens can do so if given a chance is built into regulatory frameworks for food and drugs, the environment, consumer products, motor vehicles, and countless other targets of innovation. And through liability and mandatory disclosure rules, the law has also stepped in to correct for deficiencies in the market that bar citizens from access to pertinent knowledge and information.
Innovation occurs in a world of inequality which it may ameliorate or exacerbate. The best hope for steering innovation toward positive ends is that it should respond to people’s self-determined needs and aspirations, provided that certain background conditions of information and deliberation are met. In short: good innovation demands good democracy; and, especially in times of change, good democracy demands an expansive, energetic, constitutive role for law.
Regrettably, global innovations in science and technology over the last few decades have not kept pace with innovations in our imagination of democracy itself. Three tried and true systems of governance brought to bear on innovation – the market, regulation, and ethics – are all associated with models of democratic participation, but each is flawed in its techniques of representation: representing the range of public views; representing all affected parties; representing people at times when they can influence innovation; and, not least, representing the very nature of the actors who need to be represented.
In wrestling with these difficulties, makers of science and technology policy have propagated two strikingly different images of the human subject. One, tacitly built into the market and regulatory frameworks, is of citizens as capable of knowing and rationally processing information. The other, born out of frustration with public resistance to new technologies, is of ignorant and helpless publics, held back from reason not only by lack of information but by systematic cognitive biases. To some degree, the removal of value debates to ethics committees rests on and reinforces this reductionist view of popular incompetence.
For citizens in the emerging global order, this state of affairs calls for reclaiming the turf of democracy by reasserting who should be served by innovation and for what purposes. I have suggested that the resources of the law can be mobilized from the bottom up, to support constitutional imaginations that are at once more human and more humane than those that emerge from the alliance of science and technology with the state. Contracts, even virtual ones like the contract between science and society, need law to enforce them. Innovative publics around the world may look to the law to reinsert themselves into a social contract from which they have been strangely excluded.
1. Joe Nocera, ‘How India Avoided a Crisis’, New York Times, 20 December 2008.
2. Paul Krugman, ‘The Market Mystique’, New York Times, 26 March 2009.
3. Vannevar Bush, Science – The Endless Frontier (1945). The full text of the report and the accompanying transmittal letter, along with the president’s request, can be seen at http://www.nsf.gov/od/lpa/nsf50/vbush1945.htm.
4. Michael Gibbons, Camille Limoges, Helga Nowotny, Simon Schwartzman, Peter Scott, and Martin Trow, The New Production of Knowledge, Sage Publications, London, 1994.
5. For a comparison of these US-European differences in the context of biotechnology, see Sheila Jasanoff, Designs on Nature: Science and Democracy in Europe and the United States, Princeton University Press, Princeton, NJ, 2005.
6. Francis Fukuyama, The End of History and the Last Man, Free Press, New York, 1992.
7. Sheila Jasanoff, ‘In a Constitutional Moment: Science and Social Order at the Millennium’, in B. Joerges and H. Nowotny (eds.), Social Studies of Science and Technology: Looking Back, Ahead, Yearbook of the Sociology of the Sciences, Kluwer, Dordrecht, 2003, pp. 155–180.
8. Sheila Jasanoff, ‘Bhopal’s Trials of Knowledge and Ignorance’, New England Law Review 42(4), 2008, pp. 679–692.