"Interview with Langdon Winner: Autonomous Technology: Then and Still Now"

Interview with Langdon Winner: Autonomous Technology– Then and Still Now

[Published in: Autonomie und Unheimlichkeit: Jahrbuch Technikphilosophie 2020]

Herausgegeben von Dr. Alexander Friedrich, Prof. Dr. Petra Gehring, Prof. Dr. Christoph Hubig,

Dr. Andreas Kaminski und Prof. Dr. Alfred Nordmann, 6. Jahrgang 2020,

[The interview was conducted by way of an e-mail exchange with Alfred Nordmann between May 5 and July 10, 2019.]

NORDMANN: In 1977 you published your first book Autonomous Technology.1

[ 1 Langdon Winner: Autonomous Technology: Technics-out-of-Control as a Theme in Political

Thought, Boston 1977.]

Forty years later, autonomous technologies have become a favorite subject for philosophers,

engineers, and cultural critics. Vehicles, including drones, serve as primary exemplars,

but the self-learning algorithms of AI follow closely behind. There is a kind of morbid

fascination with cars that »decide« the philosophically popular trolley problem.

Others worry about attributions of responsibility when accidents happen and

mistakes are made or when self-learning systems develop very peculiar training effects. In

your book, »autonomous technology« appears as a matter of concern in that it refers

to »all conceptions and observations to the effect that technology is somehow out of

control by human agency«.2 You draw on Ellul to formulate what is at the same time

a philosophical challenge and a profound anxiety: »There can be no human

autonomy in the face of technical autonomy«.3

In contrast, today’s discourse appears to take technical autonomy pretty much for granted and seeks only to

manage its impact and implications. At the same time, it limits the question of technical autonomy to a

few cutting-edge technologies and does not include, for example, Charles Perrow’s

»normal accidents« or the alienation of labor in a factory setting. What do you make

of this – would you diagnose a radical disconnect between your questions back then

and today's discussions?

* * * * * * * * * * * *

LANGDON: Today’s conversations about ›autonomous technologies‹ explore themes and issues

that are both similar to and yet quite different from those in Autonomous Technology.

My primary concern back then was to find ways to pose questions about problematic

features of technology in their various modes and manifestations as they affect

modern politics. Hence, I examined a range of topics that seemed significant:

technocracy as governance by experts; technological determinism as a way of shaping

social outcomes through the sheer force of technical change; and technological politics as a collection of technology-related conditions that tend to transform and overwhelm conventional political structures and practices.

2 Ibid., p. 15.

3 Ibid., p. 16.

The background for these inquiries was the simple fact that (at the time) neither

political science nor political theory had much to say about what

was clearly a powerful presence in society and politics – a rapidly growing, many

sided, highly influential, dynamic technosphere. Most perspectives on technology

were fully framed by standard notions of ›progress,‹ expecting an inevitable flow of

improvements in living conditions. Why not just go with the flow? Thinkers who did

have interesting, contrary things to say about the matter were outsiders, some the

philosophers of technology and social critics of the mid-century – Lewis Mumford,

Jacques Ellul, Herbert Marcuse, Martin Heidegger, Rachel Carson, as well as a

collection of writers on popular culture who commented upon the personality-numbing

saturation of life by consumerism and mass media – Vance Packard and Betty

Friedan, for example. Also influential upon my thinking were works of science fiction

writing and film that often focused upon the loss of human autonomy to threatening

forces brought by science and technology. The underlying question in these stories

was usually: What if ...?

Writing about features of ›technology-out-of-control‹ involved a set of problems

that I hoped would arrive at a particular destination, one that emerges in the book’s

last chapter: a positive, forward looking, practical, democratic understanding of the

moral and political possibilities that technologies – new and old – present for

choices in public life. I hoped to suggest choices in the public realm and perhaps

even new possibilities for citizenship – participation in technological design and

deliberate choice about the configuration of limitations upon technological systems.

Such possibilities were not unthinkable at the time. The ›technology assessment‹

movement was very much involved with such prospects and even became a prominent

concern of the US Congress in the late 1960s and early 1970s.

In contrast, most of today’s discussions about emerging ›autonomous technologies‹

– self-driving cars, military drones, workplace automation, and the various

projects of so-called artificial intelligence – are predicated upon a much different project.

The attitude is to support and carefully monitor the fascinating technoscience

developments in the making. As the processes of Research & Development unfold

and reach culmination, a scholar may find opportunity to comment upon interesting

properties in the workings of the various devices, systems, algorithms, and other

novelties, identifying their interesting ethnographic, philosophical and ethical features.

But the basic understanding is one fully characteristic of twentieth-century

techno-think: Innovate first. Ponder the implications later.

In that light there is a consistent disposition to encourage potentially world-changing

developments to unfold and to offer erudite, retrospective (but likely irrelevant)

commentaries as the fascinating prospects emerge. The possibility that an ethically


or politically autonomous human presence might intervene in time to make a

difference is seldom if ever on the agenda. The idea that one might announce a firm »no«

to any attractive, innovative pathway is simply out of the question. In fact, it seems a

matter of pride with today’s techno-cognoscenti that the heretofore privileged

position of human beings may finally be overshadowed, even overthrown by the sheer

dynamism of various avenues in technoscience. This is quite different from the

prospective, active, critical modes of study, reflection, judgment and political action

some of us envisioned decades ago and might be explored even now.

* * * * * * * * * * * * *

NORDMANN: You refer to a fascination with the idea that human beings might lose their

privileged position to technology. This fascination plays a major role also in your book

when you discuss Jacques Ellul, Kurt Vonnegut, E.M. Forster, Karl Marx and others

who worry about human alienation as technology takes on a life of its own, when it

assumes features and functions of life. You discuss this under the heading of

technological animism – which relates a premodern mindset to our most advanced civilization.

One might say that this in itself exposes the so-called animism as illusory (as

you show for Marx who, in the final analysis, always knows who is in charge). One

might also argue that the modern subject is profoundly unsettled and that – with all

our technology and rational control – we haven’t quite arrived in the modern world

and haven’t quite managed to assume our role as autonomous subjects. Which is it?

* * * * * * * * * * * * * * *

LANGDON: Pre-modern conceptions of animism expressed the belief that souls were widely

distributed in the world, not only among humans but also in other living creatures and

perhaps even inanimate things. Even today there are believers in ›Panspiritism,‹ the

view that the entire world is infused with spirit, filled with consciousness in various

manifestations.

As an occasional feature in representations of technology, animism appears as a

way of describing the experience of material things that seem to have taken on lifelike

qualities or to have appropriated spaces and functions that would normally be

attributed to human beings. Notions of that kind, of course, are standard themes in

science fiction writing and movies – the unsettling presence of artificial things that

seem to have taken on ›a life of their own.‹ From the rebellious robotic female in

Fritz Lang’s Metropolis to the runaway computer in The Forbin Project to the

beautiful, conniving, artificially intelligent woman in Ex Machina, images of technological

animism have long been a mainstay in popular culture. The possibility that

impressive technical devices can exhibit (or seem to exhibit) extraordinarily lifelike

characteristics is an enduring presence in modern thought. At present such possibilities

have become central, practical topics for research and development within the

algorithms of computer science as well as a wide range of projects in digital

technology and robotics. Works of that sort shed new light upon what is actually an ancient

theme.


It is true that surprises and troubles attributed to technologies that seem to have

become ›autonomous‹ can often be traced back to the persons and groups that are ›in

charge.‹ Marx describes the kinds of mechanical apparatus that fully claimed the

bodies and minds of factory workers in his day. As he explains such calamities in his

theory of Capital, it’s clear that the owners of the means of production bear full

responsibility for what happens. That’s a perfectly good explanation, as far as it goes.

While Jacques Ellul gives full credit to Marx for the depth and rigor of this insight,

he argues that Marx had not gone deeply enough into the varieties of subjectivity

and social formation involved in what Ellul terms ›la technique.‹ Crucially at stake

here, he argued, is the fascination with projects aimed at achieving demonstrable

improvement – more efficient, more productive, rigorously measurable outcomes in

whatever endeavor is at hand. At one point he refers to F.W. Taylor’s quest for the

›one best way‹ as a good, brief summary of mentalities and initiatives involved.

Thus, the kinds of subjects enmeshed in the arrangements of capitalist production

are also subjects deeply engaged with wide ranging projects in ›la technique.‹ The

›autonomy‹ of technique takes shape as people willingly set aside crucial commitments

that previously inspired their thinking, activity and institutional arrangements.

They embrace technical improvement as their central goal, life’s ultimate mission in

whatever domain of practice they pursue – industrial production, agriculture,

government administration, higher education, sports, sexual fulfillment, you name it.

In sum, Marx situates technology within an unfolding history of class struggle.

Ellul views much the same terrain as a story about the onset of a vast, insidious

cultural infection. In either version, what emerges is a highly unsettled way of life, one

that casts a shadow upon the prospects for what one might call the ›autonomous

subjects‹ of modernity.

How to escape the predicaments that Marx and Ellul describe in their different

ways? For me that is not merely an abstract, philosophical question. As a teacher of

budding scientific and technical professionals, I’m again and again struck by how

little sense of personal autonomy is part of today’s education, our modern ›Paideia.‹

Students hope to master the fundamentals of, say, one of the branches of engineering,

get a ›good job,‹ come up with some lucrative ›innovation‹ and live happily

ever after. Very often they simply lack any sense that they might reflect upon, talk

about, and seek to realize an independent, personal understanding of life’s possibilities.

Thus, the autonomy of technology often comes to the fore when ascertaining

people’s sense of basic priorities. But the intellectual and moral autonomy of today’s

students, employees and citizens? Not so much.

* * * * * * * * * *

NORDMANN: You reject, I take it, that technological animism and a re-enchantment of the

world issue from technological developments as such; you attribute them rather to

a kind of feeble-mindedness or failure on the part of us technological critics.

Accordingly, you go further than our Jahrbuch Technikphilosophie which is dedicated this


year to the topic »Autonomy and the Uncanny«. Ours is an attempt to move the

discussion of drones and autonomous vehicles beyond ethical quandaries and legal

attributions. Your book doesn’t stop there, however. While there is a chapter dedicated

to technological complexity, it is wedged between a critique of technocracy and a

call for epistemological luddism. Indeed, at the end of your book you thematize a

threat to human autonomy that arises from the simple fact that we have to live with

all our past choices in our humanly-built world: »even if one seriously wanted to

construct a different kind of technology appropriate to a different kind of life, one

would be at a loss to know how to proceed. There is no living body of knowledge, no

method of inquiry applicable to our present situation that tells us how to move any

differently from the way we already do«.4 Akin perhaps to Paul Feyerabend’s

›counter-induction‹ you recommend epistemological luddism as a heuristic. We can assume

a free relation to technology only by questioning the unquestionable and imagining

also the destruction of our taken-for-granted technological infrastructures –

which, however, puts us at risk of being excluded from the club of so-called

›reasonable people‹. In the age of participatory design, responsible development, ethics on

the laboratory floor, and the co-creation of science and society, is the call for

epistemological luddism obsolete or more important than ever?

* * * * * * * * * * * * * * *

LANGDON: The overall setting for my impish suggestion of ›epistemological luddism‹ is

located within ambitious calls for a substantial, even sweeping restructuring of modern

technology-centered societies as an answer to critical evaluation of the political and

environmental ills that philosophical reflection and historical examination reveal. At

the time there were a good number of proposals for the reform and reinvention of

existing technological societies from the bottom up, including those of liberal social

critics, neo-Marxist thinkers and countercultural visionaries. I mention the proposals

of Paul Goodman, Herbert Marcuse, Murray Bookchin, and others who had offered

steps toward seemingly promising programs of thoroughgoing reconstruction. Even

the American arch-technocrat Glenn T. Seaborg had recently offered the reassuring

advice, »Technology is not a juggernaut; being a human construction it can be torn

down, augmented and modified at will«. All of this made perfectly good sense in the

realm of the imagination, but I wondered how realistic such visions were in the most

obvious, everyday sense.

Rather than sketch a utopia of my own, I laid out three or four general »useful

proposals.« I won’t summarize those ideas here. But self-critical of my own tendency

toward excess, I went on to observe that »these proposals have overtones of

utopianism and unreality, which make them less than compelling.« After some further

rumination I suggest that »One must take seriously the fact that there are already

technologies occupying the available physical and social space and employing the

available resources.«

4 Ibid., p. 328.

My suggestion, therefore, is to try taking some tiny, modest steps – the

epistemological luddism experiment. »The idea is that in certain instances it may be useful to

dismantle or unplug a technological system in order to create the space and opportunity

for learning.« As the device or system is removed, even if only briefly, what

jumps forth as significant? What does such learning suggest as regards any large

scale or small changes in technology related patterns of living?

I do not say it explicitly in the book, but the basic thought here was, »OK, big

shot. You’re proposing to map a thorough reconstruction of the technological society

in quest of a more favorable set of social, political and environmental patterns.

That’s excellent! But let’s start with a more modest test of concept. For a short period

of time – a week, a month or so – you and I will disconnect from a clearly

crucial part of the overall techno-system and adapt our perspectives and activities to

this condition and see what problems and possibilities come into view.«

I go on to sketch some of the situations in which the experiment might be done in

a deliberate, controlled way or even ones in which such opportunities arise by accident.

My basic understanding is that ›we‹ – you and I and everyone in societies similar

to our own – are completely – even hopelessly – dependent upon a whole host of

technological devices and systems, such that doing without even one of them for a

short while is an extremely taxing prospect. What does that recognition suggest

about the grand visions of technology criticism and, if I may, the intricate insights

and suggestions commonly offered by philosophers of technology?

In fact, over the years I have asked my students to do the epistemological luddism

experiment in various university classes. I ask them to identify a technology upon

which they depend in their everyday comings and goings and to disconnect from it

for just one week. They should notice what happens, and write down their experiences

so we can discuss their findings. I also ask them please not to do anything that

would affect their overall safety and wellbeing.

Items of disconnection that students have chosen include: mechanical transportation,

clock time, prepared food, artificial lighting, synthetic fabrics, and other material

features of everyday student life. The results have been fairly uniform. Most students

fail to complete the experiment altogether and come to recognize their utter

dependence upon the devices they’ve chosen. This becomes a teachable moment in

our conversations. Since many of my undergraduates are engineers, one can ask

them about the conditions of intelligibility, control, adaptability, and even addiction

that the devices they themselves are making will present to eventual end users.

In a larger perspective, I often take note of instances of technological breakdown

and the lessons that might be derived from them. This is fairly tricky business

because the social patterns that emerge from somewhat similar cases are far from


uniform. The electrical blackout in New York City in 1965 was widely reported to have

evoked cooperative, generous responses from the populace, as people apparently felt

the need to offer aid and comfort to each other in a time of crisis. In contrast, the

1977 New York power outage resulted in widespread looting, violence and other

varieties of criminal behavior.

My sense is that an incessant series of epistemological luddism experiences will

likely characterize coming decades of climate emergency. As Earth’s biosphere and

modernity’s major technological systems enter periods of high stress and

breakdown, which varieties of understanding, which philosophies, will offer guidance and

solace? Are we – you and I and world societies overall – any better equipped to

learn from such episodes today than in earlier times?

So far the signs are not especially promising. Although there is much excited

prattle about the wonders of ›disruption‹ – »Move fast and break things,« as they say

in Silicon Valley – the prevailing worldview is still deeply rooted in beliefs about a

stable, slowly unfolding, ultimately benevolent continuity. In my view, that’s an

existential condition to which humanity is no longer entitled.

The Democratic Shaping of Technology: Its Rise, Fall and Possible Rebirth

By: Langdon Winner

[A talk delivered online for the Society for Social Studies of Science, August 2020]

I want to offer my sincere thanks to the John Desmond Bernal Prize committee and to the Society for Social Studies of Science as a whole for the expression of recognition and support the award conveys. 

I'm especially pleased to be honored along with Sharon Traweek whose contributions I greatly admire.

What I'd like to do today is to trace a shadow from the past, a shadow that falls upon troubles of today, notably in my own country. It's a long story that I'll have to compact into a short space.

The starting point is the late 1960s and the rise of a new political movement with some interesting features. 

Of course, that decade was famous for the rise of movements of many kinds -- the civil rights movement, environmental movement, anti-Vietnam war movement, the counterculture, and others.

But there was another movement, a kind of insurgency, less disruptive in the streets, less prominent in the headlines, but one that promised to be highly consequential in the  long run.  Its most common name was “Technology Assessment.”

Among the locations in which the project took shape were two American organizations not usually thought of as hotbeds of radicalism -- the National Academy of Sciences and the National Academy of Engineering.

Ambitious reports scoping out the need for and possible methods of "Technology Assessment" were published by the two academies in 1969.

Both argued that there was an urgent need to recognize that new science-based technologies would have profound influence upon the shape of society's future -- its basic institutions and practices. Changes of this kind involved both the promise of positive change and occasional prospects of risk and danger.

 For that reason, it made sense to study such projects and prospects thoroughly and perhaps to steer the trajectories of social change -- the institutions, practices and basic principles their formidable presence would involve.

Both Academies went further to argue for the need to create institutions and practices to study and scope out the various prospects, making well informed recommendations about how sweeping socio-technical change should be shaped and managed.

 In fact, by the late 1960s interest in projects of this kind was also a living concern in the U.S. Congress, then largely controlled by the Democratic Party. 

A key player was Representative Emilio Q. Daddario, Democrat from Connecticut. Among academics, physicist Harvey Brooks of Harvard University was a prominent mover and shaker.

Finally, in 1972 Congress created a new policy unit, the Office of Technology Assessment, whose purpose was to engage in research, analysis, and scoping of alternatives about emerging science-based technology, work that would help advise Congress on matters of funding, new regulation and the like.

In my view, the creation of the OTA was a logical step in the unfolding of New Deal social and political liberalism that had emerged in the 1930s as shaped by President Franklin Delano Roosevelt.  

This approach to public priorities can be regarded as a softer, more modest version of the institutions of social democracy taking shape in Europe during the mid-20th century. Obviously, its ambitions were far less radical than the programs of New Left politics erupting in the U.S. during roughly the same period. Nevertheless, the basic focus of technology assessment expressed a truly urgent, enduring question. Could emerging science-based technologies be shaped and steered in ways that enhanced future social, economic and political patterns compatible with the common good, while avoiding the risks and dangers that science-based technologies sometimes involve?

After its founding, the Office of Technology Assessment went to work with a small staff and a modest budget. Its primary mission was to provide advice to Congress on policy issues that involved science-based technologies broadly considered.

Over the next two decades the OTA sponsored and helped organize research and deliberation across a wide range of topics. It supported hundreds of studies, some conducted by its rather small staff, others by academics in universities and research organizations.

During roughly the same period of time one sees the continuing rise of academic  research, thinking and teaching in the social sciences and humanities  -- the various fields of STS. 

Of course, that is a long and fascinating story in itself with the founding of 4S as a major focus of interaction within new hybrid disciplines. 

In that light the existence of the OTA and its widespread projects served as evidence that the rise of science and technology studies was not merely of academic significance.   Many believed that the institutions of American government – federal, state and local – would be responsive to the new agendas of research and practice in STS.

As a scholar moving out from political science and interested in the political features of technologies of various kinds, I watched the rise of technology assessment as an intellectual and policy movement and its occasional connection to the growing field of STS.  These were truly exciting developments!

As a political radical I took note of the rather conventional, technocratic disposition of technology assessment and wondered how it might be shaped in ways more open to democratic voices and proposals. On occasion I also suggested that the writings of thoughtful critics of contemporary technological societies – Herbert Marcuse, Jacques Ellul, Lewis Mumford, and others – ought to be prominently featured in the conversation.

Rather than leave the study, speculation and proposal making to Congress people and technical professionals, why not open it up more broadly?  Why not open it to a democratic citizenry as a whole?

A good many others shared these concerns. 

But over the years of its existence the OTA remained a modest enterprise, conducting research and sponsoring inquiries on computing, energy, transportation, technologies of industrial production and the like -- publishing a steady stream of studies that were basically intended as information and advice for policy making in Congress. One can say the horizons of the Office of Technology Assessment were ultimately constrained by its attachment to its legislatively determined role.

In that light, proposals from friendly outsiders that the OTA open itself to the participation of non-experts, to the views and voices of ordinary citizens never made much headway.  The goal of moving from representative democracy to genuinely participatory democracy in this arena was forever frustrated.

Over the years of its existence, the OTA  sponsored a continuing sequence of meetings in which scholars in relevant fields of research and thinking would come together to speculate about the dimensions of problems and possibilities in emerging technologies.  

I attended a couple of such gatherings and took note of their basic format and dynamics.

Looking back on it now, there was a feature of these meetings that seemed rather insignificant at the time but which foreshadowed major, unfortunate changes on the longer term horizon.

Sitting within an inner rectangle of tables were well known researchers, often university scholars or think tank experts, who would take turns discussing the significant topics at hand. At one meeting I attended in Washington we spent the day discussing the privacy of data collected by increasingly powerful computers and the extent to which everyday people would have control over the information stored in government and corporate mainframes.

At symposia of this sort there was also a large surrounding rectangle of observers in chairs who'd listen in and occasionally join the conversation.  These were often people from business firms or other organizations with a financial or particular policy interest in the topics under discussion.   Their presence strongly suggested that research and deliberation in technology assessment needed to be carefully watched and probably constrained by the priorities of real world business operations.      

The attitude of people in the outer rectangle was consistently that, yes, technology assessment is all well and good, but the Congress should not seek to contradict or limit the emerging plans and projects of money-making enterprises.  At least that is my recollection of the general mood of the discussions from the outer ring at these OTA gatherings.

Well, within its modest resources and limited framework for participation, the OTA survived into the early 1990s. Over the years the organization published hundreds of technology assessment reports, many of which are still available online and well worth reading today.

Some of us who were sympathetic with the basic purposes and methods of the organization continued arguing that its mission ought to be expanded to include the involvement of ordinary citizens in activities of research, debate and advice giving.

For example, the creation of the Loka Institute by Richard Sclove, a small organization of which I was a member, sought to supplement the OTA with independent forums for citizen participation. Efforts in much the same vein took shape in numerous conferences on Participation in Design over the years. Even today we find concerns of this sort expressed in Doug Schuler’s efforts to organize experiments in Civic Intelligence.

Alas, powerful influences of a much different complexion took shape in the late 1970s and 1980s with the rise of Reaganism and neoliberalism, approaches to economics, society and politics that promoted the dominance of markets and so-called “free enterprise” in technology-centered developments.

In that vision, of course, the idea of an open, democratic politics of technology assessment – within the Congress or among everyday citizens more broadly – was often dismissed as an unwelcome intrusion upon the workings of American capitalism.

“Let the market work,” was the general norm.  The idea of markets was (and is), of course, effectively embodied in the machinations of large corporations, global banks, brokerage houses, and hedge funds.

The conservative radicalism of this position was finally realized in one of the measures advanced by the Republican Party following its Congressional sweep in the election of 1994. As Newt Gingrich took power as Speaker of the House, literally one of his very first steps was to end funding for the Office of Technology Assessment altogether, a step that brought its final and perhaps permanent abolition in the U.S.

This turn of events is regrettable not because the OTA was hugely powerful and influential during its years of operation. In fact, the organization was never especially consequential.

The significance of the little organization lies in the kinds of horizons for engagement in public affairs its very existence brought to light.  It was those horizons that Newt Gingrich and his Republican constituency were eager to snuff out before they spread any further.

It's worth noting that several kindred organizations for Technology Assessment have survived and flourished in several European countries. My hypothesis would be that nations grounded in more robust, responsive frameworks of social democracy are, as a general matter, ones that afford the kinds of civic engagement technology assessment requires.

In contrast, attempts to expand the American framework of New Deal liberalism proved to be too weak for bold new initiatives to succeed.  Much the same limitations are obvious in the failure of the Democratic Party to create a system of single payer health insurance for all Americans while most other developed nations offer healthcare of that kind as a routine matter. 

One significance of the utter destruction of the OTA  was that it prefigured the economic and political forces characteristic of the toxic American technological, economic, social, and political configurations that have taken shape during the past three decades.

These patterns include ghastly concentrations of wealth in the hands of a very small portion of the nation’s populace, along with staggering levels of economic, social, and political inequality, stagnant wage levels, and the effective cessation of earlier patterns of upward social mobility.

In my view, such tendencies are glaringly evident in the forms of monopoly power that have taken shape in today's technology-centered industries, notably in the rise of the great digital platforms of Google, Amazon, and Facebook.

To this I would add, although you may disagree, that to a great extent these trends are promoted, praised and justified by what has become a dominant ideology among much of the populace, including much of the scholarly community, an ideology that I like to call The Cult of Innovation.

Within that mentality innovation has become the central goal and obsession, something that our students revere and that we in fields of STS often reinforce in our writings and teaching.  Thus, we often describe and interpret the cultural features of innovative technoscience, but less frequently criticize its basic rationale.

By the same token, it's worth remembering, perhaps as a lovely moment of nostalgia, that many STS programs in higher education were originally justified by the ways they upheld the teaching of professional ethics and the cultivation of a sense of public responsibility among science and engineering students.

Along with the promise of their programs of research, that was often how programs of STS were originally promoted in American colleges and universities. 

To a considerable extent, in my view, such ideas have recently faded, replaced by educational objectives more compatible with the conservative, neoliberal, high tech persuasions that have risen to widespread prominence with Silicon Valley luminaries as cherished models to emulate.

To a great extent that project of teaching professional ethics and commitment to the common good has been supplanted by the promotion of visions of excitement, wealth and glory centering upon notions of "innovation." 

During the first session of one of my classes a couple of years back we went around the room to get a sense of how the students, mainly engineering majors, saw their education.  One of them proudly announced, "I want to be the world's first trillionaire."

I thought to myself, "Oh, God, I'm in the wrong business!"

At the level of scholarly research and publishing in STS and adjacent fields it seems to me that much of the work in recent years has focused upon uncovering what are ultimately oppressive patterns within today's high tech giants. 

The result is a growing mountain of books and articles that explain what has happened but which leave the human community powerless to do much of anything about it.

An example is Shoshana Zuboff's book The Age of Surveillance Capitalism, an excellent description and analysis of the kinds of oppressive power resident in today's enormous digital platforms.

While I recognize the brilliance and relevance of studies of this kind, I wonder about the underlying mood of passivity they sometimes express.  Zuboff, for example, worked for well over a decade studying the emergence of the kinds of surveillance she strongly denounces at her book’s conclusion. 

It’s worth asking: why didn't we hear any warnings from her while there might still have been opportunities to challenge and seek to limit these ghastly aggregations of power?  

My fantasy is that years ago Zuboff might have rented one of those sound trucks with loudspeakers on the roof and begun driving through the streets of Boston and Cambridge warning the citizenry:  "Listen, people!  The digital networks you're using are stealing and marketing the most intimate details of your lives.  Rise up now!  Organize to stop them!"

Of course, that was not going to happen.  The warning was delivered only when the book came out.

The dominant strategy in discourse of this kind is to warn the public of significant dangers and problems after they’ve become firmly rooted, patterns likely beyond any conceivable remedy.  As Hegel observed in The Philosophy of Right, “The owl of Minerva takes its flight only when the shades of night are gathering.”  In our time the old bird would probably be shot on sight.

I mentioned at the beginning of this talk that I’d be tracing a dark shadow that falls on the present moment.

Within the narrow historical perspective sketched here, the shadow involves the eventual elimination of the vision, possibilities and practices clustered under the term technology assessment. 

In the larger view, however, what matters now are tendencies in the American Republic to set aside activities and institutions of systematic, intelligent, science-based public policy making in favor of perspectives concerned with money-making and very little else.

Of course, this is fully evident in the response of Donald Trump and his inner circle to the crisis presented by Covid-19.  Faced with that terrible situation, Trump staunchly refused to consider the alternatives proposed by experts in public health and relevant domains of scientific knowledge about micro-organisms, vaccines, and social policies that might be used to quell the spread of the pandemic.

Trump’s focus and that of his favored advisors were strictly attuned to quack remedies and to prospects for reviving the economy by reopening crucial money-making sectors. 

The shocking consequences of this rejection of policy wisdom in domains of contemporary science and technology can now be measured in hundreds of thousands of deaths, tens of millions of jobs lost, and the destruction of countless domains of social and economic vitality.

These consequences do not stem solely from Trump’s ignorant, obstinate personality, but also from a view of alternatives that emerges from the distinctive goals of neoliberal capitalism, ones that set aside attempts to fashion well-grounded public policies in favor of money-seeking schemes.

That is why I regard the abrupt destruction of the modest little OTA in 1995 as a precursor of the policy maladies and social disasters evident in the Covid-19 emergency and the nation’s inability to chart an intelligent, coherent response.  The demise of the OTA was a little tremor that foreshadowed the massive political earthquakes that rock America today.

I regret that my description of what was originally a highly promising development – technology assessment – has taken such a dreary turn. Hence, I feel obligated to conclude with a suggestion for a more hopeful path.

Looking forward to the election of 2020, one modest but promising, practical proposal would be for the Democratic Party to revive and strengthen the Office of Technology Assessment.  In its new life this might include not just elected Congresspersons and their professional staff, but also participatory citizens' councils for each of the 435 Congressional districts, venues where everyday folks, perhaps selected at random, could engage in research and organize public debates about the horizons of present and future technological and social change.

While activity of this kind would not by itself eliminate the kinds of oligarchy and disdain for democratically responsible policy making characteristic of Trump and his gang, it might help awaken a sense that there are fruitful alternatives to the dreary forms of public life we experience in America today.

I realize that my observations today will probably provoke a good number of questions.

Alas, as far as I know, these pre-recorded remarks do not enable such interaction. 

But I want to thank you for listening and look forward to our future conversations.

Technological Investigations: Wittgenstein’s Liberating Presence

Langdon Winner

Abstract: Although Ludwig Wittgenstein did not offer a fully developed philosophy of technology, his writings contain an approach to inquiry that can be employed to explore situations in which people contend with technological devices and systems. His notions of ‘language games’ and ‘forms of life’ as well as the dramatic, imaginary dialogues in his later writings offer ways to transcend the sometimes rigid theoretical frameworks in contemporary technology studies. Especially as applied to rapidly moving infusions of computing and digital electronics in contemporary society, Wittgenstein’s writings offer possibilities for fresh insight and even some practical alternatives.

Techné: Research in Philosophy and Technology ISSN: 1091-8264 22:2 (2018): 296–313

DOI: 10.5840/techne2018111485

Key words: Wittgenstein, language games, technology, digital, political theory

Introduction

Early in his Philosophical Investigations Wittgenstein describes the interactions between two persons, A and B, perhaps a builder and an assistant, situated at a worksite moving blocks or ‘slabs’ of stone. He writes:

If you shout ‘Slab!’ you really mean: ‘Bring me a slab’—But how do you do this: how do you mean that while you say ‘Slab!’ Do you say the unshortened sentence to yourself? And why should I translate the call ‘Slab!’ into a different expression in order to say what someone means by it? And if they mean the same thing—why should I not say: ‘When he says ‘Slab’ he means ‘Slab’’? Again, if you can mean ‘Bring me the slab,’ why should you not be able to mean ‘Slab!’?—But when I call ‘Slab!’ then what I want is that he should bring me a slab! (Wittgenstein 1958, 8e–9e)

This is one of my favorite passages in all of philosophical literature, a provocative contribution I would put right up there with, say, the Allegory of the Cave in Plato’s Republic or Rousseau’s Discourse on the Arts and Sciences. It first attracted my attention in an undergraduate political theory seminar in Berkeley during the mid-1960s where a group of us read and discussed Wittgenstein’s Philosophical Investigations. Soon thereafter the ‘Bring me a slab’ passage became a dialog we’d spontaneously recite in coffee shops and apartments around town, where it would come to life as if it were a comic scene from a piece of absurdist drama—perhaps something from Harold Pinter, Samuel Beckett, or Edward Albee.

“Slab!”

“Do you mean bring me a slab?”

“Yes! And while you’re at it, how about a cup of espresso!” “Espresso?”

“Of course. Espresso! When I say ‘Espresso!’ I mean ‘Bring me an Espresso!’”

Now, I am definitely not a Wittgenstein scholar, although I have always enjoyed reading his work. Mainstream philosophical writings that draw upon and contend with Wittgenstein’s thought are, in my view, typically engaged in highly focused, rigorous argument to clarify, refine, expand upon, or contradict points in his writing. My own use of his work, however, follows a somewhat different pathway. For along with its other contributions, Wittgenstein’s writing can be fruitfully deployed in situations where elaborate, fixed, sometimes arcane frameworks of conceptual analysis and social science theory tend to pose a barrier to curiosity and to fruitful inquiry about the subject at hand. Following the style and spirit of his later writings, one enjoys the possibility of disruption and liberation.

1. Stultifying Frameworks

An occasion of that kind arose as I was studying the kind of political science taught in U.S. graduate schools of the 1960s and 1970s. In its worst moments (and there were a great many of them) the mode of discourse and research was a weary and sometimes explicit echo of the school of logical positivism that arose in Wittgenstein’s Vienna. Using largely abstract categories to anchor their speculation, scholars built logical structures of propositions to depict patterns of social and political behavior. Upon that basis, serious inquirers were supposed to move forward to conduct ‘empirical research’ to test the theories proposed. Thus, roughly speaking, the kinds of philosophical thought that Wittgenstein’s later writings sought to challenge, undermine, and replace bore a strong ‘family relationship’ to the positivist, behavioral social science of the post-World War II decades. Logical propositions in social science theories identified states of affairs to be investigated by rigorous, often quantifiable, observations of socio-economic structures and patterns of political behavior.

At the time many young political scientists in the making came to believe that the prevailing conceptual and theoretical frameworks for the study of politics were flawed, inadequate at a fundamental level. Looking at prospects for democracy, for example, the discipline would describe, measure and theorize about pluralistic structures—interest group formations and interactions and the like. But, strangely enough, such elaborate, precise models recognized no living presence for the activities and experiences of citizenship in what were ostensibly democratic societies. Yes, there were interest groups interacting, elaborately scripted elections, and labyrinthine bureaucratic fixtures of public administration. But aspirations to achieve conceptual and methodological rigor within the science of politics had ultimately produced a kind of rigor mortis. The intellectual frameworks that the discipline so scrupulously mapped turned out to be lifeless structures.

2. Wittgenstein and Political Theory

At Berkeley at the time a key person who offered an alternative to the dominant social science models was political theorist Hannah Pitkin. Her seminars and book Wittgenstein and Justice offered ways to engage Wittgenstein’s thinking to enliven central questions in political thought. While the details of Pitkin’s approach are too elaborate to summarize here, a general sense of her enterprise is clearly expressed in this sentence from the book:

The meaning of ‘justice’ is not—or not primarily learned by observing the shared characteristics of those phenomena called ‘just’ but by observing the shared features of speech situations in which the family of words is used, their verbal and worldly contexts. (Pitkin 1972, 179)

While much of Pitkin’s work involves close examination of the language of political theory and of important strands of contemporary political discourse, one of the book’s crucial illustrations comes from the dramatic dialogue between Socrates and Thrasymachus in Plato’s Republic on the question, ‘What is justice?’ She writes:

Socrates speaks from within the framework of what is supposed to be true of phenomena called ‘just,’ namely that they must involve having and doing what is appropriate to him. He accepts the intention, the conventions, of the word at face value, and reaffirms them. Thrasymachus rejects these, or ignores them, and looks independently on his own at the common features of phenomena other people call ‘just.’ (Pitkin 1972, 170)

Of course, as Pitkin notes, Thrasymachus’s view that ‘justice’ is nothing more than ‘the interest of the stronger’ is outrageous, precisely because it reflects an understanding contrary to what the word clearly means in everyday speech and what such meanings reveal. “He is not formulating a phrase more or less synonymous with the word ‘justice’ but making a kind of sociological observation about the kinds of things which people call ‘just’ or ‘unjust.’” (Pitkin 1972, 170) She argues that careful inquiries into the use of crucial terms in moral and political discourse—both in ancient Greece and today—need to take into account how they are sometimes used in misleading, befuddled, hypocritical, mendacious, and otherwise highly problematic patterns of expression. In that light, concepts and questions that involve freedom, obligation, power, community, citizenship, government and the like can be fruitfully studied with the help of the later Wittgenstein’s understanding of language and its attention to the colorful peculiarities of everyday speech.

For better or worse, in insight or ignorance, what I took from Wittgenstein and Pitkin was a way to break through elaborate, weighty intellectual frameworks that are perhaps more a hindrance than a help in moving forward with one’s inquiries. An obvious point of contrast in political theory was John Rawls’s A Theory of Justice, a very fine piece of work in many ways, but one that ultimately seeks its insights through creating and applying an intricate, abstract framework of concepts, the ‘veil of ignorance’ and the like, to illuminate its topic. The friendly counsel I took from Wittgenstein via Pitkin was that rather than theorize about important questions and concepts—justice, liberty, community, authority, representation, and so forth—in an abstract, logical, detached, top-down manner, one could follow Wittgenstein’s advice to observe what people are doing, listen to what they are saying, and launch one’s philosophical and theoretical inquiries from there. The playful, imaginary, sometimes even bizarre dialogues in his writing shed light on his thinking on language games, language domains, the grammar of words in actual use, and their role in recurring patterns of activity—‘forms of life.’ This orientation in political theory involves its own kinds of diligence and rigor, especially in the study of language used to talk about social and political experience. But its hallmark involves attending carefully to how words are used in everyday speech rather than trying to impose meanings from an elevated, privileged, well-buttressed position in quest of an exquisite clarity. Very briefly, that is the approach I recommend here.

3. Technologies and Everyday Language

At about the same time I began to appropriate Wittgenstein to loosen and eventually jettison the bonds of political science positivism, my thinking began to focus upon questions about technology in human affairs. I had the strong intuition that technologies not only reflected political dynamics, but that the systems, devices, and ways of thinking within the realm of technology actually contained the stuff of politics in palpable, forceful, meaningful ways. The problem was, however, that if one were to embark upon that kind of study, the existing literature and prevalent modes of thinking at the time were of very little help. Yes, there was a large body of scholarship on the invention and development of new devices and systems, processes of technological change, the history of industrial society, and the growth of economic prosperity. In short, most of the description and analysis directed one along well-worn paths of the classic progress narrative—stories about the grand and glorious march of material and social improvement.

Turning to philosophical discussions, much of the writing at the time depicted technologies as tools that were fundamentally ‘neutral.’ The discussion often centered upon judgments about ‘use.’ Were the available machines and technical systems used well or poorly? Which criteria and which processes of judgment could be deployed in judging and shaping these essentially neutral technological applications? A typical example in this vein was that the same nuclear technology that had produced the possibilities for mass destruction in the atomic bomb could also be ‘used’ to dig new irrigation channels in the mode of President Eisenhower’s Atoms for Peace program. Although notions of ‘use,’ technical ‘neutrality,’ ‘systems analysis,’ and the like were entirely serviceable at some level, they imposed severe limits upon the kinds of imagination that might illuminate inquiries about technology in human affairs. Thus, as regards the power released from splitting atoms, discussions about ‘use’ tended to obscure rather than illuminate the most urgent questions facing the community of nations (Mumford 1946).

One pathway out of this realm of intellectual strictures was to embrace radical thinkers—men and women—who cast doubt upon the prevailing wisdom about technology—Lewis Mumford, Jacques Ellul, Herbert Marcuse, Hannah Arendt, Ivan Illich, Rachel Carson, Paul Goodman, and even Mary Shelley. That route led to my writing about Autonomous Technology, a variety of notions about Technics-out-of-Control. I asked: What are the interesting and troubling issues within that genre and how can one begin to explore them? What is revealed in widespread reports about technology run amok in science fiction, Hollywood movies, and certain domains of social thought?

At the same time I was beginning to move along a parallel path, one which, in retrospect, might be regarded as that of Wittgensteinian ‘technological investigations.’ Of course, Wittgenstein himself offers only fleeting commentaries on technology as such, which are usually marshaled in service of particular arguments about meaning, knowledge, judgment, and other fundamental questions. In the Philosophical Investigations he mentions machinery, locomotives, clocks, a lamp, a microscope, a cogwheel, and dozens of other useful objects. His remarks on these matters do not offer an ambitious philosophy of technology in the manner that Marx, Heidegger, Marcuse, Mumford, Ellul, and others have produced, although it is possible that an enterprising thinker might leverage Wittgenstein’s writings to produce a comprehensive vision of that kind. What Wittgenstein does suggest to scholars studying technology and social life, however, is a highly fruitful comparison of tools and words. Just before the section that contains his discussion of ‘Bring me a slab,’ he observes:

Think of the tools in a tool box: there is a hammer, pliers, a saw, a screwdriver, a rule, a glue pot, glue, nails and screws.—The functions of words are as diverse as the functions of these objects. (Wittgenstein 1958, 6e)

Somewhat later in the book he takes note of the fact that language is not fixed and stable, but continually involved in change—“new types of language, new language-games, as we may say, come into existence and others become obsolete and get forgotten” (Wittgenstein 1958, 11e). By implication, the items in the toolbox of language could potentially be seen as similar to the artifacts in the technological realm, where new devices and techniques continually appear while others fall into disuse and lose their practical and cultural significance.

A relevant illustration here is the relatively rapid turnover in techniques of sound recording and reproduction during the past century—from 78-rpm records, to small 45-rpm disks, to 33-rpm vinyl LPs, to reel-to-reel tapes, eight-track tapes, cassette tapes, compact disks, and today’s online music streaming on Spotify, Pandora, YouTube, and other ‘platforms’ on the near and distant horizon. For each of these technical formats there are distinctive ways of talking about the devices, their common mode of operation, the problems they present to users, and the music itself.

There are also colorful metaphors that spread from the techniques of musical recording into the everyday speech of society at large. Very often the commonplace terms, metaphors, and ‘language games’ from one period of time do not transfer to later ones. Today if one used the once common phrase ‘It’s in the groove,’ most people younger than a certain age simply would not know the specific technical context of reference and possibly would not even understand what the term means in a metaphorical sense.

I recall an argument I had with the noted jazz critic Ralph J. Gleason, a mentor for many of us in the early days of Rolling Stone Magazine. Bemoaning the excessive power the record companies were gaining at the time, I argued that these behemoths could sell just about any piece of musical trash to LP-buying audiences. Gleason replied, “No, not at all. The hits are in the grooves!” In other words, if the musicians were making music of genuine quality, it would be present right there in the small circles on the physical vinyl disc—the grooves—spinning on one’s turntable, regardless of any subsequent manipulations by corporate advertisers and promoters. While Ralph and I ultimately did not agree on the economic issue we were debating, his metaphor shed light on a key point of contention.

An early opportunity to employ my barefoot Wittgensteinian perspective in technology studies came in a research group headed by political scientist Todd La Porte, whose field of study was organization theory, and whose questions centered upon large-scale socio-technical systems, especially those that involve serious risks and absolutely ‘must not fail.’ To further research of this kind, we set out to study ‘complexity.’ The basic idea was that one needed to develop elaborate theoretical models of complex systems and then study them through careful observation—once again, the positivist, behavioral social science method at work.

At a certain point I decided that, as my part of the project, rather than think and write in a manner that would abstractly and analytically stipulate what complexity means, I would just listen to and informally observe what people say when confronted with complex phenomena. I did not explicitly say to myself, “I’m shadowing Wittgenstein here,” but looking back, that’s pretty much what I was doing. Eventually I came to conclude that people often use the word ‘complex’ within a language game, as it were, the purpose of which is to offer an excuse, an apology for stopping the conversation. People would say, “That’s a very complex question.” Or “That’s too complex to go into now. I’ll come back to it later.” In contrast, my colleagues on the project assumed that ‘complexity’ was the name, a noun, for an extensive set of observable socio-technical configurations. The word suggested a need to begin modeling and explaining systems with numerous parts and pieces and a wide variety of interconnections. That was the path to clarity and understanding. Their hope was to advance a project within social science focused upon rigorous studies of ‘organized social complexity.’

I countered that one might also notice that ‘complex’ is an adjective that signals a psychological state that often shuts down discussion and inquiry—complexity as perplexity. Perhaps the relevant ‘form of life’ involved could be called ‘coping’—dealing with the everyday experience of large, complicated, artificial, man/machine systems. One could begin to understand such predicaments by noticing the language games and grammars widely associated with them. My contribution to the collection of articles, Organized Social Complexity, explored ideas along those lines (La Porte 1975). The essay was written (I’m almost embarrassed to say) using a string of numbered paragraphs in the style of Wittgenstein’s Philosophical Investigations (Winner 1975).

I take no credit for it, but eventually this dimension of complexity as the psychological experience of perplexity did emerge prominently in social scientific and technical studies of complex, risky systems. For example, in his report on the Three Mile Island nuclear power plant accident, sociologist Charles Perrow calls attention to the debilitating confusion spawned by devices meant to warn workers of trouble in the system—several loud buzzers going off, rows of flashing lights, and other visual displays giving readings that said, in effect, “Oh, oh. We’re in a lot of trouble now.” All of this forcefully presented the first task for the technicians in charge: how to shut down those damned sonic and visual alarms so that they might begin to focus upon addressing the emergency at hand. Reports of this kind emerged in Perrow’s interviews over the years with workers at various sites where technological and organizational breakdowns had occurred. In his book Normal Accidents a prominent piece of advice is to design socio-technical systems in ways that anticipate and seek to minimize the experiences of anxiety and panic as accidents emerge (Perrow 1984).

In this light, a helpful Wittgensteinian inquiry applied to technology can begin by paying careful attention to what people are saying and when they say it, which language games are in play and which patterns of activity—forms of life—are involved (Winner 1986). This is an alternative to assiduously laboring to impose an elaborate theoretical framework to shape and constrain one’s inquiries. As a comment in one of Wittgenstein’s notebooks concludes, “The method of philosophy is to listen to all voices” (Wittgenstein 1995, 87).

4. Google Mind and Phaedrus

Over the years I have used my probably naïve Wittgensteinian approach to explore ‘technologies as forms of life’ within a variety of technology-related topics: educational technology, engineering ethics, and similar matters. A recent excursion of this kind played with the language games characteristic of today’s obsession with ‘innovation’ (Winner 2017). What are people talking about as they go on and on about ‘innovation’?

I’ve also found this approach useful in my interactions with professors of various branches of engineering and with students preparing for careers as technical professionals. In a program on Design, Innovation and Society where I sometimes teach, I occasionally employ Wittgensteinian rhetorical moves to disrupt a particular understanding of the sources of creativity—the view that inventors and designers are a special breed of person equipped with a certain idiosyncratic brilliance that the vocation of design can cultivate and eventually realize in practice. One can compare this belief to the ancient Greek myth that Athena was born when she popped out of the skull of Zeus fully formed and ready for action. As regards their own creativity, my students and colleagues in engineering and disciplines of design often embrace something close to this vision—the idea of Athena springing fully formed from the godhead. From that point of view the basic tendency is to go more deeply inside oneself to discover the magic of creativity within one’s own skull.

In contrast, I suggest that a project in design might begin by observing a site or situation, noticing its features, equipment, social interactions, and problems. Notice what people in a particular situation are doing and, especially, listen to what they are saying, the distinctive grammar, the language games, in their conversations. Upon that basis one can begin speculating about what some key problems are and which improvements or inventions might be offered in response. Among other contributions, this approach can help dispel an unfortunate tendency among some designers and architects—a penchant for self-absorption that distracts them from noticing the textures of everyday human activity relevant to their projects. To some extent what I’m advocating is similar to those who practice user-centered, ethnographic methods in design.

As regards thinking about technologies and forms of life in the present moment, a very large, significant domain of interesting developments lies in the sphere of digital devices and systems, the Internet, social media, and the like. This is a zone of activity and expression ripe for a great many Wittgensteinian technological investigations, ones akin to the ethnographic approaches sometimes used in technology studies these days. The inquirer proceeds by noticing what people are saying and claiming about the technical devices they develop and put to use, which problems arise in these interactions, and so forth.

A significant example today involves people’s involvement with digital networks and the basic software that shapes everyday patterns of individual and social interaction. Imagine that you are in a classroom or in a conversation on campus or around town, and a fairly difficult question comes up. A common first response is—“Let’s Google it!” Out comes the laptop, tablet, or smart phone and the search engine goes to work. College students are especially adept at this and always seem pleased when an answer suddenly appears. Of course, there is a vocabulary and grammar associated with this practice, terms such as ‘clickthrough,’ ‘keyword,’ ‘auto-tagging,’ ‘exact match,’ ‘quality score,’ and so forth. Perhaps my judgment is unfair, but it appears that a good many people (especially young people) have grown utterly dependent on the computer and Google’s fabulous algorithms.

If I ask my students to offer a thoughtful comment upon a piece of reading assigned for class that day, they often feel the need to consult their laptops to find the answer. If I ask, “What comes to mind? What does that passage say to you?” a good number of them simply blank out. It appears that to some extent memory, thinking, imagination, and conversation have been replaced, perhaps even crippled, by excessive reliance upon the search engine. Of course, for an educator, it’s distressing. An appropriate name for this phenomenon might be ‘Google Mind.’ The forms of inquiry fundamental to an education are supplanted by a relationship to powerful algorithms and knowledge on the Net. In my worst moments, I fear that if I were to ask, “What is 7 times 9 on the multiplication table?” the answer would likely be, “Wait a minute. Let me Google that for you.”

The problem and dilemma here is far from new. In fact, it mirrors the ancient controversy in Plato’s dialog Phaedrus, where Socrates offers a stinging critique of the invention and use of a practical art—that of writing. Summarizing a story from an old Egyptian legend, he argues: “If men learn this [that is, reliance on writing], it will implant forgetfulness in their souls; they will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but from external marks.” Continuing with the story, Socrates insists that a teacher’s disciples who learn from the written word “seem to know much, while for the most part they know nothing, and as men filled, not with wisdom but with the conceit of wisdom, will be a burden to their fellows” (Plato 1963, 275 a–b).

Awareness of a possible growing dependence of Internet users upon the Google search engine and how it affects memory has recently emerged as a topic for psychological research. A summary of findings in four such studies concludes that “when faced with difficult questions, people are primed to think about computers and that when people expect to have access to information, they have lower rates of recall of the information itself and enhanced recall for where to access it” (Sparrow et al. 2011). Some of my own misgivings about technology and social life at present concern impressions of a wider authority that springs from digital systems. At a deeper level young people may be learning that, beyond the specific marvels of Google, Big Data is the real magic and that The Cloud holds the key to all knowledge. A number of university educational and research programs now seem to have that premise at their very core. One might ask: Does the Cloud have politics? If so, could it be an emerging politics of passivity, a ‘form of life,’ in which power resides deep within the network of servers but remains essentially inaccessible to those trying to leverage its practical advantage?

5. Forms of Life and Digital Devices

Another interesting site for Wittgensteinian technological investigations could be the growing presence and use of the smartphone in today’s society. Through the wonders of microelectronics one has the power of a camera, powerful computer, music player, video player, web browser, GPS system, and much more. Increasingly prominent in directing such power are the applications or ‘apps,’ many of which seek to reorganize and recode everyday practices that used to be done in different ways in different places. On the list of everyday activities involved in reshaping of this kind one finds making agreements, negotiating social arrangements, looking for work, evaluating people, products, and services, solving small and larger practical problems, and hundreds of other undertakings configured on the Net. “Oh, there’s an app for that,” people often say, almost without regard to the need at hand.

As one example, there are now a good many ingenious babysitting apps—Bambino, Bubble, UrbanSitter, and others. These have begun to replace reliance upon nearby networks of grandparents or neighborhood teenage girls and boys who offered their help as caregivers for small children while the parents were away. In Uber-like style one can now book a total stranger—well screened, presumably safe and certified—to work for several hours taking care of one’s kids. As the owner of the RockMyBaby app observes, “These parents are using their phone or tablet for everything else, so why not childcare?” (Reidy 2017) Sure, why not?

For those who ponder the reweaving of social activities and relationships in this period of history, the inquiries in Wittgenstein’s later writing offer a philosophical ethnography useful in interpreting new ‘forms of life’ as they arise and comparing them to earlier varieties of cultural practice. Researchers have begun to study a wide range of patterns of online sociability, mainly through the use of surveys and focused interviews. Thus, in recent years the Pew Research Center has tracked evolving patterns of dating and romance mediated by smartphone apps and websites of various kinds. While the Pew reports offer broad-scale results from polling data, e.g., that “15% of U.S. adults have used online dating sites or mobile dating apps,” far less attention is paid to what people actually have to say about these experiences—how they describe and interpret the pleasures, problems and dilemmas experienced within and around the online realm (Smith 2016a). What a well-tuned Wittgensteinian inquiry might contribute in such cases is an ability to listen carefully to what people are saying and take note of what emerging patterns in ordinary language reveal about social changes taking shape. Survey results from Pew Research and similar organizations will sometimes include brief quotes from the people they have interviewed, a way to add color to the raw data in the charts and tables. Thus, one Pew study quotes some teenage Net users:

“Like the best thing about texting is that you can think about what you’re going to say. And if you don’t like it, you can always get rid of it until the end. With talking, you can’t really do that.”

“You might be catfished.”

About the best ways to respond to photos on Instagram another teen exclaimed: “Like all of them. Like, like, like, like, like all the pictures. You’re the right cute factor.” (Pew Research Center 2015)

Seldom among Net researchers, however, is there much close attention to or probing of the ways that turns of phrase, neologisms, and creative slang accompany the use of digital devices and systems as they infuse social and cultural practice. When I say “Catfished!” I mean . . . (For the record, ‘catfished’ indicates that one is being lured into a relationship by means of a fictitious online persona.)

An even more consequential sphere in which digital electronics affects everyday human relationships and communications has to do with the fascinating, even troubling, collection of developments presented by the coming of Artificial Intelligence, robots and automation. A steady flow of inquiry on issues of this kind, including research at Oxford and MIT, points to the distinct possibility that one third to one half or more of the workforce will soon be replaced by algorithms, robots and ‘smart’ mechanical devices of various kinds (Frey and Osborne 2013). Recent opinion surveys indicate that everyday people are aware that such changes are on the horizon but tend to believe (for whatever reason) that their own line of work will not be affected (Smith 2016b).

As one reads the various reports, surveys and projections about A.I., robots and the future of employment, it’s revealing to ask: Who’s involved in conversation and thinking about employment and the new technologies? What are they saying? How are they talking about it? At present the conversation about the issues appears to be limited to those in research labs, high tech firms, and university computing centers. Almost never do conversations about these widely anticipated transformations involve everyday working people. Instead we ‘round up the usual suspects’: Bill Gates, Elon Musk, Stephen Hawking, Silicon Valley CEOs, as well as business school bigwigs, and solicit their erudite views (Larson 2017). But the populace most likely to be affected—millions of present-day workers along with young people preparing to enter the workforce—is almost never asked to reflect or comment upon the jarring social earthquakes that could arise in the not too distant future. This is particularly evident in the U.S.A., where labor unions have been driven nearly to extinction during the past forty years, leaving no organized channels to give voice to the concerns of ordinary workers. The everyday life experiences, language domains, language games, and accustomed forms of life of such people are seldom brought into focus.

An interesting possibility recently discussed on forward-looking technology sites on the Net is that the elimination of full-time jobs in the economy and their replacement by A.I., robots, automation, and part-time work—the so-called ‘gig economy’—is already strongly related to patterns of living (forms of life?) in which video games become not a mere diversion in one’s off hours, but the primary focus of experience, activity, and fulfillment in a person’s life. In other words, one manages to scratch together enough income through whatever means to support oneself at a minimal level, while one’s primary reality becomes the excitement, certainty and satisfaction of a novel form of life—gaming. As online writer and devoted gamer Frank Guan comments: “For all the real and purported novelty of video games, they offer nothing so much as the promise of repetition. Life is terrifying; why not, then, live through what you already know—a fundamental pulse, speechless and without thought?” (Guan 2017).

6. Beyond S.T.S. Frameworks

One way to appreciate possible applications of the later Wittgenstein in studies of technology is to notice that they offer a helpful pathway for inquiry within the field of discourse analysis, one that focuses on what people say and do as they interact with technical things. Somewhat comparable domains of inquiry include the discourse-analytical methods in linguistics, rhetoric, cultural studies, gender studies, media studies, and other fields of scholarship, all of which have made important contributions (Tannen et al. 2018). A notable and welcome development in recent years is that serious philosophical applications of Wittgensteinian study of discourses about technology have begun to emerge. The writings of Mark Coeckelbergh, among others, now exercise a leavening influence in technology studies, a scholarly field long dominated by sociologists and anthropologists (Coeckelbergh 2017; Coeckelbergh and Funk 2018).

As I’ve indicated, my own pathway is primarily that of political theory and its central questions, ones about order, justice, power, community, and the like, especially in the Western tradition. Taking aim at the relevance of technology for such questions, I sometimes employ Wittgensteinian probes as a way to clarify my thinking or to engage in occasional disruptions when it seems that important discussions have gotten too rigid or simply stuck. An example here concerns widely echoed claims that the Internet has become a fertile seedbed for the revitalization of democracy (Benkler 2006). While this conclusion seemed fairly plausible in the early days of the twenty-first century, it is now vexed by mounting evidence that Net platforms have fallen under the dominance of media monopolies and the power of billionaire oligarchs who control them. Beyond that are increasing signs that the communications of everyday people on the Net are infected by a wide range of discourse pathologies. Widely reported symptoms include: a preference within social media for repeating ‘fake news’ over verifiable facts; ‘computational propaganda’ targeted in ways that undermine communications basic to national elections (Howard et al. 2018); excesses of flaming, trolling, bullying, and other forms of speech that seem far more compatible with authoritarian politics than with the reasonable debates and deliberations of a healthy democracy (Harris 2016). In short, the language and social practices of the Net have become a fertile domain for techno-political Wittgensteinian research and diagnosis. What are people saying, in what settings and to what effect? Why is so much of today’s Internet discourse openly, politically toxic?

My early, naïve impression that Wittgenstein’s own carefully examined ‘games,’ language games, are often quite playful and wonderfully bizarre has found confirmation in the interpretations of scholars of philosophy and literature. Yes, they argue, the ‘Bring me a slab’ passage and similar ones in the Investigations and later writings bear a remarkable similarity to the amusing perplexities in the dramas of Samuel Beckett and playwrights in the theatre of the absurd. By the same token, the direct appropriation of Wittgensteinian dialogues and sensibilities within the plays of Tom Stoppard, e.g., Rosencrantz and Guildenstern Are Dead and Dogg’s Hamlet, Cahoot’s Macbeth, shows a distinct resonance between philosophy and drama, namely the invocation of everyday speech and its quandaries to illuminate basic questions about the human condition. Hence, Martin Puchner argues that Wittgenstein moved beyond notions of play in the game of chess, with its narrowly bounded logic, to explore playing within far wider domains of speech and activity, including those prevalent in the theatre. He writes:

His language plays are scenarios in which certain problems are staged, that is, placed in the mouths of characters during particular scenes. In these dramatic experiments, different versions of a certain problem or scene can be tried out, questions posed, and conclusions drawn. (Puchner 2015)

Puchner goes on to argue that this avenue in Wittgenstein’s thinking is not an abandonment of his earlier concerns for logic and abstraction, but rather a different, more promising route into the very same territory:

The only difference is that it is not to be sought in theoretical statements, but rather in the simplicity of language plays. Their clarity throws light on a messy world from which it abstracts and onto which it imposes its own logic. (Puchner 2015)

My attempt here has been to suggest that Wittgenstein’s playful, dramatic approach might be applied within today’s field of technology studies. In contrast, if one picks up a journal in science and technology studies (S.T.S.), reads a new book, or consults the catalog of topics for one of the periodic conferences, what one finds these days is the dominance of schools of thought—theory-centered research, thinking and publishing that map out central concerns, categories and approaches within an established framework. Among these are social construction theory, cultural studies of technology, actor network theory, innovation research, and the like. At least in my reading of this material—and perhaps I am not being sufficiently generous—the emphasis in much scholarly writing in S.T.S. at present, regardless of the particular substantive topic in question, tends—first and foremost—to parade the theoretical schema, the favored specialized terms, the well-branded framework in which it is pitched. From there the results that roll out are often entirely predictable. Yes, the work is sometimes interesting, credible and even valuable. But in my reading much of the effort is devoted primarily to demonstrating that the scholar is a member in good standing of a particular theory club rather than to exploring important topics in new, revealing ways. To a great extent, in my view, much of the scholarship in science and technology studies has become a taxonomy factory, obscure and inward looking, replete with categories that describe ‘constructions,’ ‘networks,’ ‘actants,’ and other structural features, devoid of insights that might enliven public debate about technology or anything else (Felt et al. 2016).

Hence, to newcomers interested in studying technology and human affairs, I’m inclined to ask: What do you have to say about these matters? Are you listening carefully to what others in relevant locations are talking about and seeking to understand them? Please don’t just tell us where your project is situated within this or that received intellectual agenda or its pedigree within a particular ‘theory’ mafia of which you are a card-carrying member. What does your own life experience, your sense of the world, your education and preparation, what you have observed, heard, pondered and done, your vision of technology and social relations, have to offer people who might benefit from hearing your thoughts?

In sum, I’d suggest taking down the prepackaged, increasingly standardized, IKEA-like intellectual scaffoldings of today’s S.T.S., going out into the world and finding one’s own voice. For those who launch forth on journeys of that kind, the writings and spirit of Ludwig Wittgenstein are bound to be wonderful companions.

References

Benkler, Yochai. 2006. The Wealth of Networks: How Social Production Transforms Markets and Freedom. New Haven, CT: Yale University Press.

Coeckelbergh, Mark. 2017. Using Words and Things: Language and Philosophy of Technology. Abingdon: Routledge. https://doi.org/10.4324/9781315528571

Coeckelbergh, Mark, and Michael Funk. 2018. “Wittgenstein as a Philosopher of Technology: Tool Use, Forms of Life, Technique, and a Transcendental Argument.” Human Studies 41(2) (June): 165–91. https://doi.org/10.1007/s10746-017-9452-6

Felt, Ulrike, Rayvon Fouché, Clark Miller, and Laurel Smith-Doerr, eds. 2016. The Handbook of Science and Technology Studies. Cambridge, MA: MIT Press.

Frey, Carl, and Michael Osborne. 2013. The Future of Employment: How Susceptible Are Jobs to Computerisation? Oxford University Engineering Sciences Department and Oxford Program on the Impacts of Future Technology. http://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf. Accessed September 15, 2018.

Guan, Frank. 2017. “Why Ever Stop Playing Video Games.” Vulture (February 22). http://www.vulture.com/2017/02/video-games-are-better-than-real-life.html. Accessed September 15, 2018.

Harris, Tristan. 2016. How Technology Hijacks People’s Minds — from a Magician and Google’s Design Ethicist. http://www.tristanharris.com/2016/05/how-technology-hijacks-peoples-minds%e2%80%8a-%e2%80%8afrom-a-magician-and-googles-design-ethicist/. Accessed September 15, 2018.

Howard, Philip, Samuel Woolley, and Ryan Calo. 2018. “Algorithms, Bots, and Political Communication in the US 2016 Election: The Challenge of Automated Communication for Election Law and Administration.” Information Technology & Politics 15(2): 81–93. https://doi.org/10.1080/19331681.2018.1448735

La Porte, Todd, ed. 1975. Organized Social Complexity: Challenge to Politics and Policy. Princeton, NJ: Princeton University Press.

Larson, Quincy. 2017. “A Warning from Bill Gates, Elon Musk, and Stephen Hawking.” freeCodeCamp (February 19). https://medium.freecodecamp.org/bill-gates-and-elon-musk-just-warned-us-about-the-one-thing-politicians-are-too-scared-to-talk-8db9815fd398. Accessed September 15, 2018.

Mumford, Lewis. 1946. “Gentlemen: You Are Mad!” Saturday Review of Literature (March 2): 5–7.

Perrow, Charles. 1984. Normal Accidents: Living With High-Risk Technologies. New York: Basic Books.

Pew Research Center. 2015. “Teen Voices: Dating in the Digital Realm” (October 1). http://www.pewinternet.org/online-romance/. Accessed September 15, 2018.

Pitkin, Hanna. 1972. Wittgenstein and Justice: On the Significance of Ludwig Wittgenstein for Social and Political Thought. Berkeley: University of California Press.

Plato. 1963. “Phaedrus.” In The Collected Dialogues of Plato Including the Letters, ed. Edith Hamilton and Huntington Cairns. Bollingen Series. Princeton, NJ: Princeton University Press.

Puchner, Martin. 2015. “Wittgenstein’s Language Plays.” Philosophy and Literature 39(1): 107–27. https://doi.org/10.1353/phl.2015.0007

Reidy, Tess. 2017. “Babysitting Apps Boom as Parents Bid to Reclaim Free Time.” The Guardian (February 25). https://www.theguardian.com/lifeandstyle/2017/feb/25/parents-babysitting-apps-boom-childcare. Accessed September 15, 2018.

Smith, Aaron. 2016a. 15% of American Adults Have Used Online Dating Sites or Mobile Dating Apps. Pew Research Center. http://www.pewinternet.org/2016/02/11/15-percent-of-american-adults-have-used-online-dating-sites-or-mobile-dating-apps/. Accessed September 15, 2018.

Smith, Aaron. 2016b. Public Predictions for the Future of Workforce Automation. Pew Research Center. http://www.pewinternet.org/2016/03/10/public-predictions-for-the-future-of-workforce-automation/. Accessed September 15, 2018.

Sparrow, Betsy, Jenny Liu, and Daniel Wegner. 2011. “Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips.” Science 333 (August 5): 776–78. https://doi.org/10.1126/science.1207745

Tannen, Deborah, Heidi Hamilton, and Deborah Schiffrin, eds. 2018. The Handbook of Discourse Analysis, 2nd ed. New York: Wiley-Blackwell.

Winner, Langdon. 1975. “Complexity and the Limits of Human Understanding.” In Organized Social Complexity: Challenge to Politics and Policy, ed. Todd La Porte, 40–76. Princeton, NJ: Princeton University Press.

Winner, Langdon. 1986. The Whale and the Reactor: A Search for Limits in an Age of High Technology. Chicago: University of Chicago Press.

Winner, Langdon. 2017. The Cult of Innovation: Its Colorful Myths and Rituals. https://www.langdonwinner.com/other-writings/2017/6/12/the-cult-of-innovation-its-colorful-myths-and-rituals. Accessed September 15, 2018.

Winner, Langdon. n.d. “Beyond Techno-Narcissism: Self and Other in the Digital Public Realm.” Unpublished manuscript.

Wittgenstein, Ludwig. 1958. Philosophical Investigations, 3rd ed. New York: Macmillan.

Wittgenstein, Ludwig. 1995. Wiener Ausgabe, vol. 3, p. 87. Berlin: Springer.