Recent-ish publications

Review of 'Bitstreams: The Future of Digital Literary Heritage' by Matthew Kirschenbaum

Contribution to 'Archipiélago Crítico. ¡Formado está! ¡Naveguémoslo!' (invited talk: in Spanish translation with English subtitles)

'Defund Culture' (journal article)

How to Practise the Culture-led Re-Commoning of Cities (printable poster), Partisan Social Club, adjusted by Gary Hall

'Pluriversal Socialism - The Very Idea' (journal article)

'Writing Against Elitism with A Stubborn Fury' (podcast)

'The Uberfication of the University - with Gary Hall' (podcast)

'"La modernidad fue un "blip" en el sistema": sobre teorías y disrupciones con Gary Hall' ['"Modernity was a "blip" in the system": on theories and disruptions with Gary Hall']' (press interview in Colombia)

'Combinatorial Books - Gathering Flowers', with Janneke Adema and Gabriela Méndez Cota - Part 1; Part 2; Part 3 (blog post)

Open Access

Most of Gary's work is freely available to read and download, either here in Media Gifts, in Coventry University's online repository PURE, or in Humanities Commons

Radical Open Access

Radical Open Access Virtual Book Stand

'"Communists of Knowledge"? A case for the implementation of "radical open access" in the humanities and social sciences' (an MA dissertation about the ROAC by Ellie Masterman). 

Community-led Open Publication Infrastructures for Monographs (COPIM) project

Wednesday
Oct 13, 2010

Affirmative media theory and the post-9/11 world (part 2)

(The following is a slightly revised version of a text first published on 21 September, 2010, by the Creative Research Centre at Montclair State University. Part 1 of 'Affirmative Media Theory and the Post-9/11 World', again first published by the Creative Research Centre, is available below.)

 

To be sure, there’s something seductive about the thought of producing the kind of big idea or constructive theoretical discourse that is able to capture and explain how the world has changed and become a different place after 9/11. Let’s take just the most frequently rehearsed of those examples with which we are regularly confronted: that the awful events at the World Trade Center and Pentagon on that day in 2001 are connected to the ‘war on terror’, the ‘axis of evil’, the ‘clash of civilizations’, the introduction of the PATRIOT Act, the wars in Afghanistan and Iraq, the abuses in Abu Ghraib, indefinite detention at Guantanamo Bay, the so-called ‘global economic crisis’ that began in 2008, the election of Barack Hussein Obama in 2008, the continuing debate over the place of Muslims in US society - even the ‘return to the Real’ after the apparent triumph of (postmodern theories of) the society of the spectacle, the simulacrum and the hyper-real.

Yet when it comes to deciding how to respond to events and narratives of this sort – which we must, no matter how much and how often they are framed as being ‘self-evident’ – do we not also need to ask: why do big ideas and constructive theoretical discourses appear so compelling and refreshing at the moment, in these circumstances in particular? What exactly is the nature of this sense of frustration and fatigue with thinkers and theories – let’s not call them deconstructive – whose serious understanding of, and strenuous engagement with, antagonism, ambiguity, difference, hospitality, responsibility, singularity and openness, renders them wary of too easily dividing history into moments, movements, trends or turns, and cautious of creating strong, reconstructive, thirst-quenching philosophies of their own? From where does the desire spring for what are positioned, by way of contrast, as enabling and empowering systems of thought? Why here? Why now? And, yes, what is the effectivity of such ideas and discourses? What do they do? How can we be sure, for instance, that they don’t function primarily to replicate the forces of neoliberal capitalist globalisation?

To repeat: none of this is to claim big ideas and ‘constructive, explanatory’ discourses aren’t capable of being extremely interesting and important. Of course they are (especially in the hands of philosophers as consistently creative, challenging and sophisticated as Badiou, Hardt and Negri, Stiegler and Žižek). Yet how are we to decide if the idea of the post-9/11 world, persuasive though it may be, is viable, ‘capable of functioning successfully’, of being ‘able to live’ with the ‘enigma that is our life’, if this over-arching concept is so easily incorporated – in these ‘particular circumstances’ especially – into inhospitable, violent, controlling discourses or totalizing theoretical explanations (or posturing displays of male power and intellect)?

Let me raise just a few of the most obvious issues that would need to be rigorously and patiently worked through:

How is the use of the ‘post’ in this prepositional phrase to be understood? Is it referring to that which comes afterwards in a linear process of historical progression? Is the post meant to indicate some sort of fundamental fracture, boundary or dividing line designed to separate the pre-9/11 world from what came afterwards? Or is the post being used here to draw attention to that which, in an odd, paradoxical way comes not just after but before, too, just as ‘post’ is positioned before ‘9/11 world’ in the phrase ‘post-9/11 world’? In other words, does post-9/11 mean a certain world has come to an end, or is it more accurate to think of 9/11 and what has happened since as a part of that world, as that world in the nascent state?  Is the concept of the post-9/11 world referring to the coming of a new world, or the process of rewriting some of the features of the old? 

What is meant by ‘9/11’? Whose 9/11? Which 9/11? Arundhati Roy, writing in September 2002, is able to locate a number of places around the world for which the 11th of September has long held significance:

Twenty-nine years ago, in Chile, on the 11th of September, General Pinochet overthrew the democratically elected government of Salvador Allende in a CIA-backed coup...

On the 11th September 1922, ignoring Arab outrage, the British government proclaimed a mandate in Palestine, a follow up to the 1917 Balfour Declaration [which]... promised European Zionists a national home for Jewish people...

It was on the 11th September 1990 that George W. Bush Sr., then President of the US, made a speech to a joint session of Congress announcing his Government’s decision to go to war against Iraq.

(Arundhati Roy, ‘Come September’, The Algebra of Infinite Justice (London: Flamingo, 2002) pp.280, 283, 288-289.)

Of course your website indicates that by 9/11 you mean the terrible attacks on the World Trade Center in New York in 2001. I have no wish to detract from the pain and suffering associated with those events. The question arises nonetheless: on what basis can we take the decision to single out and privilege those tragic events over and above the others Roy identifies that also took place on 9/11? How can we do so, and how can we speak of what you refer to as a ‘post-9/11 generation’, without being complicit in those processes by which the attacks in New York have already been appropriated by a range of social, political, economic, ideological, cultural and aesthetic discourses for reasons to do with security, surveillance, biopolitics, justifying the wars in Afghanistan and Iraq and so on (discourses which can make the experience of writing about 9/11 fraught, to say the least)?

This is not to imply a decision to privilege 9/11/01 can’t be made. It’s merely to point out that such questions need to be addressed if this decision is to be taken responsibly and the implications of doing so for the ways in which we teach and write and act assumed and endured.

As for the last part of this phrase (you’ll have gathered there’s nothing ‘inherently’ viable about this concept for me), is it possible to begin to creatively think and imagine using the idea of a post-9/11 ‘world’ without universalizing a singularly US set of events? After all, even the formulation 9/11, with its echo of 911, seems very North American: in the UK we often tend to refer to September 11.

Yes, the Twin Towers were a symbol of World Finance Capital. Yes, the attacks on them were mediated around the world in ‘real time’. Yes, an article in Le Monde published the next day declared ‘We are all Americans! We are all New Yorkers’. (Has the phrase ‘post-9/11 world’ been chosen deliberately to draw attention to American-led neoliberal globalisation? It’s certainly difficult to propose alternatives to either with regard to the world’s social imaginary without risking being made to appear fanatical or extremist.) Nevertheless, on what basis can we justify totalizing or globalizing these specific events in this manner? And how can we do so without inscribing 9/11 in the logic of evaluation inherent to neoliberalism’s audit culture (‘in the sense that the Holocaust’s singularity and horror would “equal” that of 9/11’ perhaps, but that of Hurricane Katrina or the Deepwater Horizon oil rig explosion would not); or participating in the way 9/11 has often been made to overshadow other world historical events in the mythic imaginary: the dropping of the atomic bomb on Hiroshima on 6 August, 1945; Nixon’s decoupling of the US dollar from the gold standard in 1971 (which can be seen as one of the roots of the current economic crisis); the gas disaster at the Union Carbide factory in Bhopal on December 2-3, 1984; the 1999 alter-globalisation protests in Seattle; the 2003 invasion of Iraq, recently described by the ex-head of MI5 in the UK as having ‘radicalised a whole generation of young people... who saw our involvement in Iraq... as being an attack on Islam’ – and that’s to name only those events that come most readily to mind?

Even if we confine ourselves to acts of non-state terrorism, there’s the Oklahoma City bombing of 19/4, 1995; the Madrid bombing of 3/11, 2004; London 7/7; and the attacks in Mumbai of November 2008.  Why would we not try to creatively think and imagine using the concept of a post-2-3/12 world? A post-19/4 world? A post-7/7 world?

Whose post-9/11 world is this exactly? Who wants this post-9/11 world?


Sunday
Sep 19, 2010

Affirmative media theory and the post-9/11 world (part 1)

(The following is a slightly revised version of a text first published on 2 September, 2010, by the Creative Research Centre at Montclair State University. Part II of 'Affirmative Media Theory and the Post-9/11 World', again first published by the Creative Research Centre, is now available above.)

 

Thank you for the invitation to contribute to your born-digital, dynamic, nimble, open-source, collaborative space at Montclair State University. I’m very happy to join the conversation of your Creative Research Centre and take part in your symposium, ‘The Uses of the Imagination in the Post-9/11 World’.

You’ve asked me to address ‘the inherent viability of the concept of the “post-9/11 world”’ and explain what this ‘over-arching concept’ means to me.  Perhaps you’ll forgive me, then, if I begin by telling you a little about my own research. This currently involves a series of born-digital, open, dynamic, collaborative projects I’m provisionally calling ‘media gifts’. Operating at the intersections of art, theory and new media, these gifts employ digital media to actualise critical and cultural theory. As such, their primary focus is not on building a picture of the world by establishing what something is and how it exists, before proclaiming, say, that we’ve moved from the closed spaces of disciplinary societies to the more spirit- or gas-like forces of the societies of control, as Gilles Deleuze would have it.

Instead, the projects I’ve been working on over the last few years – which include a ‘liquid book’, a series of internet television programmes, and an experiment that investigates some of the implications of internet piracy through the creation of an actual ‘pirate’ text – are instances of media and mediation that endeavour to produce the effects they name or things of which they speak.


The reason I wanted to start with these projects is because they function for me as a means of thinking through what it means to ‘do philosophy’ and ‘do media theory’ in the current theoretico-political climate.  I see them as a way of practicing an affirmative media theory or philosophy in which analysis and critique are not abandoned but take more creative, inventive and imaginative forms. The different projects in the series thus each in their own way experiment with the potential new media technologies hold for making affective, singular interventions in the here and now.


The possibility of philosophy today 

Having said that, I want to make it clear I’m not positioning the affirmative media theory I’m endeavouring to practice with these media gifts in a relation of contrast to earlier, supposedly less affirmative, theoretical paradigms.

 

(A desire to avoid positioning the affirmative media philosophy I’m attempting to practice in a relation of contrast to previous theoretical paradigms is one of the reasons I’ve taken the decision not to explicitly relate the media gifts series to the so-called affective turn. For an example of the latter, see Richard Grusin’s recent book on affect and mediality after 9/11, where he writes:

one of the attractions of affect theory is that it provides an alternative model of the human subject and its motivations to the post-structuralist psychoanalytic models favoured by most contemporary cultural and media theorists. Affectivity helps shift the focus from representation to mediation, deploying an ontological model that refuses the dualism built into the concept of representation. Affectivity entails an ontology of multiplicity that refuses what Bruno Latour has characterized as the modern divide, variously understood in terms of such fundamental oppositions as those between human and non-human, mind and the world, culture and nature, or civilization and savagery. Drawing on varieties of what Nigel Thrift calls ‘non-representational theory’, I concern myself with the things that mediation does rather than what media mean or represent.  

(Richard Grusin, Premediation: Affect and Mediality After 9/11 (Basingstoke: Palgrave Macmillan, 2010) p.7)

Another of my reasons for not relating the media gifts series to affect theory lies with the fact that, as I have already intimated, I’m not so interested in developing ontologies or ontological models of understanding the world.

Still another is that, just as such affect theory attempts to do away with oppositions and dualisms, so it simultaneously (and often unconsciously and unwittingly) seems to repeat and reinforce them – in the case of the passage from Grusin above, most obviously between before and after 9/11, between representational and non-representational theory, and between post-structuralist psychoanalytic models and affect theory itself. And that’s without even mentioning the way Grusin’s book is constantly concerned with providing a representation of the logics and practices of mediation after 9/11, and with explaining what things such as the global credit crunch mean in this context in a manner it’s frequently difficult to differentiate from the kind of cultural and media theory he positions his book as representing an alternative to:

remediation no longer operates within the binary logic of reality versus mediation, concerning itself instead with mobility, connectivity, and flow. The real is no longer that which is free from mediation, but that which is thoroughly enmeshed with networks of social, technical, aesthetic, political, cultural, or economic mediation. The real is defined not in terms of representational accuracy, but in terms of liquidity or mobility. In this sense the credit crisis of 2008 was a crisis precisely of the real – as the problem of capital that didn’t move, of credit that didn’t flow, was seen as both the cause and consequence of the financial crisis. In the hypermediated post-capitalism of the twenty-first century, wealth is not representation but mobility.
(Richard Grusin, ibid., p.3))

 

In a discussion with Alain Badiou that took place in New York in 2006, Simon Critchley constructs a narrative of this latter kind when describing the ‘overwhelmingly conceptually creative and also enabling and empowering’ nature of the former’s system of thought.  For Critchley, the current situation of theory is characterised, on the one hand, by ‘a sense of frustration and fatigue with a whole range of theoretical paradigms: paradigms having been exhausted, paradigms having been led into a cul-de-sac, of making promises that they didn’t keep or simply giving some apocalyptic elucidation to our sense of imprisonment’; and, on the other, by a ‘tremendous thirst for a constructive, explanatory and empowering theoretical discourse’. It’s a thirst that Badiou’s philosophy apparently goes some way toward quenching. It’s ‘refreshing’, Critchley declares.

This desire for constructive, explanatory and empowering theoretical discourses of the kind offered not just by Badiou, I would propose, but in their different ways by Michael Hardt and Antonio Negri, Bernard Stiegler, Slavoj Žižek, and others, too, is of course understandable. I can’t help wondering, though, if such discourses aren’t also a manifestation, to some degree at least, of what Germaine Greer has characterized as male display (although the books Greer is thinking of are Malcolm Gladwell’s Outliers and Levitt and Dubner’s Freakonomics, rather than Badiou’s Being and Event or volumes by the likes of Nicolas Bourriaud and Marc Augé that put forward theories of the altermodern and supermodernity):

 

Every week, either by snail mail or e-mail, I get a book that explains everything. Without exception, they are all written by men... There is no answer to everything, and only a deluded male would spend his life trying to find it. The most deluded think they have actually found it. ... Brandishing the ‘big idea’ is a bookish version of male display, and as such a product of the same mind-set as that behind the manuscripts that litter my desk. To explain is in some sense to control. Proselytizing has always been a male preserve. ... I would hope that fewer women have so far featured in the big-ideas landscape because, by and large, they are more interested in understanding than explaining, in describing rather than accounting for. Giving credence to a big idea is a way of permitting ourselves to skirt strenuous engagement with the enigma that is our life.

(Germaine Greer, in Germaine Greer, Andrew Lycett and John Douglas, ‘The Week in Books: The Male Desire for Explanation; the Real Quantum of Solace; and Merchandising Fiction’, The Guardian, 1 November, 2008)


Still, as I say, I can recognise the appeal of enabling and empowering theoretical discourses to a certain extent. It’s a different aspect of the current situation of theory as it’s glossed by Critchley I’m particularly concerned with here.

Critchley – who is himself the author of The Ethics of Deconstruction and co-author of Deconstruction and Pragmatism – is careful to name no names as to which exhausted theoretical paradigms he has in mind. But given that a ‘certain discourse, let’s call it deconstructive’, Critchley suggests, is also explicitly placed in a relation of contrast to Badiou’s ‘very different’ creative, constructive philosophy, I wonder if deconstruction is not at least part of what he is referring to?  If so, then I have to say I find it difficult to recognise deconstruction, and the philosophy of Jacques Derrida especially (with which the term deconstruction is most closely associated, and which is very important for me), in any description that opposes it to that which is conceptually creative, enabling, explanatory and empowering. Derrida’s thought is all of these things – although in a different way to Badiou’s philosophical system, it’s true.  The interest of Derrida and deconstruction lies with systems – including what Badiou, in the same discussion with Critchley, refers to as ‘the classical field of philosophy’ – but also with what destabilizes, disrupts, escapes, exceeds, interrupts and undoes systems. And this would apply to Badiou’s own system of thought (‘and this is a system’, Critchley points out). This doesn’t mean deconstruction can be positioned as ‘melancholic’, though, and contrasted to construction and ‘reconstruction’, as Critchley and Badiou would have it.

For all his interest in radical politics, theatre, poetry, cinema, mathematics, psychoanalysis and the question of love, there’s an intriguing return to philosophy, and with it a certain disciplinarity, evident in Badiou’s work (as opposed to the interdisciplinarity associated with cultural studies, say, or the trans-disciplinarity of your CRC). Badiou refers to this as being very much a philosophical decision on his part:

And finally my philosophical decision – there is always something like a decision in philosophy, there is not always continuity: you have to decide something and my decision was very simple and very clear. It was that philosophy was possible. It’s a very simple sentence, but in the context it was something new. Philosophy is possible in the sense that we can do something which is in the classical tradition of philosophy and nevertheless in our contemporary experience. There is in my condition no contradiction between our world, our concrete experiences, an idea of radical politics for example, a new form of art, new experiences in love, and the new mathematics. There is no contradiction between our world and something in the philosophical field that is finally not in rupture but assumes a continuity with the philosophical tradition from Plato to today.

And we can take one further step, something like that. So we have not to begin by melancholic considerations about the state of affairs of philosophy: deconstruction, end of philosophy, end of metaphysics, and so on. This vision of the history of thinking is not mine.  And so I have proposed – in Being and Event in fact – a new constructive way for philosophical concepts and something like a reconstruction – against deconstruction – of the classical field of philosophy itself.

(Alain Badiou, ‘“Ours Is Not A Terrible Situation” - Alain Badiou and Simon Critchley at Labyrinth Books’, NY, March 6, 2006)


Yet what kind of decision is actually being taken here? What is it based on or grounded in? How philosophical is this decision by Badiou?  Couldn’t it be said that any decision to the effect that philosophy is possible, that a ‘reconstruction – against deconstruction – of the classical field of philosophy’ is possible, has to be taken by Badiou in advance of philosophy; and that his decision in favour of a ‘new constructive way for philosophical concepts’ therefore takes Badiou outside or beyond philosophy at precisely the moment he is claiming to have returned to or defended it? As such, doesn’t any such decision do violence not just to deconstruction but also to the classical tradition of philosophy?

These are questions that Derrida and deconstruction can help with. For Derrida’s philosophy is nothing if it is not a thinking of the impossible decision. As someone else associated with deconstruction, J. Hillis Miller, puts it:


Responsibility... must be, if it is to exist at all, always excessive, always impossible to discharge. Otherwise it will risk being the repetition of a program of understanding and action already in place… My responsibility in each reading is to decide and to act, but I must do so in a situation where the grounds of decision are impossible to know. As Kierkegaard somewhere says, ‘The moment of decision is madness’. The action, in this case, often takes the form of teaching or writing that cannot claim to ground itself on pre-existing knowledge or established tradition but is what Derrida calls ‘l’invention de l’autre’ [the invention of the other].

(J. Hillis Miller, in J. Hillis Miller and Manuel Asensi, Black Holes: J. Hillis Miller; or, Boustrophedonic Reading (Stanford, California: Stanford University Press, 1999) p.491)


From this perspective, what’s so helpful about Derrida’s thought is not that it disavows the possibility of taking a decision in favour of a reconstruction of the classical field of philosophy; it’s that Derrida enables us to understand how any such decision necessarily involves a moment of madness. This is important; because once we appreciate the decision is the invention of the other, of the other in us, we can endeavour to assume, or better, endure ‘in a passion’, rather than simply act out, the implications of this realisation for the way we teach, write and act, in an effort to make the impossible decisions that confront us – including those concerning philosophy - as responsibly as possible. 


The concept of the post-9/11 world

Why am I raising all this here, in response to your invitation to address ‘the inherent viability of the concept of the post-9/11 world’? I’m doing so because if Critchley is right and the current situation of theory is characterised by a thirst for constructive, explanatory and empowering theoretical discourses then, as I say, I can understand this. I can also appreciate that the concept of the ‘post-9/11 world’ may be of service in this context (including, perhaps, in terms of what Badiou refers to as the political name or poetic event). And, of course, it has already been adopted as a new means of historical periodisation by some. But as far as practicing a creative, affirmative media theory or philosophy is concerned, it seems to me that whether what you are referring to as the ‘over-arching’ concept of the post-9/11 world is ‘viable’ or not, in the sense in which my dictionary defines viable - as ‘being capable of functioning successfully, practicable’, as being ‘able to live in particular circumstances’ - is just such an impossible decision.

Thursday
Aug 5, 2010

Paper - the most radical technology of all?

Media Gifts, the book, concludes (if that is the right word to use about an open, distributed,  multi-platform, multi-locational, multiple identity book) with a project that has something of an odd status in relation to the others. This is a text on the performance artist Stelarc called ‘Para-site’.

Stelarc experiments with issues concerning the relation between the human, the body, and technology, including new information and communication technologies. So, for example, his Stomach Sculpture is an art work where an extending/retracting structure, designed to operate in the stomach cavity, was inserted into the body. Brainwaves, bloodflow and muscle signals were amplified and broadcast, and the inside of the lungs, stomach and colon filmed and screened - all of which served to highlight and place in question distinctions between the public and the private as the inside of the body was revealed to be at once both internal and external.

In an introduction we co-wrote to Stelarc: Mécaniques du Corps / Body Mechanics, a retrospective catalogue for his exhibition at the Centre des Arts, Enghien-les-Bains, France, in April 2009, Joanna Zylinska and I argued that Stelarc can best be understood in terms of his self-declared posture of indifference – his desire not to control the performance so much as let it unfold. (So, as is the case with these media gifts projects – although of course our work is very different: for one thing I’m nowhere near as physically brave as he is, especially when it comes to the prospect of pain – Stelarc wouldn’t conduct his artistic experiments with some fixed results or intended outcomes in mind. Rather he would want to remain open to the new and unexpected.)

But Stelarc’s ‘posture of indifference’ can also be read as being ‘in–difference’, as an opening of oneself to what is not in one (e.g. technology). In short, in-difference becomes an hospitality toward an infinite alterity (in the Levinasian/Derridean sense). It’s also a bodily passivity, a letting oneself be-together-with-difference, with-technology.

It’s worth emphasizing that in saying this, Zylinska and I were not referring to the pairing of two separate entities: the ‘human’ and ‘technology’. For us, human agency and human corporeality are always reliant upon, connected to, and becoming with, tekhnē.

As I put it in ‘Para-site’, what Stelarc repeatedly demonstrates is that technology is both fundamental to, and a disturbance of, our sense of the human. The body’s relation to technology is not therefore one of opposites, in which an original, natural, unified human self or identity comes into contact with an external, foreign, alien technology which it can use as a tool or bodily prosthesis, the skin acting as a boundary line to divide and separate the two. Technology is not simply external. ‘Technology is what defines being human’, for Stelarc. ‘It’s not an antagonistic alien sort of object, it’s part of our human nature. It constructs our human nature.’ This means that, in the words of Jacques Derrida, ‘there is no natural originary body’, since:

technology has not simply added itself, from the outside or after the fact, as a foreign body… this foreign or dangerous supplement is “originarily” at work and in place in the supposedly ideal interiority of the “body and soul”. It is indeed at the heart of the heart.

(Jacques Derrida, 'The Rhetoric of Drugs', Points...: Interviews, 1974-1994 (Stanford, California: Stanford University Press, 1995) pp.244-5)

What Stelarc performs with his investigations into how different developments in technology - robotics, the Internet, virtual reality systems, prosthetics, medical procedures - alter our conception of the human and of the human body, is the way in which technology escapes the control of its inventors to produce unseen and unforeseeable changes and possibilities; and thus a future - for the self, the human, for the body and for technology - which can be neither programmed nor predicted.

From this perspective Stelarc’s indifference appears as a far more responsive and responsible way of undertaking an art project. For if you perform in expectation, the actual performative aspect of the performance collapses. Art practice – and the practice of the writer, philosopher or theorist, I would argue - then becomes nothing more than an execution of a programme already decided and mapped out in advance; whereas, as Stelarc says, quoting Wittgenstein, ‘Thinking happens [on] the paper on which you write’. Whatever happens, in other words, only happens in and through the actual performance.

So, when it came to producing my own project on Stelarc, ‘Para-site’, what I did was weave short passages of my own text, together with some passages taken from Stelarc’s writings, to form a kind of non-linear work designed to enter into a prosthetic and parasitical relationship with its ‘host’ subject: Stelarc.  

The thinking behind this was that in order to understand Stelarc’s performances we, too, need to adopt a ‘posture of in-difference’, and experiment with inventing new techniques of analysis which are capable of responding to Stelarc’s performances with an answering in-difference. Like Stelarc, then, we too have to create an hospitable ‘space for an encounter with... what is radically different’, as Zylinska puts it, and rethink the nature, boundaries and performance of the written text accordingly.

Now, because a version had already appeared in ink-on-paper form, I hadn’t initially thought about including ‘Para-site’ in the media gifts series. It was only much later that I began to consider the idea. It is certainly tempting to think I might update this project in order to include it in the series: not only in terms of its content, by referring to Stelarc’s ‘Ear on Arm’, which he’s now partially realised, as well as a number of other works he’s completed since I wrote ‘Para-site’, but also in terms of its form, by using the kind of new media I’ve been talking about – internet TV, wikis, peer-to-peer file sharing.

In the end, however, I have decided not to do this. Partly because, while experimenting is closely aligned for me with most kinds of critical consciousness, I don’t want to suggest there’s something intrinsically radical about using new technology for such performative experiments. In fact, the impact or effectivity of such new media experimentation is often less than might be expected – precisely because a lot of these strategies around open access, free, libre content and free circulation have become ‘ordinary’ thanks to the sheer ubiquity of new technology.

Derrida made a similar point when explaining why he didn’t continue with the non-linear textual experiments he performed in books such as Glas and The Post Card after computers and software became relatively commonplace:

It was well before computers that I risked the most refractory texts in relation to the norms of linear writing. It would be easier for me now to do this work of dislocation or typographical invention – of graftings, insertions, cuttings, and pastings – but I’m not very interested in that any more from that point of view and in that form. That was theorised and that was done – then. The path was broken experimentally for these new typographies long ago, and today it has become ordinary. So we must invent other ‘disorders’, ones that are more discreet, less self-congratulatory and exhibitionist, and this time contemporary with the computer. What I was able to try to change in the matter of page formatting I did in the archaic age, if I can call it that, when I was still writing by hand or with the old typewriter. In 1979 I wrote The Post Card on an electric typewriter (even though I’m already talking a lot in it about computers and software), but Glas – whose unusual page format also appeared as a short treatise on the organ, sketching a history of organology up to the present – was written on a little mechanical Olivetti.

(Jacques Derrida, ‘The Word Processor’, Paper Machine (Stanford: Stanford University Press, 2005) pp.25-26)

Here, too, then, the responsible decision as to which media to use in these experiments and in what way must be taken in an undecidable terrain. We can’t decide in advance that new media, or IPTV, or internet piracy - or robotics, prosthetics or medical procedures, for that matter - are always and everywhere the political or artistic way to go. As Stelarc’s quote from Wittgenstein neatly illustrates, paper can also be used as a radical, experimental medium to great effect. In fact, such is the ubiquity of new media, that the really radical thing to do today might be precisely to use paper technology.

Which is why, while I do intend to add ‘Para-site’ to the published version of the media gifts series, I’ve decided to leave this particular ‘gift’ in its ink-on-paper form – as a way of emphasizing this.

Tuesday
Jun 29, 2010

The academic as public intellectual

Q1. 'I suppose the key question is how far do academics as public intellectuals remain central to serious discussion of the big challenges which face us?'

In the UK when we speak of the intellectual we tend to have in mind figures such as Jean-Paul Sartre or Edward Said, who devoted his 1993 Reith Lectures to the subject. However, the likes of Sartre and Said represent just one version or model of the intellectual: that of the political intellectual, someone who expands their role beyond their core area of expertise to both speak to a larger public and attempt to intervene somehow in the political life of society. There are other ways of acting as an intellectual besides this. For instance, the intellectual can also be understood as a socio-professional category: as applying to those whose professions involve them explicitly in engaging with knowledge, ideas, culture or learning. Academics, writers, journalists and artists such as Terry Eagleton, Zadie Smith, Gary Younge and Tracey Emin would all perhaps be intellectuals according to this definition. Or the intellectual can be understood more in cultural terms: as referring to those who engage in learned activities and practices but who have a certain standing in society or culture that provides them with opportunities for addressing a wider audience than is otherwise generally the case for members of their profession. The Nobel Prize-winning scientist James Watson – celebrated for his part in the discovery of the structure of DNA – would be an example of someone who fits the socio-professional category of the intellectual, and who subsequently tried to move into the cultural arena with his ill-advised comments on race.

The topic is further complicated by the way in which these different definitions of the intellectual often coincide and overlap. For instance, as the cases of Said and Watson illustrate, most of the above conceptions have their starting point in ideas of the intellectual as a socio-professional category. People must have occupations that involve them explicitly in engaging with knowledge, ideas, culture or learning before they can then be thought of as being political or cultural intellectuals. This is why, when it comes to the issue of celebrity influence, although Jamie Oliver is able to use his expertise and celebrity as a TV chef to tackle wider social issues, including the standard of food served to children in schools, he’s not perceived as being an intellectual. Similarly, the musician Chris Martin of Coldplay, with his support for Fairtrade, or the Hollywood actor Ashton Kutcher, with his use of Twitter to promote World Malaria Day, may have involved themselves with wider social and political issues. But they don’t have careers that involve them in engaging with knowledge and ‘learned ideas’. So they may be socially or politically engaged celebrities, but they’re not intellectuals.


Q2. 'Did the intellectual have more influence in the past?'

When people suggest that intellectuals are not as central to the big questions that face us as they were, that they perhaps don’t have as much influence as they did in the past, and have been replaced to a large extent by think-tanks, spin doctors, pundits, even celebrities, what they usually have in mind, as I say, is the kind of figure represented by Sartre and Said - or Bertrand Russell, with his campaigning for nuclear disarmament in the 1950s and 1960s. And because for various reasons those kinds of intellectuals are not so visible in the UK mainstream media at the moment – although they’re still around, especially in other parts of the world (Noam Chomsky would be an obvious example) – it’s easy for people to announce, as from time to time they do, that the intellectual is dead, or at best something of an endangered species. But we do still have intellectuals, and influential ones, at that. Even in the UK. Even in academia. It’s just that they’re not necessarily political intellectuals in the Sartre/Said/Chomsky mould. The intellectuals we have in Britain tend to be of the more ‘sociological’ or ‘cultural’ kind.

One explanation for this that has been put forward concerns the way in which, in England at least, intellectuals, historically, have been more closely associated with the ruling elite - gentlemen’s clubs and Oxbridge colleges, as opposed to the cafes and factory shop-floors of the continent – and with the tradition of the gentleman as amateur scholar. Suspicious of abstract ideas – as characterised by the emphasis in France on the universal values of liberty, equality and fraternity – intellectuals here have tended to focus instead quite nostalgically on the people, places, architecture and landscape of England. Think Roger Scruton’s England: An Elegy.

All of which perhaps explains why so many sociological and cultural intellectuals in England today are quite liberal, humanist, middle-brow and, for want of a better word, ‘journalistic’.


Q3. 'Have academics as intellectuals been marginalized by trends such as an increasing distrust of authority and expertise (which means that we hear as much about global warming and the crisis in the Middle East from celebrities as from specialists in international relations or climate science)?'

Actually, it goes further than that: I’m not sure how many such sociological or cultural intellectuals would explicitly claim to be intellectuals at all. This is both because of the myth that England, historically, doesn’t really have intellectuals (that’s primarily a political, abstract, European if not indeed French phenomenon); and because of the pejorative connotations of the term. In England, the intellectual is often viewed quite negatively and in a hostile fashion: as someone who is arrogant, pretentious and full of self-importance; someone who tries to give off an air of superiority by using difficult and overly complex language and ideas. Witness the manner in which the writing of the French philosopher Jacques Derrida – who would not have identified himself as an intellectual in any simple sense - is frequently condemned as being ‘headache-inducingly difficult’. Paradoxically, to be viewed approvingly as being intellectual in England today, it’s better not to be too intellectual at all. So Alain de Botton is generally accepted in England as a more or less intellectual figure, as he can write clearly on ideas and culture and communicate with a wider public, even attain the Holy Grail of a ‘popular readership’; while Jacques Derrida and Judith Butler are often not so well thought of, as their philosophy is regarded as being far too complicated and abstract. (After all, ‘who has ever truly got to the bottom of ... Jacques Derrida’s work?’) This idea that English culture is somehow not very intellectual, even non-intellectual, and that this is by and large a good thing, may also be one reason why we’d apparently rather hear from celebrities on climate change than intellectuals or academic experts. (As articles in the Times Higher Education have made clear, government is not particularly keen on taking advice directly from academic specialists either. This is because what the latter have to say is frequently multivalenced and inconclusive, ‘better at identifying problems... than at offering solutions’. Policymakers much prefer to be advised by think-tanks who can provide ‘hard-evidence... in three bullet points’.)

The above mention of Alain de Botton brings to mind yet another concept of the intellectual, one I haven’t referred to explicitly yet, and which has actually only become prevalent in Britain and the US in the last 15 years or so (I think this time-frame is significant). This is the 'public intellectual' you began with, the very name of which seems to me to be indicative of a certain anxiety over the ability of other, supposedly 'non-public intellectuals' to communicate with the 'outside world'. After all, what would a non-public intellectual be, given that in most definitions of the term the intellectual has to communicate with the public in some way in order to be an intellectual? Anyway, the public intellectual is supposed to do things like write accessible pieces for newspapers and magazines and appear on radio and the TV. By becoming a symbolic entrepreneur in this way, she or he is lauded for having escaped the narrow and limiting confines of their particular institution: be it the university, laboratory, art gallery, NGO or policy institute. Examples would include not just de Botton, but Shami Chakrabarti, Simon Schama and David Starkey.


Q4. 'Are there pressures within the academy which discourage 'reaching out' or wide-ranging generalists who are willing to speak well beyond their core areas of expertise?'

There are certainly such pressures within the academy, yes. For many people those pressures associated with the RAE would no doubt be the first that come to mind in this respect. But there is also a certain amount of pressure being placed on academics these days to act as public intellectuals and to communicate their research and ideas to a wider public outside the institutional context in which they work. The argument here is that, because taxpayers fund their research, academics have an obligation to make their work accessible to the public: in the form of communicating with journalists, or by appearing in the media, or even by writing blogs. However, despite having said that it could apply to what I’m doing here, I’m not entirely comfortable with this role of the academic as public intellectual. For one thing, it seems to me to risk going along too closely - and somewhat uncritically (especially in view of the present financial meltdown and general loss of faith in the idea of unbalanced economic growth) - with the current government and research council emphasis on valuing academic research in terms of its potential ‘economic and social impact’ and ability to be useful to business, industry and the public at large.

For another, it often results in a demand being placed upon academics to be inclusive, and to avoid difficult philosophical or theoretical ‘jargon’, in order to communicate better with non-experts and so-called ‘ordinary people’. But don’t academics sometimes need to be difficult, challenging, inaccessible, boring, unproductive, inefficient, uneconomic, non-user friendly? Isn’t that a crucial part of our role, too – both as academics and as intellectuals, public or otherwise? When so much of the rest of contemporary culture places such a high premium on being popular, inclusive, accessible and instrumentally useful, isn’t it important that there are at least some places and spaces for exploring ideas that are difficult, challenging and time-consuming to understand, and which are not always justifiable in strictly economic terms?


Q5. 'Or is it just that, as one door to influence closes, another opens somewhere else, using different media and communications tools?'

I think that in this case there may be something in that, at this particular historical moment at least. For instance, where I work, at Coventry School of Art and Design, we’ve been exploring the idea of using some of the different means of media and communication that are now available to academics at relatively low cost – open access online publishing, peer-to-peer file sharing networks and so on – to disseminate what, for shorthand, might be referred to as ‘intellectual’ ideas. Let me provide you with just one brief example, using material cut and pasted from an article I published recently. This concerns experiments that Pete Woodbridge and I, together with a colleague from the University of Kent, Clare Birchall, have been making with IPTV. IPTV stands for Internet Protocol TeleVision. In its broadest sense, IPTV is the term for all those techniques which use computer networks to deliver audio-visual programming. So YouTube can be thought of as an emerging grass-roots IPTV system, especially as its audience increasingly uses it to distribute audio-visual content that they have created, rather than sharing their favourite video clips from films and TV programmes that have been produced by others.

Many people see IPTV as having the potential to do for the moving image what the web is currently doing for print.  However, the reason we’re experimenting with IPTV is because it seems to us that the UK at the moment contains surprisingly few spaces, other than the university, that are open to intellectual academic work. As I’ve said, the mainstream media are predominantly liberal, humanist, middle-brow and journalistic in approach, their discussions of art, science, literature and philosophy being primarily opinion-based and focused on biographical details. (I’m still waiting for an edition of Newsnight Review to feature a text by Michael Hardt and Antonio Negri, let alone Giorgio Agamben, Franco ‘Bifo’ Berardi or Roberto Esposito.) Meanwhile, many publishers are barely producing books for third year undergraduate students, let alone research monographs aimed at other scholars. There thus seems to be a need to invent new ways of communicating intellectual academic ideas and research both ‘inside’ and ‘outside’ of the university. We want to explore IPTV’s potential for this, and for doing so relatively easily and cheaply: not so much because we believe academics should try to find means of connecting with audiences outside the institution, audiences that scholarly books and journals cannot, or can no longer, reach. We’re not interested in being public intellectuals or some kind of new media personalities. (It’d be hard to find more camera-shy people than us. Which is why we are now experimenting with a less individualistic, presenter-focused way of making these programmes.) Rather, the reason we want to experiment with IPTV is because, as J. Macgregor Wise put it in his contribution to New Cultural Studies, a book Clare and I published a couple of years ago now, different forms of communication ‘do different things’ and ‘have the potential for different effectivities’ - even for leading us to conceive what we do as academics differently.

(This is a revised version of a text originally written in response to questions from  Matthew Reisz of the Times Higher Education on the topic of the (changing) role of the academic as public intellectual. Reisz’s subsequent article was published as Matthew Reisz, ‘Listen and Learn’, Times Higher Education,  28 May-3 June, 2009.)

Friday
Jun 25, 2010

The neurological turn and the ambient scholar

Book publication assumes and creates a certain kind of reader--a reader who will be attentive,  patient, and care enough about the topic to read the book, if not all the way through, then at least a substantial portion of it. Web reading, on the other hand, assumes and creates a very different kind of reader--a reader who will skim material, skip from one text to another, and supplement any text with hyperlinks, lateral references, etc.  It also assumes and creates a reader in a more or less constant state of distraction, one who is constantly leaving a text to check email, surf the Web, chat online, etc. 

... There is a mounting body of evidence to suggest that different media wire the brain in different ways... The neurological re-wiring takes place quickest when small repetitive tasks are repeated over and over, reinforcing synaptic pathways and encouraging the associated neural nets to grow--as, for example, clicking a mouse, scanning a web page, etc.

... In my capacity as a series editor, I read a lot of manuscripts, and among the younger hipper scholars, I see a clear tendency to move toward a Web kind of writing, even if the final product is meant to be a print book--texts that lend themselves to skimming, that have much shorter blocks of prose and argumentation, that can be perused in an hour or so and put down without feeling that you have missed too much.  

(N. Katherine Hayles, ‘post for convergence; print to pixels’, posting to the empyre forum, 9 June, 2010)

 

As Katherine Hayles points out, questions of the ‘neurological turn’ are important because ‘they bear directly on what pedagogical strategies will be effective with young people’. But I wonder if there isn’t more at stake even than this. As well as a certain kind of reader and a certain mode of reading, wouldn't such a turn - if we accept the notion - create a certain kind of scholar, too, an idea Hayles seems to be pointing us towards with her comments on the manuscripts she receives for the Electronic Mediations series from ‘younger hipper scholars’? And if this is indeed the direction things are headed, what are the implications of such a neurological turn for the authority of the scholar?

Given that knowledge and research are increasingly being externalized onto vast, complex, multilayered, distributed networks of computers, databases, journals, blogs, microblogs, wikis, RSS feeds, image- and video-sharing and other social networks – of which empyre and this post are both a part – to what extent are the ‘post-neurological turn’ scholars who emerge out of or after the current generation of 18-24 year old students still going to be expected to know the field? Will the scholars created within this scenario continue to endeavour to internalize a particular – and what was once perceived as a potentially knowable – branch of knowledge by means of extensive (and intensive) learning, training, reading and study? That’s what would make them scholars, after all. Then again, how can they do so, if they have difficulty integrating even the books they do read into their long term memory because they are ‘in a more or less constant state of distraction, ... constantly leaving a text to check email, surf the Web, chat online’?

Or, since there already seems to be more to read nowadays than ever and less and less time in which to read it (for many of those who work and study in the contemporary university too), will the scholars who are created in this way increasingly give up on this idea of knowing their field deeply and passionately, and having a comprehensive overview of it? Is it likely that they will come to concentrate instead more on developing their specialist search, retrieval and assemblage skills, confident in the belief that, if they need to know something, then they can find it quickly and easily using Google, Wikipedia, Facebook, and a host of Open Access, Open Education, Open Data resources?

In which case, is there a risk that a large part of their authority is going to pass to the administrator, manager or technician? Will scholars themselves increasingly come to resemble such figures? Someone who does not necessarily need to know the knowledge contained in the systems they administer and have access to. Someone who depends for their authority more on an expert ability to search, find, scan, access and even buy knowledge using online journal archives, full text search capabilities, electronic table of contents alerting, citation tracking, Zotero, Mendeley, Scribd and so on, and then organize these fragments into patterns, flows and assemblages.

Or will the kind of phenomenon that's being discussed in terms of the neurological turn lead to the emergence of what could be thought of as a rather different form of scholarship? One where scholars won’t get the bulk of their information in concentrated immersive doses, as they might have in the past from sitting down and carefully reading a book or even a journal article, but will instead experience more fragmented and distributed flows of smaller bits of information, which nevertheless enable a certain body of knowledge to be built up over a longer period of time in a more ambient fashion? What might be called ambient scholarship. (I recall some people describing the experience of being on Twitter in the early days of its existence in terms of ambient awareness, for example.)

Now, while I’m intrigued by all these trains of thought set in motion by the idea of the neurological turn put forward by Hayles and others, I must admit to having no strong attachment to either of the two main ways of responding to this ‘crisis’, as Hayles calls it, that have been proposed so far: that which suggests we need to learn more about such hypertextual scanning if we want to teach our students more effectively; or that which proposes we view the maintenance of the traditional aesthetic values associated with reading books and literary texts as acquiring something of a radical aspect in this context (even in its very reactionary-ness). Nor does it seem to me to be a matter merely of learning how to use both modes of reading and analysis and of switching between the two as appropriate, which is what Maryanne Wolf proposes at the end of her book on the subject, Proust and the Squid: The Story and Science of the Reading Brain.

What really interests me most about this discussion is not the potential this crisis has to encourage those of us who teach to adopt new pedagogical strategies so we can educate our students more effectively. I’m interested in the potential it has to generate what might be called an unteachable moment - in the sense of a crisis in the teaching situation itself in which the very authority of the educator is placed in question.

It’s here that we come to the question of politics. For this moment of crisis, chaos, perplexity and undecidability is precisely the moment of politics. As I've said before, we can never know for sure whether the legislator – the founder of a new law or institution such as a university – is legitimate or a charlatan, because of the aporia that lies at the heart of authority, whereby the legislator already has to possess the authority the founding of the new institution is supposed to provide him or her with in order to be able to found it. I make this point, not because I think revealing this state of affairs will somehow bring the institution to its knees. It's more to show that the impossibility of any such foundation is also constitutive for an institution such as the university, and so highlight the chance this situation presents to rework the manner in which the university 'lives on', as Derrida might have put it. Writing precisely about the moment of politics, Derrida says this:

once it is granted that violence is in fact irreducible, it becomes necessary - and this is the moment of politics - to have rules, conventions and stabilizations of power... since convention, institutions and consensus are stabilizations... they are stabilizations of something essentially unstable and chaotic... Now, this chaos and instability, which is fundamental, founding and irreducible, is at once naturally the worst against which we struggle with laws, rules, conventions, politics and provisional hegemony, but at the same time it is a chance, a chance to change, to destabilize. If there were continual stability, there would be no need for politics, and it is to the extent that stability is not natural, essential or substantial, that politics exists and ethics is possible.

(Jacques Derrida, 'Remarks on Deconstruction and Pragmatism' in Chantal Mouffe (ed.), Deconstruction and Pragmatism (London: Routledge, 1996) pp. 77-88)

I'd see the politics of online open access publishing in much the same terms. This is why I’ve argued that there is nothing that is inherently emancipatory, oppositional, Leftist, radical, resistant, or even politically or culturally progressive about open access, any more than there is about what is called digital piracy. The politics of open access, like those of digital piracy, depend on the decisions that are made in relation to it, the specific tactics and strategies that are adopted, and the particular conjunction of time, situation and context in which such practices, actions and activities take place.

So open access publishing is not necessarily a mode of political resistance. But what does interest me about the transition to the open access publication and dissemination of scholarly research that is occurring at the moment, is the way it is creating at least some ‘openings’ to take this kind of chance to destabilize, change, and think the university (and publishing and thinking) differently - in a way that doesn’t offer simply a lifeline to the University as it currently exists, or advocate a return to tradition and the past. What’s more, the transition to open access is doing so in a fashion that, to my mind anyway, a lot of modes of resistance which operate according to a logic of either/or are not.

The discussion of publishing that took place on the empyre forum in June 2010 began with a quote from Deleuze and Guattari’s A Thousand Plateaus: ‘How can the book find an adequate outside with which to assemble in heterogeneity, rather than a world to reproduce?’. Let me end here with a different quote taken from the same book. Just the next page, in fact. For, to be sure, the different or alternate kind of economy I'm performatively looking toward - with open access, and with these media gifts - is based more on  openness, hospitality and responsibility, and less on individualism, possession, acquisition, competition, celebrity, and ideas of knowledge as something to be owned, commodified, communicated, disseminated and exchanged as the property of individuals. Still, it is unlikely to be an either/or thing: either market capitalist, communist or gift economy. It’ll likely be more multiple, hybrid, operating according to the ‘logic of the AND’:

‘“and... and... and...” This conjunction carries enough force to shake and uproot the verb “to be”. Where are you going? Where are you coming from? What are you heading for? These are totally useless questions... move between things, establish a logic of the AND, overthrow ontology, do away with foundations, nullify endings and beginnings... Between things does not designate a localizable relation going from one thing to the other and back again, but a perpendicular direction, a transversal movement that sweeps one and the other away.'

(Deleuze and Guattari, A Thousand Plateaus: Capitalism and Schizophrenia (London: Athlone, 1988) p.25)

 

(This is a revised version of a text first posted on the empyre forum, 17 June, 2010, as part of a discussion of open access publishing in relation to Publishing in Convergence: http://www.subtle.net/empyre)
