Recent-ish publications

Review of 'Bitstreams: The Future of Digital Literary Heritage' by Matthew Kirschenbaum

Contribution to 'Archipiélago Crítico. ¡Formado está! ¡Naveguémoslo!' (invited talk: in Spanish translation with English subtitles)

'Defund Culture' (journal article)

How to Practise the Culture-led Re-Commoning of Cities (printable poster), Partisan Social Club, adjusted by Gary Hall

'Pluriversal Socialism - The Very Idea' (journal article)

'Writing Against Elitism with A Stubborn Fury' (podcast)

'The Uberfication of the University - with Gary Hall' (podcast)

'"La modernidad fue un "blip" en el sistema": sobre teorías y disrupciones con Gary Hall' ['"Modernity was a "blip" in the system": on theories and disruptions with Gary Hall'] (press interview in Colombia)

'Combinatorial Books - Gathering Flowers', with Janneke Adema and Gabriela Méndez Cota - Part 1; Part 2; Part 3 (blog post)

Open Access

Most of Gary's work is freely available to read and download, either here in Media Gifts, in Coventry University's online repository PURE, or in Humanities Commons.

Radical Open Access

Radical Open Access Virtual Book Stand

'"Communists of Knowledge"? A case for the implementation of "radical open access" in the humanities and social sciences' (an MA dissertation about the ROAC by Ellie Masterman). 

Community-led Open Publication Infrastructures for Monographs (COPIM) project

Thursday
Jul 21, 2011

On the unbound (nature of this) book (version 3.0)

(The following series of posts has been written as version 3.0 of a contribution to Mark Amerika's remixthebook project.

Version 1.0 of this material was first presented at The Unbound Book conference, held at Amsterdam Central Library and the Royal Library in Den Haag, May 19-21, 2011.

Version 2.0 of this material is due to appear as ‘Force of Binding: On Liquid, Living Books (Mark Amerika Mix)’ on remixthebook.com, the companion website to Amerika's remixthebook volume. remixthebook by Mark Amerika will be published by University of Minnesota Press in September, 2011.)

 

What is the unbound book? Can the book be unbound?

•    

Is remixthebook, with its literary, philosophical, theoretical, artistic and poetic mash-ups and accompanying website where visual artists, theorists, new media scholars, philosophers and musicians sample source material, ‘postproducing it into their own remix/theory performances’, a book unbound?

The Oxford Online Dictionary defines the term ‘bound’ as follows:

‘bound in bind … tie or fasten (something) tightly together…; walk or run with leaping strides…; …a territorial limit; a boundary…; …going or ready to go towards a specified place…; …past and past participle of bind…’

In which case the unbound book would be one that:

•    had been gathered together and firmly secured, as a pile of pages can be to form a print-on-paper codex volume;

•    had a certain destiny or destination, or had been prepared, going, or ready to go toward a specific place (as in ‘homeward bound’), such as perhaps an intended addressee, known reader or identifiable and controllable audience;

•    and had been springing forward or progressing toward that place or destiny in leaps and bounds.


Had because the use of the past participle suggests such binding is history as far as the book is concerned. Today, in the era of online authorship, comment sections, discussion forums, social tags, RSS feeds, YouTube clips, streaming video, augmented reality, 3D graphics, interactive information visualisations, geolocation search capabilities, crowdsourcing, remixes, mash-ups, and texts being generally connected to a network of other information, data and mobile media environments, the book is being disrupted, dislocated, dispersed. So much so that if the book is to have any future at all in the context of these other supports and modes of reading and writing, it will be in unbound form; a form which, while radically transforming the book, may yet serve to save it and keep it alive.



Monday
May 30, 2011

Towards a new political economy: Open Humanities Press and the open access monograph

(The following text was presented at OAPEN 2011: The First OAPEN (Open Access Publishing in European Networks) Conference, Humboldt University Berlin, Germany, February 24 – 25. Results of the conference are available here. A version of this text complete with slides is available here.)


My invitation stressed I should ‘focus on practical… ideas on open access that can be realized, not on theoretical thinking’. That’s not too easy for me, as I’m a theorist by profession, albeit one involved in a number of what some people would call ‘practical’ projects. But I’m going to try my best for you.

I thought I’d begin with one very practical idea that is being realised: that represented by Open Humanities Press (OHP), an international open access publishing collective in critical and cultural theory, established by Sigi Jöttkandt, David Ottina, Paul Ashton and myself.

As we know, open access in the humanities continues to be dogged by the perception that online publication is somehow less credible than print, and lacks rigorous standards of quality control. This often leads to both open access journals and book presses being regarded as less trustworthy and desirable places to publish; and as too professionally risky, for early career scholars especially. It’s precisely this perception of open access that Open Humanities Press has been set up to counter.

OHP was launched in May 2008 by an international group of scholars, librarians and publishers, very much in response to the ‘vicious circle’, as Robert Darnton calls it, whereby:

the escalation in the price of periodicals forces libraries to cut back on their purchase of monographs; the drop in the demand for monographs makes university presses reduce their publication of them; and the difficulty in getting them published creates barriers to careers.

In the first instance OHP consisted of a collective of already-existing open access journals in philosophy, cultural studies, literary criticism and political theory: Cosmos and History, Culture Machine, Fast CapitalismFibreculture, Film-Philosophy, Filozofski Vestnik, Image and Narrative, International Journal of Žižek Studies, Parrhesia, Postcolonial Text and Vectors. While these journals are of high quality, many had a problem generating a high level of prestige: because they’re online journals rather than print; and because - although at least two are over 10 years old now - most are relatively new, and as Peter Suber points out, ‘new journals can be excellent from birth, but even the best cannot be prestigious from birth’. The idea of OHP was to bring these journals together under a single umbrella, and raise their profile and level of prestige in the eyes of academics and administrators by way of a meta-refereeing process. To this end OHP has an Editorial Board that includes Alain Badiou, Steven Greenblatt, Bruno Latour and Gayatri Spivak, and an Editorial Oversight Group consisting of a rotating body of 13 scholars drawn from the Editorial Board, which we use to assess our titles according to a set of policies relating to publication standards, technical standards and intellectual fit with OHP’s mission.  As Sigi Jöttkandt stresses, the press operates as an unpaid collective, ‘where editors support one another and share knowledge and skills, very much like an open source software community. And, in fact, one of the things that makes a peer publishing initiative like OHP possible is precisely open source software, such as the Public Knowledge Project’s suite of open source publishing tools.’

The plan when we started was to spend the first few years establishing a reputation for OHP with its journals, before proceeding to tackle the more difficult problem of publishing book-length material open access. Things have developed much faster than we anticipated, however. As soon as OHP launched, a lot of people got in touch asking us when we were going to publish books open access. So in 2009 we established an OHP monograph project, run in collaboration with the University of Michigan Library’s Scholarly Publishing Office, UC-Irvine, UCLA Library, and the Public Knowledge Project headed by John Willinsky at Stanford University. The idea is to move forward both open access publishing in the humanities and the open access publishing of monographs. And we’ve launched our monograph project with five high-profile book series:

•    New Metaphysics – eds Bruno Latour and Graham Harman
•    Unidentified Theoretical Objects – ed. Wlad Godzich
•    Critical Climate Change – eds Tom Cohen and Claire Colebrook
•    Global Conversations – ed. Ngugi wa Thiong'o
•    Liquid Books – eds Gary Hall and Clare Birchall

The way the monograph project works is like this: scholars come together ‘around areas of interest through a book series and perform the editorial oversight, manuscript selection and development for that series.’ The resulting books are then run through the University of Michigan's Scholarly Publishing Office’s suite of services, and made freely available full text open access online as HTML, with nearly all of them available as PDF too. We’re also offering POD and eventually EPUB books. SPO is subsidizing the production and distribution costs and providing its services in kind, in keeping with its mission to provide an array of sustainable publishing solutions to the scholarly community. We’re looking to use print sales to cover (primarily SPO’s) production costs, pay author royalties and subsidize the costs of other OHP titles. It is this partnership with SPO that enables us to afford to publish open access books, without author-pays/publishing fees or external funding, and to maintain high production standards in the process. ‘SPO has infrastructure, scale, and experience; OHP draws together self-organizing editorial teams of senior scholars in various fields of the humanities to provide the editorial functions and peer-review.’

Is this going to enable us to develop an economic model for the long-term open access publication of research in the humanities? To be honest, we don’t know. OHP is not unusual in that respect, however. As Maria Bonn wrote in 2010:

Even those most active in the OA monograph efforts… must concede that our arguments at present are informed mostly by speculation or ideology. Experimentation in open book publishing has been very limited and is still so new as to have generated few results that can be replicated or refuted.

I raise this point, not as a criticism of any such efforts. If we’re going to address the issue of long term economic sustainability, then as Bonn emphasizes, it’s 'important to learn from the different monograph experiments that are taking place and to embark upon more of them’. Nor do I think the inability of any one such experiment, as yet, to definitively resolve this issue, means doing so is ultimately an impossible task. I don’t think there’ll be a magic-bullet, one-size-fits-all answer anyway. However, I do wonder if we haven’t been looking for some of our answers in the wrong place.

So far most of our attention has been on those willing and able to experiment with different economic models of publishing open access monographs - as if we’re all hoping a press somewhere can come up with a solution to the problems of academic publishing that will protect the rest of the scholarly community from the need to change how it functions. Yet I wonder if, in the long run, it isn’t going to require more of a community effort than that, one that will involve the way researchers, authors, libraries, institutions and funding agencies operate, too?

For example, the new, alternative publishing model OHP is pioneering is one where there’s no profit for anyone, since as a scholar-led publisher, our main source of funding comes indirectly via institutions paying our salaries; and, as I say, we’re using the proceeds from POD sales to cover production costs and subsidize the production of other OHP titles.  (So we’re selling POD books, but not charging for the service of publishing books open access.)

Despite this, what we’re experiencing is that some authors – not all, but a small number – still insist on viewing us as more or less a ‘classical’ press, only one run by volunteer scholars working to serve poor humanities academics by publishing their work open access. For these authors, the traditional author/publisher relation appears to be still very much in place. They’re attracted to all the advantages of the new publishing model that’s offered by OHP: such as a relatively short turn-around time between submission of their final manuscript and its being made freely available online; and the fact OHP is able to make decisions about what to publish less on the basis of a text’s potential value as a commodity, and more on the basis of its quality as a piece of scholarship. It also means we can publish books which, in the current economic climate, classical print-on-paper-only publishers might regard as being too difficult, advanced, specialized, radical or avant-garde to take on – because they wouldn’t be able to make a profit or even cover their costs on them.

But these authors also want to continue to have a quite traditional relationship with us as their publisher, and they keep trying to treat us accordingly: arguing, to take just one example – and this is just one example - that more of the proceeds from POD sales should go to them in the form of royalties, and less to us/the community to subsidize the publication of other titles open access.

In a way we perhaps shouldn’t be too surprised by this. The desire for us to operate as an ‘old school’ publisher partly arises out of a lingering fear of the taint of vanity publishing; partly it results from the fact that a conventional publishing relationship with a conventional publisher is the only such relationship most authors have experience of.

Still, we also need to take some responsibility for this double-think ourselves. For isn’t this how most of us in the open access movement make our case? Don’t we encourage colleagues to get involved by reassuring them that open access offers most, if not all, of what the classical print-on-paper-only publishing model offers - only with all the added benefits ‘giving away’ their work for free online can bring? (It’s cheaper, faster, brings greater readership, increases citations, and so on.)

This is certainly how we often present OHP. Hence, as Bonn identifies, while OHP may be innovative in its distribution of labour, it’s ‘quite traditional in its review process, in part to address academic humanities concerns about OA publishing and quality’.  

Now, one can understand why this strategy has been adopted within the open access movement. And let’s be honest, we’re probably going to have to continue with it for some time yet if open access is to keep on growing. However, with the question of economic sustainability in mind, won’t we have to revise this strategy at some point, and open ourselves to the possibility that, if we do want to ‘find a financial model which is appropriate to scholarly humanities monographs’, as the OAPEN website puts it, we can’t necessarily expect the rest of our publishing model to remain largely the same as in the toll-access, print-on-paper-only world?

In saying this, I want to emphasize that I’m not referring to the quality of our production, editing, peer review, design, marketing and promotion. If we decide to, we can maintain classical professional standards in all these respects since, as my OHP colleague David Ottina has pointed out (in personal email correspondence), ‘many of the tasks associated with presses… are rooted in workflows that arose from the materiality of the press itself. Now that every academic has all of the tools for each of these tasks sitting on their desk, those workflows have become vestigial.’ 

Still, one thing we may have to consider, if the open access publication of humanities monographs is going to expand and be economically sustainable over the longer term, is changing the relationship between presses and the rest of the academic community. I’m not sure the bulk of the responsibility for achieving such sustainability can be handed over primarily to those presses that are willing to experiment with different economic models for publishing monographs. As well as increasing the number of presses that are exploring ways of making it possible for book authors in the humanities to publish open access, might we not also need to experiment with developing a new kind of academic culture and economy? An economy based less on competition, possession, academic celebrity, and ideas of knowledge as something to be owned, commodified and exchanged as the property of individuals, and more on openness, generosity and hospitality. One where authors, librarians and publishers are all seen as being part of the same community, working together to produce and share knowledge and research:

•    with libraries providing sustainable publishing solutions to the scholarly community, or at least their own university’s staff. (Even just getting together to agree to catalogue open access monographs and purchase the POD versions would be a start);
•    authors waiving more of their royalties to subsidize the not-for-profit publication of other open access titles;
•    academics, rather than providing free labour for toll access journals and publishers who don’t allow authors to self-archive copies of their work online, or who charge high annual subscription charges, using this time instead to become actively involved in the process of selecting, developing, editing and publishing open access monographs;
•    and institutions supporting their researchers to publish open access – not just by subsidizing the cost of doing so, but by not disadvantaging authors who publish open access books when it comes to hiring and promotion and so on.

The problem is, of course, as anyone who has any experience of initiating online projects quickly learns, it’s not enough to operate on an ‘if we build it they will come’ basis. One has to either create such a community, perhaps through promotion and advertising, or make use of an already existing community.

At one end of the spectrum, some of those involved with OHP have suggested our contracts should feature a tick box, where authors can explicitly state they would like their royalties to be used to support the publication of other OHP monographs. At the other end, it has been suggested OHP write a manifesto, making it clear we’re in the process of developing a new model of scholarly publishing, consisting of a cooperative community of publishers, authors, scholars and librarians all working together to share knowledge and research, and asking authors to work with us on this basis.

Yet is there a community for the new kind of academic culture and economy we’re pushing toward here that can be either created or tapped into – especially given that a large part of what currently seems to attract authors to open access is the fact that the rest of the conventional publishing model and relationship does indeed remain in place?  Would a project such as OHP not have to act to try to performatively transform and so create the very 'culture'  and ‘community’ in which such a project could - at some point in the future perhaps - be eventually understood and participated in? And do so without any certainty or assurance that this would happen? That’s the kind of practical problem OHP is currently exploring.

In the end, what we can see is that the long-term sustainability of a project such as OHP perhaps depends on a community that does not exist – at least not yet. Rather than being spoken to, represented or addressed, it is a community that has to be created or invented. What we might think of, not so much in terms of Giorgio Agamben’s ‘coming community’, or what, following Jacques Derrida, we might call the community to come, but what I would term the missing community.

Monday
May 2, 2011

The open scholarship full disclosure initiative: a subversive proposal

In 1994 the cognitive scientist Stevan Harnad made a self-professed ‘subversive proposal’.  He suggested that those authors who did not want to sell their writing for profit – a category Harnad saw most scientists and scholars falling into - should make copies of their work freely available in globally accessible online archives. Doing so would enable those authors to both publish their research and make it available to be read all over the world by its intended audience of fellow scientists and scholars. It would also remove one of the chief barriers otherwise erected between those authors and their prospective readers: namely the price-tag that had been placed on their writing in the era of ink-on-paper publication to cover the costs of its reproduction. Some sense of the impact of Harnad’s proposal can be gained from the fact that, although Peter Suber is able to begin his ‘Timeline of the Open Access Movement’ as early as 1966, it’s Harnad’s ‘subversive’ intervention from 1994 that is identified as the occasion when self-archiving was first proposed.  

From there the idea eventually developed into what is today known as Green Open Access. This is where authors do make their research – which may or may not have already been published elsewhere in a journal or with a publisher of the author’s own choosing – available online for free to anyone with access to the Internet, simply by self-archiving digital copies of it in central, subject or institutionally-based online repositories, such as arXiv or PubMed Central. Indeed, such is the general acceptance of Harnad’s subversive proposal and the Green Road to open access that on March 11, 2009 US President Barack Obama signed into law a bill making permanent the National Institutes of Health Public Access Policy. This mandates that any research funded by the NIH be deposited in PubMed Central within a year of its publication.

Toward the end of this piece I’m going to make a proposal of my own. It’s intended as a modest supplement to that of Harnad, yet I believe it has the potential to be even more subversive. Among other things, it has radical implications for the very system that’s used to provide quality control when it comes to publishing, not just in open access repositories and online journals (the latter being Gold Open Access as opposed to the Green of self-archiving), but in paper journals too. I’m referring to peer review and editing, particularly by established journals of known quality. However, before I make this second subversive proposal – which I’m provisionally calling the ‘Open Scholarship Full Disclosure Initiative’ – I want to say something about where the motivation for it comes from. While it’s partly inspired by Harnad, it’s influenced more directly by two recent articles: a piece of journalism by Ben Goldacre on the relationship between funding source, impact factor and journal prestige in medical research; and an academic essay on cultural studies and the politics of journal publishing by Ted Striphas.

Goldacre is a medical doctor who writes the Bad Science column in the UK newspaper The Guardian. On February 14, 2009 he published an item titled ‘Funding and Findings: The Impact Factor’. In it Goldacre discusses a study in the British Medical Journal he describes as being ‘quietly one of the most subversive pieces of research ever printed’.  I think he may be right. The research in question, by Tom Jefferson et al., examined every study of the influenza vaccine. Specifically, it used statistics and quantitative analysis to investigate whether the source of funding ‘affected the quality of a study, the accuracy of its summary, and the eminence of the journal in which it was published’. According to Goldacre it’s common knowledge that, when it comes to research in medicine, industry-funded studies are ‘more likely to give a positive result for the sponsors' drug'. This was certainly found to be the case here with regard to the research on influenza vaccines. But by looking at where studies are published, what this new research by Tom Jefferson and his colleagues revealed is that the impact factor for industry-funded studies is more than twice that of government-funded studies; and that studies sponsored by the pharmaceutical industry are far more likely to get into the larger, more prestigious journals of supposedly known quality than studies sponsored by the government.

When it comes to the journal impact factor – i.e., how often, on average, research in a given journal is subsequently cited in other research publications according to the ISI Web of Science database - the average for the 92 studies funded by government that were looked at was 3.74, while for the 52 studies with partial or total industry funding it was a much more significant 8.78; and this despite the fact that there was no difference between the two in terms of ‘methodological rigour, or quality’, or ‘where people submit their articles’. This leads Goldacre to conclude that ‘an unkind commentator’ might put forward at least one reason why, for all the supposed rigour of the academic editing and peer-review system of quality control, industry trials might be more successful with their submissions to journals which have higher impact figures and which, as a consequence, are considered to be the ones publishing the best quality articles: it’s quite simply because many ‘journals are businesses, run by very huge international corporations, and they rely on advertising revenue from industry, but also on the phenomenal profits generated by selling glossy “reprints” of studies, and nicely presented translations, which drug reps around the world can then use'.
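Since the comparison here turns on the journal impact factor, it may help to make the metric concrete. The sketch below is a minimal illustration with invented figures (not data from the Jefferson et al. study): it computes the standard two-year impact factor as citations received in a given year to items a journal published in the previous two years, divided by the number of citable items it published in those two years.

```python
# Illustrative sketch only: the two-year journal impact factor for year Y is
# (citations in Y to items published in Y-1 and Y-2) divided by
# (number of citable items published in Y-1 and Y-2).
# The figures used below are invented for demonstration.

def impact_factor(citations_to_prior_two_years: int,
                  citable_items_prior_two_years: int) -> float:
    """Return the standard two-year journal impact factor."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# A hypothetical journal whose 120 articles from the previous two years
# drew 449 citations this year:
print(round(impact_factor(449, 120), 2))  # 3.74
```

On this arithmetic, a journal averaging 8.78 simply attracts well over twice as many citations per article as one averaging 3.74, which is what makes the gap between the government- and industry-funded groups so striking.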

Some of the issues raised in Goldacre’s short piece on funding sources and their relation to impact factor and the perceived prestige of journals tally with the work of a cultural studies scholar from Indiana University in the US, Ted Striphas. Striphas has undertaken some extremely interesting research into the political economy of academic journal publishing in general, and that of cultural studies’ journals in particular. In his text, ‘Acknowledged Goods’, Striphas shows how cultural studies has something of a blind spot when it comes to many of the material conditions and practices which make it possible as a field. Perhaps nowhere is this more the case than with regard to the relationship between cultural studies and the academic book and journal publishing industries – especially as those industries have become increasingly consolidated and profit-intensive in recent years. Striphas provides the example of Taylor and Francis/Informa, whose cultural studies list currently features over 70 journals. Among them are some of the most highly respected titles in the field, including Cultural Studies, Continuum: Journal of Media and Cultural Studies, Communication and Critical/Cultural Studies, Inter-Asia Cultural Studies, Feminist Media Studies, and Parallax. And yet it might come as something of a shock to many of those in cultural studies – especially those who have published in their journals or peer-reviewed manuscripts for them – to learn that:

One of Informa’s subsidiaries, Adam Smith Conferences... specializes in organizing events designed to open the former Soviet republics to private investment. Other divisions of the company provide information, consulting, training, and strategic planning services to major international agricultural, banking, insurance, investment, pharmaceutical, and telecommunications corporations, in addition to government agencies. Take Robbins-Gioia, for instance. The United States Army recently tapped this Informa subsidiary during an overhaul of its command and control infrastructure. The firm was brought in to assess how well the Army had achieved its goal of ‘battlefield digitization’. The United States Air Force, meanwhile, tapped Robbins-Gioia when it needed help improving its fleet management systems for U-2 spy planes. (Striphas)


It may seem unfair to single cultural studies out like this. After all, it’s not the only field to suffer from something of a blind spot when it comes to the politics of its own publishing practices. Far from it. What makes the existence of such a blind spot so noteworthy in this particular instance is that cultural studies prides itself on being a ‘serious’ political project, as one of its most influential exponents, Stuart Hall, puts it.   According to Hall, the political cultural studies intellectual has a responsibility to ‘know more’ than those on the other side; to ‘really know, not just pretend to know, not just to have the facility of knowledge, but to know deeply and profoundly’.  If so, then as far as Striphas is concerned, this injunction quite simply has to include knowing more about ‘the formidable network of social, economic, legal, and infrastructural linkages to the publishing industry that sustains’ cultural studies and its politically engaged intellectuals, and shapes the conditions in which their knowledge and research ‘can – and increasingly cannot – circulate’.  This is information that can be ignored only at the cost of the integrity of cultural studies’ politics, he insists.

As someone who identifies with cultural studies to a large extent,  I’ve been concerned for some time now with the way in which many cultural studies intellectuals, who are otherwise keen to wear their political commitment on their sleeves, are noticeably less keen when it comes to interrogating their own politico-institutional practices.  The marked lack of interest the majority of those in the field have shown in making their research and publications available open access is a case in point.

Why, given the often overtly radical nature of the content of their work, have those in cultural studies been so reluctant to challenge what John Willinsky describes as the ‘complacent and comfortable habits of scholarly publishing’ in this way?  After all, by making the research literature freely available to researchers, teachers, students, union organisers, NGOs, political activists, protest groups, public libraries, community centres and the wider public alike, on a worldwide basis, open access is frequently positioned as having the potential to break down some of the barriers between the institution of the university and the rest of society, as well as between countries in the so-called ‘developed’, ‘developing’ and ‘undeveloped’ worlds. These are all objectives most of those who identify with cultural studies as a political project would presumably be in favour of, given that just as important as knowing more than the other side, according to Stuart Hall, is the political intellectual’s responsibility to transmit ‘those ideas, that knowledge’, to others.  Yet while other movements and practices associated with digital culture and the open dissemination of knowledge and information, such as Creative Commons, free software, open source and peer-to-peer file-sharing, have often been regarded from a cultural studies perspective as providing models for new regimes of culture, new kinds of networked institutions, and even for new forms of social and political organisation, the open access movement has had comparatively little impact on the field to date.

This is all the more surprising when one considers that compared to, say, the task of constructing an ‘open source society’ or forging an organic connection with a larger emerging historical movement, making copies of their research and publications freely available in globally accessible online repositories or journals is something that is relatively easy for the majority of those in cultural studies to actually bring about. Why, then, have those in the sciences, such as Stevan Harnad, proved to be the more apparently progressive, institutionally, socially and politically, in this respect?

Interestingly, Goldacre and Striphas both end their articles with suggestions for future action. For Goldacre, the ideal would be for all drugs research to be made ‘commercially separate from manufacturing and retailing’ and for all journals to be ‘open and free’. In the meantime, as academics are already ‘obliged to declare all significant drug company funding on all academic articles’, he follows Jefferson et al. in proposing that ‘since their decisions are so hugely influential’, all editors and publishers should be asked to ‘post all their sources of income, and all the money related to the running of their journal’, once a year. Striphas, in turn, emphasizes the importance of delving below the surface to discover just who the ‘parents and siblings’ of academic journal publishers are, and what other activities they are involved in. To push the point home he cites as a final example Reed Elsevier, one of the main journal publishers in both the ‘hard’ and social sciences. Until as recently as 2007, Reed Elsevier was facilitating the global arms trade through its event planning arm, Reed Exhibitions, which ‘staged the annual Defense Systems and Equipment International (DSEi) event in the London Docklands, and similar events worldwide’. Indeed, Elsevier was motivated to distance itself from the arms trade only after organized action on the part of ‘Campaign Against Arms Trade, along with groups of scholars associated with The Lancet, Political Geography, and other Elsevier journals’. This leads Striphas to suggest that, by working collectively, it may be possible to put pressure on other academic journal publishers to change their practices too, no matter how large they may be.

So, responding to both the political and pragmatic undertones of these two pieces, my own ‘subversive proposal’ is as follows: that we, as academics, authors, editors, librarians, publishers and so on - not just in medicine and cultural studies, but in the wider arts and humanities, sciences and social sciences - come together to establish an initiative whereby all academic editors and publishers are indeed asked to make freely available, on an annual basis, details both of their sources of income and funding, and of all the sources of financial income and support pertaining to the journals they run. Furthermore, as part of this initiative, I propose we set up a directory equivalent to the DOAJ and SHERPA/RoMEO directories - only in this case documenting all these various sources of income and support, together with information as to who the owners of the different academic journals in our respective fields are and, just as importantly, the other divisions, subsidiaries and activities of their various companies, organisations, institutions and associations.
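Concretely, each entry in such a directory need be nothing more elaborate than a structured record. A minimal sketch in Python follows; the field names and all sample values are my own illustrative inventions, not part of any existing DOAJ or SHERPA/RoMEO schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class JournalDisclosure:
    """One annual full-disclosure record for a journal (illustrative only)."""
    journal: str
    publisher: str
    year: int
    parent_company: Optional[str] = None
    income_sources: List[str] = field(default_factory=list)    # e.g. subscriptions, advertising
    other_activities: List[str] = field(default_factory=list)  # other divisions and subsidiaries

# A hypothetical entry, of the kind scholars could research and contribute.
entry = JournalDisclosure(
    journal="An Example Journal",
    publisher="Example Press",
    year=2011,
    parent_company="Example Holdings Group",
    income_sources=["library subscriptions", "article reprint sales"],
    other_activities=["trade exhibitions and events"],
)
```

The point of the sketch is simply that the information asked for is easy to hold in a common, searchable form once editors and publishers disclose it; the difficulty is political, not technical.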

Let me quickly stress that I’m not suggesting all corporately owned journals are the politically co-opted tools of global capitalism, or that smaller, independently scholar-produced journals, or those published on a non-profit basis by university presses, learned societies and scholarly associations, somehow escape all this. None of this emerges out of a sense of moralism on my part. Some of my best friends are the editors of journals published by large, for-profit, multinational presses, and I am myself on the editorial board of a number of Taylor and Francis journals. It’s not therefore my intention to imply that anyone can be situated sufficiently outside the forces of global capital to be completely politically and ethically ‘pure’ in this respect. (No one is innocent, as the Sex Pistols used to say.)

Nevertheless, I believe such a campaign for ‘full disclosure’ would be of huge assistance in furnishing scholars and researchers in all areas - the humanities, the sciences and the social sciences - with the knowledge that will enable them to make responsible political and ethical decisions as to who they want to publish with or undertake peer review for - and thus who they want to give their free labour to. For instance, as a result of this initiative and the information obtained, some scholars may take a decision not to subscribe to, publish in, edit, peer review manuscripts for or otherwise work for academic journals owned by multinationals involved in supporting the military; or journals that have high library subscription charges; or indeed journals that refuse to endorse, as a bare minimum, the self-archiving by authors of the refereed and accepted final drafts of their articles in institutional open access repositories. (Or they may of course decide that none of these issues is of particular concern to them and continue with their editorial and peer-review activities as before.)

At the very least, I believe that such an ‘Open Scholarship Full Disclosure Initiative’ would encourage both the editors and publishers of journals, and the owners of academic journal publishers and their siblings and subsidiaries, to behave more responsibly in political and ethical terms. What’s more, it would be capable of having an impact even if the editors and publishers of the larger, more established and prominent journals refused to play ball and provide full disclosure themselves. I say this for a number of reasons: because such an initiative would raise awareness of the politics of journal and publisher funding and ownership more generally, regardless; because those editors and publishers who don’t provide full disclosure would risk appearing as if they have something to hide; and because it would also, hopefully, have the effect of encouraging more scholars to conduct research into where the funding of such journals comes from, who their parent companies, institutions and organisations are, and what other activities they are involved in and connected to, and to make the results of their research widely known.

It’s also worth emphasising that such an initiative would not require a huge amount of time and effort on our collective part. After all, ‘Reed Elsevier, Springer, Wiley-Blackwell, and Taylor & Francis/Informa... publish about 6,000 journals between them.’ So to cover 6,000 journals, or somewhere between a quarter and a fifth of all peer-reviewed journals, we only need to research and disclose details of four corporations! That’s one thing we have to thank the processes of conglomeration and consolidation in the academic journal publishing industry for, at least.


(The above text first appeared in Against the Grain, June, 2009)


Thursday
Jan272011

On the limits of openness VI: has critical theory run out of time for data-driven scholarship?

Something that is particularly noticeable about many instances of this turn to data-driven scholarship - especially after decades in which the humanities have been heavily marked by a variety of critical theories: Marxism, feminism, psychoanalysis, structuralism, post-colonialism, post-Marxism - is just how difficult their exponents find it to understand computing and the digital as anything more than tools and techniques, and thus how naive and lacking in meaningful critique their work often is (Higgen). Of course, this (at times explicit) repudiation of criticality could be viewed as part of what makes certain aspects of the digital humanities so intriguing at the moment. Exponents of the computational turn are precisely not making what I have elsewhere characterised as the anti-political gesture of conforming to accepted (and frequently moralistic) conceptions of politics that have been decided in advance, including those which see it only in terms of power, ideology, race, gender, class, sexuality, ecology, affect and so forth. They are responding to what is perceived as a fundamentally new cultural situation, and the challenge it represents to our traditional methods of studying culture, by avoiding such conventional gestures and experimenting with the development of fresh methods and approaches for the humanities instead.

In a series of posts on his Found History blog, Tom Scheinfeldt, Managing Director at the Center for History and New Media at George Mason University, positions such scholarship very much in terms of a shift from a concern with theory and ideology to a concern with methodology:

I believe... we are entering a new phase of scholarship that will be dominated not by ideas, but once again by organizing activities, both in terms of organizing knowledge and organizing ourselves and our work... as a digital historian, I traffic much less in new theories than in new methods. The new technology of the Internet has shifted the work of a rapidly growing number of scholars away from thinking big thoughts to forging new tools, methods, materials, techniques, and modes of work which will enable us to harness the still unwieldy, but obviously game-changing, information technologies now sitting on our desktops and in our pockets.

In this respect there may well be a degree of ‘relief in having escaped the culture wars of the 1980s’ - for those in the US especially - as a result of this move ‘into the space of methodological work’ (Croxall) and what Scheinfeldt reportedly dubs ‘the post-theoretical age’. The problem is, without such reflexive critical thinking and theories, many of those whose work forms part of this computational turn find it difficult to articulate exactly what the point of what they are doing is, as Scheinfeldt readily acknowledges.

Witness one of the projects I mentioned earlier: the attempt by Dan Cohen and Fred Gibbs to text-mine all the books published in English in the Victorian age (or at least those digitized by Google). Among other things, this allows Cohen and Gibbs to show that use of the word ‘revolution’ in book titles of the period spiked around ‘the French Revolution and the revolutions of 1848’. But what argument is it that they are trying to make with this? What is it we are able to learn as a result of this use of computational power on their part that we didn’t know already and couldn’t have discovered without it?
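The underlying operation here - counting date-binned occurrences of a keyword across bibliographic records - can be sketched in a few lines. The records below are invented placeholders, not data from the Cohen and Gibbs project:

```python
from collections import Counter

# Hypothetical (title, year) records standing in for a digitised corpus.
titles = [
    ("Reflections on the Revolution in France", 1790),
    ("The Revolution of 1848", 1849),
    ("A History of Steam", 1850),
    ("Revolution and Reaction in Europe", 1849),
]

def decade_counts(records, keyword):
    """Count titles containing `keyword`, grouped by decade."""
    counts = Counter()
    for title, year in records:
        if keyword.lower() in title.lower():
            counts[(year // 10) * 10] += 1
    return dict(counts)

print(decade_counts(titles, "revolution"))  # {1790: 1, 1840: 2}
```

The sketch makes visible how little interpretation the counting itself involves: the spike appears, but the argument about what the spike means still has to come from somewhere else.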

Elsewhere, in a response to Cohen and Gibbs’s project, Scheinfeldt suggests that the problem of theory, or the lack of it, may actually be a matter of scale and timing:

It expects something of the scale of humanities scholarship which I’m not sure is true anymore: that a single scholar—nay, every scholar—working alone will, over the course of his or her lifetime ... make a fundamental theoretical advance to the field.

Increasingly, this expectation is something peculiar to the humanities. ...it required the work of a generation of mathematicians and observational astronomers, gainfully employed, to enable the eventual “discovery” of Neptune… Since the scientific revolution, most theoretical advances play out over generations, not single careers. We don’t expect all of our physics graduate students to make fundamental theoretical breakthroughs or claims about the nature of quantum mechanics, for example. There is just too much lab work to be done and data to be analyzed for each person to be pointed at the end point. That work is valued for the incremental contribution to the generational research agenda that it is.

Yet notice how theory is again being marginalized in favour of an emphasis on STM subjects, and the adoption of expectations and approaches associated with mathematicians and astronomers in particular.

This is not to deny the importance of experimenting with the new kinds of knowledge, tools, methods, materials and modes of work and thinking digital media technologies create and make possible, in order to bring new forms of Foucauldian dispositifs, what Bernard Stiegler calls hypomnémata (i.e. mnemonics, what Plato referred to as pharmaka, both poisons and cures), or what I am trying to think here in terms of media gifts, into play. And I would potentially include in this process of experimentation techniques and methodologies drawn from computer science and other related fields, such as information visualisation, data mining and so forth. Yes, of course, it is quite possible that in the future ‘people will use this data in ways we can’t even imagine yet’, both singularly and collaboratively (Stowell). Still, there is something intriguing about the way in which many defenders of the turn toward computational tools and methods in the humanities evoke a sense of time in relation to theory.

Take the argument that critical and self-reflexive theoretical questions about the use of digital tools and data-led methodologies should be deferred for the time being, lest they have the effect of strangling at birth what could turn out to be a very different form of humanities research before it has had a chance to properly develop and take shape. Viewed in isolation, it can be difficult, if not impossible, to decide whether this particular form of ‘limitless’ postponement is serving as an alibi for a naive and rather superficial form of scholarship; or whether it is indeed acting as a responsible, political or ethical opening to the (difference, heterogeneity and incalculability of the) future, including the future of the humanities. After all, the suggestion is that now is ‘not the right time’ to be making any such decision or judgement, since we cannot ‘yet’ know how humanists will ‘eventually’ come to use these tools and data, and thus what data-driven scholarship may or may not turn out to be capable of, critically, politically, theoretically.

This argument would be more convincing as a responsible, political or ethical call to leave the question of the use of digital tools and data-led methodologies in the humanities open if it were the only sense in which time was evoked in relation to theory in this context. Significantly, however, it is not. Advocates of the computational turn evoke time in a number of other, and often competing, senses too. These include:

a) that the time of theory is over, in the sense a particular historical period or moment has now ended (e.g. that of the culture wars of the 1980s);

b) that the time for theory is over, in the sense it is now the time for methodology;

c) and that the time to return to theory or for theory to (re-)emerge in some new, unpredictable form which represents a fundamental breakthrough or advance, although possibly on its way, has not arrived yet, and cannot necessarily be expected to do so for some time, given that ‘most theoretical advances play out over generations, not single careers’.

All of which puts a very different inflection on the view of theoretical critique as being at best inappropriate, and at worst harmful, to data-driven scholarship. Even a brief glance at the history of theory’s reception in the English-speaking world reveals that those who announce that its time has not yet come, or is already over, that theory is in decline or even dead, and that we now live in a post-theoretical era, are merely endeavouring to keep it at a (temporal) distance. Rather than having to ask rigorous, critical and self-reflexive questions about their own practices and their justifications for them, those who position their work as being either pre- or post-theory are almost invariably doing so because it allows them to continue with their own preferred techniques and methodologies for studying culture relatively uncontested. Placed in this wider context, far from helping to keep the question concerning the use of digital tools and data-led methodologies in the humanities open (or having anything particularly interesting to say about theory), the rejection of critical-intellectual ideas as untimely can be seen as moralizing and conservative.

In saying this I am reiterating an argument made by Wendy Brown in the sphere of political theory. Yet can a similar case not be made with regard to the computational turn in the humanities, to the effect that the ‘rebuff of critical theory as untimely provides the core matter for the affirmative case for it’? Theory is vital from this point of view, not for conforming to accepted conceptions of political critique which see it primarily in terms of power, ideology, race, gender, class, sexuality, ecology, affect and so forth, or for sustaining conventional methods of studying culture that may no longer be appropriate to the networked nature of 21st century post-industrial society. Theory is vital ‘to contest the very sense of time invoked to declare critique untimely’:


If the charge of untimeliness inevitably also fixes time, then disrupting this fixity is crucial to keeping the times from closing in on us. It is a way of reclaiming the present from the conservative hold on it that is borne by the charge of untimeliness.

To insist on the value of untimely political critique is not, then, to refuse the problem of time and timing in politics but rather to contest settled accounts of what time is, what the times are, and what political tempo and temporality we should hew to in political life. 

(Wendy Brown, Edgework: Critical Essays on Knowledge and Politics (Princeton and Oxford: Princeton University Press, 2005) p.4)

Wednesday
Jan122011

On the limits of openness V: there are no digital humanities

Let’s bracket the many questions that can be raised about Deleuze’s thesis on the societies of control (some of which can also be raised about Lyotard’s account of the postmodern condition), and about the reasons it has been taken up and used so readily within the contemporary social sciences, and social theory especially. For the time being, let us pursue a little further the hypothesis that the externalization of knowledge onto computers, databases, servers and the cloud is involved in the constitution of a different form of both society and human subject.

To what extent do such developments cast the so-called computational turn in the humanities in a rather different light to the celebratory data-fetishism that has come to dominate this rapidly emerging field of late? Is the direct, practical use of techniques and methodologies drawn from computer science and fields related to it here too helping to produce a major alteration in the status and nature of knowledge, and indeed of the human subject? I’m thinking not just of the use of tools such as Anthologize, Delicious, Juxta, Mendeley, Pliny, Prezi and Zotero to structure and disseminate scholarship and learning in the humanities, but also of the generation of dynamic maps of large humanities data sets, the employment of algorithmic techniques to search for and identify patterns in literary, cultural and filmic texts, and the way in which the interactive nature of much digital technology is enabling user data regarding people’s creative activities with this media to be captured, mined and analyzed by humanities scholars.

To be sure, in what seems to be almost the reverse of the situation we saw Lyotard describe, many of those in the humanities - including some of the field’s most radical thinkers - do now appear to be looking increasingly to science (and technology and mathematics) to provide their research with a degree of legitimacy. Witness Franco ‘Bifo’ Berardi’s appeal to ‘the history of modern chemistry on the one hand, and the most recent cognitive theories on the other’, for confirmation of the Compositionist philosophical hypothesis in his 2009 book, The Soul at Work: ‘There is no object, no existent, and no person: only aggregates, temporary atomic compositions, figures that the human eye perceives as stable but that are indeed mutational, transient, frayed and indefinable’. It is this hypothesis, derived from Democritus, that Bifo sees as underpinning the methods of both Deleuze and Guattari’s schizoanalysis and the Italian Autonomist theory on which his own Compositionist philosophy is based. It is interesting, however, that Bifo should now feel the need to turn, albeit briefly and almost in passing, to science to underpin and confirm it.

Can this turn toward the sciences (if there has indeed been such a turn, which is by no means certain) be regarded as a response on the part of the humanities to the perceived lack of credibility, if not obsolescence, of their metanarratives of legitimation: the life of the spirit and the Enlightenment, but also Marxism, psychoanalysis and so forth? Indeed, are the sciences today to be regarded as answering many humanities questions more convincingly than the humanities themselves?

While ideas of this kind appear just that little bit too neat and symmetrical to be entirely convincing, this so-called ‘scientific turn’ in the humanities has been attributed by some to a crisis of confidence. It is a crisis regarded as having been brought about, if not by the lack of credibility of the humanities’ metanarratives of legitimation exactly, then at least in part by the ‘imperious attitude’ of the sciences. This attitude has led the latter to colonize the humanists’ space in the form of biomedicine, neuroscience, theories of cognition and so on.  Is the turn toward computing just the latest manifestation of, and response to, this crisis of confidence in the humanities?

Can we go even further and ask: is it evidence that certain parts of the humanities are attempting to increase their connection to society, and to the instrumentality and functionality of society especially? Can it merely be a coincidence that such a turn toward computing is gaining momentum at a time when governments such as that of the UK are emphasizing the importance of the STM subjects and withdrawing support and funding for the humanities? Or is one of the reasons all this is happening now because the humanities, like the sciences themselves, are under pressure from government, business, management, industry and increasingly the media to prove they provide value for money in instrumental, functional, performative terms? (Is the interest in computing a strategic decision on the part of some of those in the humanities? As the project of Cohen and Gibbs shows, one can get funding from the likes of Google. In fact, ‘last summer Google awarded $1 million to professors doing digital humanities research’.)

To what extent, then, is the take-up of practical techniques and approaches from computer science providing some areas of the humanities with a means of defending themselves in an era of global economic crisis and severe cuts to higher education, through the transformation of their knowledge and learning into quantities of information - deliverables? Following Federica Frabetti, can we even position the computational turn as an event created precisely to justify such a move on the part of certain elements within the humanities? And does this mean that, if we don’t simply want to go along with the current movement away from what remains resistant to a general culture of measurement and calculation, and toward a concern to legitimate power and control by optimizing the system’s efficiency, we would be better off using a term other than ‘digital humanities’? After all, as Frabetti points out, the idea of a computational turn implies that the humanities, thanks to the development of a new generation of powerful computers and digital tools, have somehow become digital, or are in the process of becoming digital, or are at least coming to terms with the digital and computing. Yet what I am attempting to show here, by drawing on the philosophy of Lyotard and others, is that the digital is not something that can now be added to the humanities - for the simple reason that the (supposedly pre-digital) humanities can be seen to have had an understanding of, and engagement with, computing and the digital for some time now.