Performance-as-Research – A Reply to Luk Vaes

The Belgian pianist and scholar Luk Vaes has published a new blog post, following two previous ones (here and here), responding both to the announcement of the debate on composition, performance and research on November 25th, and to my article on the subject, which is one of the texts for discussion there. I would like to publish here the response I have also added in the comments section of his post.

I do believe that Vaes, coming from a context of ‘artistic research’ rather than ‘practice-as-research’, is inclined towards too-fixed and narrow (and sometimes counterproductive) conceptions of research, at least implicitly. But I would also like to ask him: does he think practice-based outputs alone can ever suffice as research, or only when accompanied by substantial written documentation? This debate has recurred often in the wider literature on practice-as-research. And should these standards be applied differently to composition and performance?

The major objection to requiring documentation of practice is that research councils, academic promotion panels and others simply read the documentation and do not bother to listen to or watch the actual artistic work involved. This is a very real danger, especially when (outside of the REF) non-artists may be involved in the decisions. If documentation is required as well as artistic output, only a mode of judging which looks at both in detail could ever be satisfactory.



Luk Vaes writes:

I don’t agree with the jump from “opening up research questions” to actually being “research as a result”, nor do I think performance-based research should be considered on the same level (much legitimate systematic musicology – e.g. performance science – is performance-based or -led). I more than agree with that “additional demand”, as I find the explication of the research to be essential to its identity. As long as it is impossible for me to assess how (and how exactly) Ian has learned from Gieseking, Cziffra, et al., how exactly this has opened up new questions, how exactly this worked in a certain way (and not in perhaps certain other ways), what the conclusions are, etc., it is not worth it to use a new term to describe the age-old process he described. Research is a collective effort, with peer-interaction as a fundamental, i.e. peer-based and peer-oriented. Contrary to matters of composition, I can consider myself to be a peer of Ian’s, but, from his performances, I cannot tell any of the above to a level that informs me about his research.

As far as the first ‘leap’ is concerned, let me put the ‘research as a result’ comment in context:

But my approach is far from uncommon, and in this sense the articulation of practice in research terms is a positive and productive activity. It may be less spectacular than some of the wilder fringes of theatre and visual performance – such as Lee Miller and Joanne “Bob” Whalley’s joint PhD project, collecting urine-filled bottles on the M6, replacing them with other detritus, renewing their wedding vows in a service station, then grounding this in the thought of Deleuze and Guattari, Bakhtin, dialogism, heteroglossia and semiotic multi-accentuality, deliberately framed in such a way as to frustrate Popper’s criteria of falsifiability – but is no less ‘research’ as a result.

The only point here is that whilst critical engagement with aesthetic, technical and interpretive questions doesn’t look as spectacular as the above, such work is no less deserving of being considered research.

As far as practice-based research is concerned, this is a bit of a nebulous term, for sure; I had in mind in this context written work produced by practitioners relating to their own work, rather than just any musicology dealing with performance. But we need a more specific term for this; ‘practice-as-research’ is often used for it, in my view erroneously.

As far as needing to understand how the engagement with Gieseking, Cziffra, or whoever impacts upon the final output (which might take the form of a marked negation of aspects of this playing, or the adoption and mediation of aspects which are far from obvious): well, a piece of written work might be able to explain this, and such research is useful, but one might say exactly the same about being able to know how complex row transformations impact upon a composition when these are not perceptible without guidance. Note that earlier in my article I say:

At a REF panel discussion in February 2015, it was argued that the REF can entail a large amount of financial support for innovative practice-based work. There remain various obstacles towards achieving this (not least from individual institutions inclined to downgrade practice-based work in general), but it is not an unrealistic goal. If this requires practitioners to articulate ways in which their work has value and consequences not just in and of itself but also to others as a contribution to knowledge, this seems a fair price to pay.

but also:

Nor does musical practice become research simply by virtue of being accompanied by a programme note, which funding and other committees can look at while ignoring the practical work.

and also:

I have some doubts as to whether some composition- and performance-based PhDs, especially those not even requiring a written component, are really equivalent in terms of effort, depth and rigour to the more conventional types.

Others will argue that the final output alone should suffice to demonstrate the quality of the research; I am not going that far, though I do see the danger of the documentation of the process being judged practically independently of the result. To convince you that engagement with various other musicians’ work, in myriad different ways, has significantly informed my practice is something I do not think would be difficult given sufficient space (certainly more than the 300 words required by the REF). This is not a reflection on the quality of the performance, but on whether the process involved in its creation can fairly be judged as research.

I bring this up primarily, though, because composers are frequently able simply to submit their compositions with a 300-word statement, and that suffices to justify their work as research, in a way which is much rarer for performers. Numerous composers working in UK university departments produce only compositions, no written work, whilst departmental expectations of performers in this respect vary significantly. I think this is a major inequity, and also that these debates in a musical context are too heavily dominated by composers.

What we are sometimes left with is that only the most obvious (and often extremely basic) aspects of performance are considered ‘research’ – employing a few extended techniques, using a slightly new type of instrument, playing some unusual rhythms, and so on. The dutiful performer-scholar will play this music, write up a short amount of pragmatic ‘how to do it’ information, and leave the much more complex issues of interpretation, style, genre and aesthetics to a handful of over-general and meaningless platitudes (‘it is important to phrase this music well’, ‘it should still be beautiful’, ‘one should make it sound like a real piece of music’, and so on). What I am trying to argue is that the whole business of fashioning and crafting strategies for these latter aspects is more deserving of being considered research than simply writing something like ‘I tried playing this sonority by using this object to stop the string. I played it to the composer like that, and then with another object, and they preferred the first, so we went with that.’ The latter is really just a type of skills training rather than critical research.