Rahel Aima on Gulf Futurism:
Gulf futurism, as I understand it, is conceptualised in the mould of Marinetti’s Italian futurism, and inherits many of the same touchstones. All of its seductiveness: sun, sand, and solar-sintered glassy desolation of the Arabian Gulf at the extreme promontory of the millennia. All the beautiful/callous brutality, all the proto-fascism of a society that privileges success and speed over human life.
At base, Gulf futurism is “plus ça change futurism,” all wrapped up in what a friend has dubbed “flying force fields of neo-Arabness.” It’s not imagining a future so much as mapping shards of future detritus—imagery strongly defined-as-future by Western culture, as you put it—in the present. It’s an aesthetic scaffolding that reproduces all the injustices, structural degradation and racial erasures of the present.
Anne Galloway has a fascinating essay up on Ethnography Matters:
By way of background, I think all ethnographers are taught that Ursula Le Guin’s father was the famous American anthropologist Alfred Kroeber, and her mother an accomplished writer, so it comes as no surprise to us that her culturally rich stories are so capable of rendering the strange as familiar, and the familiar as strange. Le Guin has written many excellent essays, but the one that currently preoccupies my research and gives me my definition of “fantastic” is called “The Critics, the Monsters, and the Fantasists.” Ostensibly written in defense of fantasy narratives, it is also a brilliant critique of the kind of modernist realism that informs so much of today’s fiction, design and design fiction. Le Guin reminds us that not only is our distinction between factual and fictional narrative historically quite recent, but that we’ve forgotten how to even read fantasy. Imagine, she suggests, if we were to “judge modern realist fiction by the standards of fantasy.” We would find ourselves, she continues, faced with “a narrow focus on daily details of contemporary human affairs; trapped in representationalism, suffocatingly unimaginative, frequently trivial, and ominously anthropocentric.” (When I first read this I was sure she was describing most of the social and cultural research I had to read for my postgraduate studies, not realist fiction!)
Le Guin’s essay challenges us to probe what exists beyond realism, beyond anthropocentrism, and to carefully question what this space can and cannot bring forth. … Feminist critiques of science have long demonstrated that scientific rationality is connected to practices and values of modern, affluent, male-dominated, Western culture. Indeed, ‘good’ science fiction is often predicated on its ability to be scientifically plausible—just as ‘good’ ethnographic fiction is meant to be culturally plausible. To escape, or exceed, these ways of thinking and doing, then, requires the sort of critique seen in feminist science fiction and the incredible, unruly, premodern sensibility that infuses the fantastic.
Le Guin (and Galloway) seem to be saying that it’s not enough simply to call into question “realist” narratives that are “anthropocentric” and “trapped in representationalism.” If that’s our only move, we still find ourselves in a cul-de-sac defined by “modern, affluent, male-dominated, Western culture.” Imagining an alternative requires the resources of the “fantastic,” with its “unruly, premodern sensibility.”
Galloway goes on to say that the speculative approach does not involve providing possible alternatives. If the goal of speculative genres (architecture, design, literature, science fiction, etc.) is not to offer an alternative, what might it be? Towards the end of the essay, Galloway writes: “although fantastic ethnography and speculative design don’t have to derive their plausibility from realism or rationality, they should move people—because the space of the fantastic and the speculative is, after all, affective space, or the space of potential.”
Ok. But doesn’t all of this also describe what we usually think of as the “aesthetic” in general? What I find interesting here is that Galloway’s argument is not really about literature (speculative or realistic) at all. It’s about design. Since the 19th century, aesthetics (autonomous, decadent, useless) and design (instrumental, bourgeois, utilitarian) have played at an intricate counterpoint. We’re used to the arts being criticized for not being sufficiently useful or relevant. Now, finally, design is being criticized for not being sufficiently useless.
The Book Globe takes the “setting” of every book that has won or been shortlisted for the Man Booker prize and plugs it into a Google Map. Interesting to think about what constitutes a “setting” in this context, of course.
image via The Guardian
Chris Baraniuk at The Machine Starts writes:
Recently Reuters listed several apps which aim to challenge the idea that digital media is eternal. Yet none of this is an embrace of the true glitch. Again, we are talking about designing experiences which circumvent the established norms of this technology in order to satisfy specific use cases.
And that’s the power that we have with digital media. We can design obsolescence if we wish, we may pre-program decay when we want – but until recently there was no reason to. As time goes on, however, we will come up with more excuses to do precisely that. Glitch art is just the beginning of our culture leaning towards a world in which the permanence of the digital is no longer assumed. The mangled JPEGs and ruptured codecs which frustrated us in the past and which inspire artists of the present will be demanded by consumers of the future.
These are some fascinating reflections about the temporality of digital media vis-à-vis the glitch. It goes without saying that planned obsolescence has been part of the experience of digital media from the beginning. And anyone who has experienced a hard drive failure, or ever saved anything to a floppy disk, knows that digital artifacts are, contrary to the hype, highly impermanent. So it’s arguable to what extent impermanence has always been a feature and not a bug.
That said, the rise of temporary social media does seem to mark a shift of terrain in which what was formerly seen as the exception, the glitch, is now the rule. (Cf. Paul Virilio on the generalized accident.)
I’m just getting around to reading Rebecca Solnit’s provocative essay alluding to Virginia Woolf:
In or around June 1995 human character changed again. Or rather, it began to undergo a metamorphosis that is still not complete, but is profound – and troubling, not least because it is hardly noted. When I think about, say, 1995, or whenever the last moment was before most of us were on the internet and had mobile phones, it seems like a hundred years ago. Letters came once a day, predictably, in the hands of the postal carrier. News came in three flavours – radio, television, print – and at appointed hours. Some of us even had a newspaper delivered every morning.
There’s a lot of connecting the dots to do here. It doesn’t seem like too much of a stretch to say that, just as in 1910, a revolution in media and communications technologies is currently reshuffling much of the potential range of human experience. But that’s not entirely what’s at issue. One of the keys to Woolf’s proclamation that “on or about December 1910 human character changed” is the word character. Woolf was thinking about social change as a literary question, as posing a challenge to the conventional means by which the novel represents character.
This challenge provides the point of departure for Woolf’s famous essay “Mr. Bennett and Mrs. Brown” in which she speculates about the means available to a modern novelist such as Arnold Bennett to represent the character of a fictional woman, Mrs. Brown. So the question is: who are Mr. Bennett and Mrs. Brown today? What kind of challenge to “human character” is posed by Facebook and Twitter?
Once again, Melissa Gregg:
The difference with making today is the source of the cultural and financial investment, namely Silicon Valley. The notion that ‘everyone is a maker’ keeps the hacker ethos alive while drawing on the more recent elevation of ‘you’ as the active pro-sumer. In addition, venture capital and media coverage translate to serious corporate and institutional resources. If ‘make do and mend’ served the propaganda needs of a state-sanctioned war machine, it was ideological state apparatuses (education primarily) that determined the curriculum and gender norms for home economics vs. trade classes.
Today’s maker ‘movement’ is an evangelist’s response to the deficiencies of the state. The standardization of schooling to meet performance metrics has led to a drain on the manual and creative aspects of education, such that learning is limited to knowledge that can be tested. This is one way that data exerts agency on institutions. Metrics matter more than content. By contrast, maker kits and a culture of making beyond the classroom each offer a solution to pedagogical anemia, a set of tools for an emerging trade.
The broader impact of off-shoring in the US economy has turned manufacturing into a problem: when it exists at all, (non-creative) making is outsourced to the so-called developing world.
Lots to say about this.
Idealizing and romanticizing “making” is nothing new; it goes back at least to Plato. With Marx, this kind of thinking took on an especially political tint, and after Ruskin an aesthetic one. Since the 19th century, we’ve had DIY movements of various kinds, from Arts and Crafts to Punk. That said, “why now?” is a good question, and these are compelling points. Still, the idea that valorizing maker culture is merely a neoliberal complement to austerity may overlook the long history of pastoralizing around “making,” including not least Richard Sennett’s beautiful but extremely Ruskinian The Craftsman.