It seems we are now at the point where the great theoretical experiment with identity politics and critical studies that has been conducted in our universities for the last couple of decades is finally beginning to bear some empirical fruit…and the results are, as this report shows, troubling.
I’ll admit I’m somewhat late to this party (story of my life), as I only really began to have serious concerns about the harm the humanities was doing to students about five or six years ago, when I started work on posthumanism and the posthumanities. Some more prescient than me saw this coming a long way back and were sneered at for their trouble; but it suited many, many more to turn a blind eye and build their careers.
What’s been decisive for me on this topic is the emergence of evidence showing that many of the so-called ‘critical theories’ (and the dogma they foster) peddled in humanities programs have actually been harmful to the humanities. The great theoretical experiment has failed, and it seems to me irresponsible to keep pushing the ideas that drove it: instead, we need to figure out what to do to stop things getting worse.
The problems facing literary studies as a result of this failed experiment are particularly bad, according to the MLA study linked above:
Jobs in English are down 10.7 percent from last year.
Jobs in foreign languages are down 12 percent from last year.
English had 851 jobs listed last year, which is lower than any year on the chart (which goes back to 1975-76).
Foreign languages came in at 808, which is also lower than any other year listed.
Both areas are well below the job numbers from the year after the recession hit: 22 percent fewer in English and 21 percent fewer in foreign languages.
This has been a problem long in the making for literary studies: student enrollments are dropping (why study something that will only indebt you to the tune of thousands of dollars and won’t even qualify you for a job at Starbucks?); high-quality hires are not being made (and in some cases I am familiar with, mediocre faculty members don’t want high-achievers around to highlight their own lack of productivity; academic politics can be as catty as Mean Girls); and course offerings are so niche that they are utterly divorced from reality (they are frequently driven by faculty’s antiquarian and activist research agendas); the list goes on, unfortunately.
And, as Jonathan Haidt recently pointed out, the problems caused by identity politics and ‘critical thinking’ are no longer confined to the universities; they are now also showing up in high schools, fuelled in no small part by teachers who are themselves the products of Education programs saturated by ‘critical thinking’ dogma. So, ill-prepared students will be leaving high school without necessary basic skills and arriving at universities armed with tools that are designed for activism, not learning, where they will encounter staff who don’t think of teaching as their primary job (most of whom have never even been trained to teach).
Things may be bad now, but they’re likely only going to get a whole lot worse when the broken parts overlap to form one big dysfunctional system.
If you’ve been unsettled by the recent shameful display at Wilfrid Laurier University, then this is a good place to find ideas about how to break up the horrifying and destructive groupthink that is gripping our universities and, frankly, ruining students’ future prospects. It’s a wide-ranging and utterly engaging discussion between Jordan Peterson and Jonathan Haidt on what’s going wrong at universities and what needs to be done to start setting things right.
Most recently, the media reported that Lindsay Shepherd, a grad student and teaching assistant in Laurier’s communications program, had run afoul of her university bosses while instructing a first-year class. She showed a clip of a debate between U of T professors Jordan Peterson and Nicholas Matte. The debate, which previously aired on public TV, had Peterson explaining his objections to the use of non-gendered pronouns while Matte argued in favour.
Shepherd showed a three-minute clip to spark discussion but it seems someone in class complained that the ideas of Peterson made them feel unsafe. Shepherd found herself called before a hostile tribunal of her thesis adviser, the program chair, and the manager of the university’s Gendered and Sexual Violence Prevention and Support Office.
Quotes from the meeting, which Shepherd recorded, show that she was subjected to a barrage of accusations as her motives and character were called into question. She was ultimately told she was not allowed to expose students to views like those of Peterson because, according to her thesis adviser, discussions that create “an unsafe learning environment” are “not up for debate.”
You can listen to excerpts from the recording Lindsay made here. And, to her credit, Lindsay holds up quite well, considering the tone of her, er, inquisitors.
Now, if you’re thinking that this all sounds uncomfortably reminiscent of a Struggle Session… well, I’d agree with you.
I know we all have bad days. After all, we’re only human, trying to do our best; we can be tired or irritable some days due to stresses and strains, and we can fall short of being the best versions of ourselves. But were both these professors having such bad days that neither stopped to think that maybe this wasn’t a terribly good–never mind constructive or fair–way to treat someone who said she took a neutral stance on the video and is, after all, one of their students?
And, realistically, is what Jordan Peterson has to say really dangerous? I mean, come now, are the comparisons to Hitler at all justified? Peterson is as hard on Hitlerism and Fascism as he is on Communism in his talks; is that perhaps the issue for these inquisitors? To my ears Peterson’s work always sounds well-intentioned: part well-informed psychology and part knock-off René Girard, the French theorist of violence and mimetic desire whose work I found quite useful when I wrote this several years ago. And surely we could all spend a bit more time cleaning our rooms?
But I’d also ask is this really what Canadians–who, after all, are footing a serious slice of the bill–want (deserve?) to see happening in their universities? Should Canadian universities be ‘safe spaces’ where certain beliefs and theories simply must not be challenged? Places where anyone who even tries to question those beliefs and theories with evidence is immediately characterized as some sort of threatening bigot? Or should Canadian universities be places where academic freedom is enshrined? Places where difficult and complex issues can be honestly discussed and openly debated, using the best evidence available, to help us ascertain the truth?
I ask because this happened a very short time ago at UBC.
‘Of all tyrannies, a tyranny sincerely exercised for the good of its victims may be the most oppressive. It would be better to live under robber barons than under omnipotent moral busybodies. The robber baron’s cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end for they do so with the approval of their own conscience. They may be more likely to go to Heaven yet at the same time likelier to make a Hell of earth. This very kindness stings with intolerable insult. To be “cured” against one’s will and cured of states which we may not regard as disease is to be put on a level of those who have not yet reached the age of reason or those who never will; to be classed with infants, imbeciles, and domestic animals.’
― C.S. Lewis, God in the Dock: Essays on Theology (Making of Modern Theology).
Further to these blog posts, I have been thinking a bit more about the harm the ideological academic bubble does to students and universities.
The ideological academic bubble functions a lot like the Google search bubble, where any new information is pre-filtered according to previous searches and preferences. The result is a sort of echo-chamber where only messages that support a particular viewpoint make it past the filter, creating a feedback loop, where nothing especially new or challenging can get through.
It is perhaps too easy to dismiss this ideological academic filtering as yet another incarnation of ‘ivory tower’ criticism that is often directed at universities. What makes this filtering different, however, is that it has the real potential to cut students off from future prospects, goals and careers, thereby narrowing their horizons, rather than expanding them. Once the filtering becomes especially strong—as it has become in the humanities and much of the social sciences—it can cut students off from actual reality, producing out-of-touch graduates with notions and attitudes shaped by ideologies ‘backed-up’ by cherry-picked data/examples and which seem to explain everything. As I’ve suggested here, such graduates are vulnerable to the Dunning-Kruger effect.
And this filtering is not just harmful to the students; an educational system that has lost touch with reality plays into the hands of budget-conscious administrators and politicians who may see defunding certain departments, faculties and perhaps even entire universities as a convenient answer to the mounting budgetary crises caused by aging baby-boomers.
But, it is easy to be negative. What can be done to address academic ideological filtering?
As I currently see it, popping the academic ideological bubble involves explicitly addressing the origins of the filtering, which owe much to the ‘split’ between the humanities and the sciences, which in turn can be said to break down along the subjective/objective distinction. The humanities, as well as certain strands of the social and human sciences, tend to enforce the traditional subjective/objective split by: 1) invoking ‘phenomenology,’ a philosophico-theoretical construct that privileges subjective experience over objective reality; 2) relying on the theory of the blank slate, which holds that humans are born as ‘blank slates,’ and that all their differences—sexual, racial, and so on—are the result of social and cultural factors and pressures, rather than anything innate; and 3) employing symptomatic reading, which polices cultural artifacts, science, institutions, etc., looking for ‘symptoms’ of ideological bias. And, when these three things work in combination, they can be very dangerous because they cut the ground out from under any appeal to objective reality and truth.
So, removing the filters that give rise to the academic ideological bubble would involve dislocating phenomenology, ‘blank-slatism’ and symptomatic reading. And by far the best way to loosen the grip of this ideology on the humanities is to turn back to objective reality and the tools that have served humanity very well for the last couple of hundred years: the data, tools and methods of the sciences. In short, there must be an attempt made to help those in the humanities and the social sciences become more literate in science. The point, as I’ve argued in Posthumanism: A Guide for the Perplexed, is to have the humanities and the sciences not just talk to each other more, but to have meaningful conversations. That said, I’ll add that the sciences have more to teach the humanities than vice versa: the sciences are not so vulnerable to being hamstrung by subjectivity and ideology simply because they are driven by a desire for minimizing subjective bias and thus learning the truth about objective reality. Science and technology studies, of the sort generally found in humanities and social science programs, don’t offer anything much beyond criticizing science and caricaturing the scientific method as tools of oppression; as such, they are worse than useless. Scientists themselves have been by far the best and most effective critics of science because they actually understand how science works; I am recommending we in the humanities listen carefully to them.
I’d cite here the work of people like Andy Clark, David McFarland, Alan Liu and Franco Moretti as good examples of the type of cross-disciplinary mixing I’d really like to see more of.
The following scenario—in part inspired by the lapse of thinking and moral panic triggered by the now infamous ‘Google Memo’—will hopefully illustrate how even a very basic science literacy—in this case statistics—would be of benefit to those in the humanities and certain strands of the social sciences.
Let’s imagine that person A thinks that person B is less suited to a job that requires mathematical reasoning than person C, on the grounds that person B is female, and person C male. Let’s also imagine that person A cites statistical data to back up their decision, data which shows that while women are typically better at calculation, men are typically better than women at mathematical reasoning.
(The graphs used here are not the actual graphs from the studies that show these trends; I’ve taken them from here and here, and I’m using them simply to illustrate several points about interpreting statistics correctly.)
However, person A’s simply pointing to such a graph would not be sufficient evidence for saying that person B is less suited to the job in question than person C; this is where a more thorough knowledge of what the data is actually saying is required. The graph does indeed show that men (purple curve) tend to be better than women (green curve) at mathematical reasoning; however, it is also easy to see that there is a huge overlap between men and women. In other words, the majority of men and women are pretty evenly matched in terms of mathematical reasoning, and this pattern holds for most measurements of the abilities of the sexes. Indeed, it is this overlap that shows how person A’s snap judgement about person B’s ability based on the data is actually more likely to be wrong; person A has committed the ecological fallacy, an error illustrated neatly by the image at the top of this post. So, a more thorough knowledge of the statistical data can actually help a biased individual overcome that bias. This also means that the only way of truly determining which candidate has more merit is to treat both as individuals, and test each on an individual scale.
At the same time, the graph does show that there is a difference between men and women, which should not be ignored: looking more closely at the tails of the curves shows us that men are more likely to be found at the highest levels of ability as well as at the lowest levels of ability, a pattern which tends to show up in most measurements of the sexes. This is because, as Steven Pinker shows, men tend to have more variability than women, a situation he bluntly sums up as ‘more prodigies, more idiots.’ This, of course, is absolutely not to say that women can’t be idiots or prodigies: indeed, the tails of the green curve show very clearly that they can be. It’s just that there will be fewer of them: this is why we might reasonably expect men to be ‘over-represented’ in the categories of ‘prodigy’ and ‘idiot.’
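The overlap-and-tails pattern is easy to see for yourself with a minimal simulation. To be clear, every number below (the means, the spreads, the cut-offs) is invented purely for illustration and is not taken from any of the studies mentioned above; the point is only the shape of the result: two groups with near-identical averages and a huge shared middle, yet very different representation at the extremes.

```python
import random
import statistics

random.seed(42)

# Two hypothetical groups with the SAME average ability but different
# variability -- the 'more prodigies, more idiots' pattern. Illustrative
# numbers only, not real data.
group_a = [random.gauss(100, 15) for _ in range(100_000)]  # more variable
group_b = [random.gauss(100, 12) for _ in range(100_000)]  # less variable

# The averages are essentially indistinguishable...
print(f"means: A={statistics.mean(group_a):.1f}, B={statistics.mean(group_b):.1f}")

# ...and most members of both groups sit in the same broad middle range...
mid_a = sum(85 <= x <= 115 for x in group_a) / len(group_a)
mid_b = sum(85 <= x <= 115 for x in group_b) / len(group_b)
print(f"share in middle range (85-115): A={mid_a:.2f}, B={mid_b:.2f}")

# ...but the more-variable group dominates the extreme tail, even though
# you cannot infer anything about a given individual from that fact.
tail_a = sum(x > 145 for x in group_a)
tail_b = sum(x > 145 for x in group_b)
print(f"count above 145: A={tail_a}, B={tail_b}")
```

Note that the tail counts are tiny fractions of either group: judging any individual from the tail behaviour of their group is exactly the ecological fallacy discussed above.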
As the above scenario shows, an individual with access to solid data can be wrong about things. At the same time, an individual without any such data is perhaps even more easily led astray by their subjective experience of something, which very often does not correspond to objective reality; this is why subjective experiences are usually not regarded as being sufficient to meet the threshold for reliable knowledge or data about the world. However, the solution to both cases of error lies in education and critical thinking; and by far the best tool we have for rooting out such errors is the scientific method.
Having social and political systems informed and guided by data and individualism is important because such systems respect and enforce an individual’s rights and freedoms; they will have no need to coerce individuals into acting or behaving in particular ways, so long as those individuals obey the law: murder, for example, or financial fraud, cannot be tolerated because they violate the freedom of others. Ignoring data and disrespecting the freedoms of individuals corrupts such systems by cutting them off from that which ensures their fair operation.
I’m guessing (hoping?) that most people reading this would surely not endorse treating someone differently based on their sex or skin-colour (and that cuts all ways). And, hopefully, the above scenario also shows how the truth about ability is not at all incompatible with treating people fairly as individuals rather than judging them as representatives of a group. As I’ve tried to show, treating someone on the basis of their group identity will mean getting all sorts of things wrong most of the time. Treating each individual as an individual and offering him or her the opportunity to pursue whatever path interests them is best, because the data backs doing just that. However, this study—and others like it—shows that giving individuals the freedom to choose which path to follow actually accentuates biology-based sex differences: ‘evidence suggests gender differences in most aspects of personality—Big Five traits, Dark Triad traits, self-esteem, subjective well-being, depression and values—are conspicuously larger in cultures with more egalitarian gender roles, gender socialization and sociopolitical gender equity.’ The authors of the study go on to state: ‘Social role theory appears inadequate for explaining some of the observed cultural variations in men’s and women’s personalities.’ Nature, it would seem, is not so easily erased; indeed, why would we want to erase it?
Now, someone who believes in equality of outcome—rather than the equality of opportunity I’ve been discussing so far—is more than likely to dispute any data that suggests there are important differences between men and women, on the grounds that almost all such differences come down to social or cultural factors. This is the second major factor in producing the ideological academic filter: blank-slatism. Blank-slatism, although it often goes undeclared, is a very common doctrine in the humanities and social sciences. Since it holds that society and culture are the primary drivers of difference and inequality, blank-slatism tends to separate people on the basis of the group to which they superficially belong and judges them on that basis. This judgement most often takes the form of a hierarchy of ‘privileges,’ which is then used to explain (and explain away) the differences identified by scientific and statistical data. Effectively, blank-slatism takes each individual to be representative of the group to which they superficially belong; so, instead of seeing overlaps in data, blank-slatism reduces the individual to the group, and what results is a world with no overlaps, just rigid ‘identities.’ (This statistically illiterate process is precisely the error in thinking that drives identity politics.) Further, certain group members’ subjective experiences—usually those deemed the ‘least privileged’—are taken to be infallible, an assumption shored up by phenomenology; this formulation of subjective experience differs from the fallible individuality discussed above insofar as it can never be ‘wrong’ or ‘mistaken’ or ‘biased,’ and is inseparable from group identity. Typically (and unfortunately), any disagreement quickly gets labelled ‘racist’ or ‘sexist,’ leaving little or no ground for actual discussion to take place.
Blank-slatism must try to account for discrepancies and differences in outcome. However, as Jonathan Haidt has pointed out, the problem is that blank-slatism tends to mistake correlation for causation, another basic error in thinking that can easily be fixed by the humanities and strands of the social sciences becoming more scientifically literate. Correlation of x and y does not mean that x causes y: it may be that x causes y, but it may also be that y causes x, or that z causes both x and y. Blank-slatism seizes too soon on the correlation, seeing it solely in terms of causality–e.g., ‘differences in outcome must be caused by systemic racism or sexism, etc.’–and simply stops there. In other words, having found its preferred cause, blank-slatism stops, satisfied, its thesis confirmed; meanwhile, real research and investigation continues to dig for the actual causes of the outcomes. And it is only after further research and investigation has been conducted that scientific data emerges which suggests that cross-cultural and enduring biological factors may actually be affecting outcomes.
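The hidden-third-variable possibility can also be made concrete with a small simulation. Everything here is invented for illustration: a hypothetical factor z drives both x and y, so the two variables end up strongly correlated even though neither causes the other, and intervening on x would do nothing at all to y.

```python
import random

random.seed(0)

# Hypothetical illustration: a hidden common cause z drives both x and y.
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 0.5) for zi in z]  # x depends on z, not on y
y = [zi + random.gauss(0, 0.5) for zi in z]  # y depends on z, not on x

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    var_a = sum((ai - ma) ** 2 for ai in a)
    var_b = sum((bi - mb) ** 2 for bi in b)
    return cov / (var_a * var_b) ** 0.5

# x and y correlate strongly despite having no causal link to each other.
print(f"corr(x, y) = {pearson(x, y):.2f}")
```

The correlation comes out high purely because of the shared driver z; stopping at ‘x correlates with y, therefore x causes y’ is exactly the premature inference described above.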
In blank-slatism, all differences, since they are taken to be the result of social and cultural factors, are theoretically erasable. However, on this view, things like data and personal freedom of choice, since they stand in the way of the erasure, become ‘problematic’; as such, they require constant ‘interrogation,’ ‘correction’ and ‘intervention.’ So, if the reason for oppression is deemed to be x, then social and cultural factors must be actively policed for symptoms of the wrongthink that propagates x; and individuals expressing wrongthink must be relentlessly pursued and shamed. This shows how integral ‘symptomatic reading’ is to reinforcing the ideological academic filter.
It should not take much to see how putting blank-slatism into practice is necessarily authoritarian, not libertarian. In fact, this is one of the major reasons why I no longer think that the left/right axis is useful for understanding the politics of the academic ideological bubble: it is perhaps better understood using the authoritarian/libertarian axis.
Yes, I know that it’s a hoary old chestnut that has been cited here, there and everywhere by all sorts of people, some of whom I might not want to be counted among; nonetheless, de Tocqueville’s thinking offers a useful lens for viewing what speech-and-offence codes on university campuses are doing to both their students and their freedom.
Here’s the snippet I’ve been pondering the most:
Above this race of men stands an immense and tutelary power, which takes upon itself alone to secure their gratifications and to watch over their fate. That power is absolute, minute, regular, provident, and mild. It would be like the authority of a parent if, like that authority, its object was to prepare men for manhood; but it seeks, on the contrary, to keep them in perpetual childhood: it is well content that the people should rejoice, provided they think of nothing but rejoicing. For their happiness such a government willingly labors, but it chooses to be the sole agent and the only arbiter of that happiness; it provides for their security, foresees and supplies their necessities, facilitates their pleasures, manages their principal concerns, directs their industry, regulates the descent of property, and subdivides their inheritances: what remains, but to spare them all the care of thinking and all the trouble of living?
Thus it every day renders the exercise of the free agency of man less useful and less frequent; it circumscribes the will within a narrower range and gradually robs a man of all the uses of himself. The principle of equality has prepared men for these things; it has predisposed men to endure them and often to look on them as benefits.
After having thus successively taken each member of the community in its powerful grasp and fashioned him at will, the supreme power then extends its arm over the whole community. It covers the surface of society with a network of small complicated rules, minute and uniform, through which the most original minds and the most energetic characters cannot penetrate, to rise above the crowd. The will of man is not shattered, but softened, bent, and guided; men are seldom forced by it to act, but they are constantly restrained from acting. Such a power does not destroy, but it prevents existence; it does not tyrannize, but it compresses, enervates, extinguishes, and stupefies a people, till each nation is reduced to nothing better than a flock of timid and industrious animals, of which the government is the shepherd.
Beautiful writing, isn’t it? That’s even in translation.
De Tocqueville is, of course, talking about the state, and Rowan University is just a university. Indeed. But what connects them for me is the process whereby individuals are infantilized and their freedom eroded by an institution that has authority over them. Rowan’s policies on ‘microaggressions’ boil down to an institutional interference in how grown people talk to each other; and, as de Tocqueville notes, the process of infantilization is carried out through a benign, mild authority that is composed of ‘a network of small complicated rules, minute and uniform, through which the most original minds and the most energetic characters cannot penetrate.’ Indeed, such an authoritative network of ‘microrules’ seems designed to ensnare, rather than free, individuals through the active micromanagement of small, everyday interactions. De Tocqueville also reminds us that ‘[s]ubjection in minor affairs breaks out every day and is felt by the whole community indiscriminately. It does not drive men to resistance, but it crosses them at every turn, till they are led to surrender the exercise of their own will.’ It’s hard not to see the same grinding mechanism at work in Rowan’s policies, which amount to imposing a one-sided model of language wherein everyday conversations must take place in accordance with rules that both presume the guilt of the one accused of causing offence and prevent the same from refuting the charges. Such a model of communication clearly has nothing to do with dialogue, no matter what is claimed; it rather resembles the controlling one-way flow of information used by the Senders in William S. Burroughs’s Naked Lunch. It is designed to shut people up.
De Tocqueville makes it clear that the old words like ‘tyranny’ and ‘despotism’ don’t quite capture what he wants to isolate for scrutiny: this network does not ‘tyrannize’ per se; nor should we expect it to resemble a ‘despotic’ regime, complete with a dear leader wearing a vaguely militaristic uniform surrounded by laughing-but-terrified minions. On the contrary, this ‘despotism’ doesn’t look much like despotism at all. As de Tocqueville notes earlier in the same chapter, this ‘despotism’ even looks mild at first glance: ‘it would be more extensive and more mild; it would degrade men without tormenting them.’ Therein lies the network’s insidiousness: it reaches into every corner of your existence, making you dependent upon it, unable to function without it, as it quietly degrades you, reducing you to the state of being a helpless, perpetual child, utterly in the grip of its mild power.
Can such a form of authority, which disregards personal freedoms as it ‘compresses, enervates, extinguishes, and stupefies’ people, reducing them ‘to nothing better than a flock of timid and industrious animals,’ actually be considered a good thing? How can such a form of authority do much besides stunt its charges? It certainly can’t do much to equip them with the skills required to stand on their own two feet like reasoning and free individuals.
And what if both the university and the state want the same thing?
While browsing the Campus Reform website, I came across a report that Rowan University had just ‘published a guide on “Interrupting Microaggressions” with strategies for “calling out” those who advocate concepts like “color blindness” and “meritocracy.”’ The digital bumf on the Rowan website makes it clear that the notion of ‘microaggression’ is bound up with identity politics.
One example of a ‘microaggression’ given in the Campus Reform piece is the statement, ‘Everyone can succeed in this country, if they work hard enough.’ Now, one could certainly question that statement by pointing out that not everyone who works hard actually succeeds, no matter what their background. People fail at stuff all the time, and if there were no ‘failures’ then there would be no ‘successes.’ We could follow up by asking more interesting questions like, ‘How does the speaker measure “success”?’ or ‘Has success to do with money, personal happiness, health, etc.?’ At the same time, it’s obvious that there is truth in the statement: working hard is a necessary ingredient of any type of success. For example, doing well in a class at university requires keeping up with the assigned reading, engaging with the material, getting assignments done on time, revising for exams, and so on; all of those activities require dedicated effort, careful concentration and efficient time management.
So, it would appear that there is both truth and shortcomings in the assertion; the point, however, is that there can be a rational conversation about it without the need for anyone to get offended.
But, one might wonder, how is the supposed ‘microaggression’ actually aggressive? How, exactly, does it inflict damage or unpleasantness? Part of the answer to this question seems to depend upon the hearer: in another document also linked by the Campus Reform piece, we find the following: ‘If you are “called out” on your behavior… focus on the impact of your words or actions rather than your intent.’ So, it would seem that a ‘microaggression’ is aggressive if the hearer takes it that way, no matter how far from the truth that person may be. (I suspect that the sloppy notion that language is ‘violent’ is lurking here also; more about that in a future post.)
I find this formulation troubling for at least two reasons: first, it shifts discourse to the realm of emotion and emotional responses, which is fundamentally irrational; second, it rests on an underlying automatic assumption of guilt–not unlike ‘original sin’–on the part of the speaker. In other words, it doesn’t matter what idea the speaker was actually trying to communicate or express; what is important is how the words were taken by the hearer. A speaker is thus automatically assumed to be guilty—regardless of what was intended—once offence is taken; it also seems that a speaker cannot defend him- or herself against the charge of aggression.
So, in a nutshell, a speaker is automatically guilty of being aggressive wherever and whenever a hearer takes exception to their words, no matter what was intended; truth apparently does not matter. These policies seem to me dangerous precisely because they throw away the very useful model of language as a communicative tool, which one party uses to try to communicate an idea or thought to another party, and attempt to replace it with a one-sided, non-communicative model of language, where, regardless of what the speaker may have intended, the hearer alone gets to decide what was originally intended; to cap this off, in a butchering of logic, this model also makes the speaker responsible for that hearer’s (mis)interpretation. This paradoxical model of language robs the speaker of agency, judges him or her on how someone took their words, condemns him or her as guilty of aggression and leaves him or her with no means of defending themselves against the charge. Such policies, which are more reminiscent of the ‘re-education’ characteristic of show-trials and struggle sessions than proper education, herald the end of communicative language altogether on university campuses: if you are no longer sure whether or not what you say will trigger someone else no matter what you may have meant, are you more likely to keep trying to communicate or simply shut up? In a capricious and unpredictable environment, where even just the perception of offence can get you into hot water, it makes more sense to stay silent. Is this what we want to see from universities? I know I certainly don’t.
What troubles me most about university policies such as these is that we are actually witnessing the intrusion of institutional authority—here, the university—into individuals’ daily interactions, where that authority not only actively takes sides but seeks to prescribe how individuals should think and speak to one another. In other words, this situation seems to be about controlling speech through institutional interventions into individuals’ freedom of speech and the free exchange of ideas; and this is being done in universities by the universities themselves.
Reading about Rowan put me in mind of Roland Barthes’ (in)famous notion of ‘the death of the author.’ Barthes’ essay has commonly been taken to mean that the reader’s interpretation of a text is more important than what the author meant or intended; and, on the face of it, it’s tempting to say that Barthes’ notion is now being pressed into service by universities for the purpose of policing speech and thought in the name of identity politics.
But ironies abound: the Rowan University policies are really more of an active distortion of what Barthes wrote (note, also, the delicious irony that talking about what Barthes intended in an essay about the death of authorial intention is unavoidable—there are limits to how freely one may interpret a text). For Barthes, an author’s biographical or personal attributes—his or her political views, historical context, religion, ethnicity, psychology and so on—were not to be taken as binding when interpreting a text. This position is incompatible with an identity politics view of the world, where the genetic fallacy is never not in play. Indeed, Barthes’ own words would be enough to convict him of being a ‘microaggressor’ at Rowan, seeing that—according to the Campus Reform article—‘When I look at you, I don’t see color’ is also considered a ‘microaggression.’ Meanwhile, the prominent anti-authoritarian streak in Barthes’ essay is diametrically opposed to the Rowan policies, which seem to be about creating and enshrining the very type of tyrannical authority over meaning that Barthes was trying to dislodge: at Rowan, readers or hearers get to assign a single, authoritative interpretation to every utterance—their own.
“A wide-ranging, informative and engagingly written book on the emergent field of posthuman studies”
—Stefan Herbrechter, Research Fellow, Coventry University, UK
My latest publication, a trade academic book from Bloomsbury Academic, came out at the start of March 2017. It’s called Posthumanism: A Guide for the Perplexed and it’s available from Amazon, libraries and select bookshops worldwide. It explores how humans and humanism are changing through interactions with technology, science and medicine, and considers how advances in those fields challenge and redraw the usual distinctions made between humans, animals and machines.
I must admit that this book had something of a difficult birth: the half-completed first draft of it was stolen, along with my computers and back-up drives, in a break-in at our house in Vancouver in October 2013. I had started work on the book again, not a little disheartened, when, one night in February 2014, the writing process encountered another setback: our house was flooded with sewage during a power outage that caused the sump-pumps to stop working. I also had the great good fortune to discover the sewage back-up by falling into it in the dark (not a night I’ll soon forget, let me tell you). We had to leave our house so the restoration could be completed, and so began a long odyssey of moving from temporary accommodation to temporary accommodation, while the world’s most incompetent crew of ‘restorers’ (thanks for nothing, Servicemaster) spent the next six months doing a job we were assured would take only six weeks.
During this disruptive, peripatetic existence, I worked on the book whenever I could. But the book had begun to change from the half-completed stolen draft. I found that, as I researched, I was becoming more and more skeptical of the ‘science and technology studies’ approach to technology and science favoured in the humanities; that same approach has also informed a lot of what has been written about posthumanism. It seemed to me that such an approach was severely limited in what it could say about science and technology because it could not properly get to grips with the science and technology it purported to criticize or analyze. In practice, such an approach is confined to making shallow comments about ‘representations’ of technology and science, which wind up as ill-informed (and often outlandish) claims about technoscience that cannot help but put off those who actually know how science and technology work. I resolved to try to avoid such shortcomings by giving my reader a more technically informed overview of the technoscientific advances—such as gene editing and artificial intelligence—discussed in the book. My approach also meant that I had to try to speak across the deep divisions that separate the sciences from the humanities: not an easy task.
Then, in October 2015, just as I was finishing up the complete draft for submission to the publisher, a now infamous fight over Halloween costumes erupted at Yale. This fight, in its turn, set off a whole spate of outlandish ideological demands (and frankly outrageous claims) by students (and several of their ideologically driven professors) at universities in America and elsewhere. As a result, I became more and more uneasy about (and mistrustful of) the state of the humanities, especially about what had passed for ‘critical thinking’ in the humanities for decades. I decided that the book should therefore reflect my growing concerns about that form of ‘critical thinking’ and the harm it does to the students subjected to it. I finally submitted the manuscript to the publisher in June 2016.
Oddly enough, looking back, I now think that if it hadn’t been for the break-in and the subsequent delays, Posthumanism: A Guide for the Perplexed would have been a boring, by-the-numbers academic book.
I think the book may also hold the dubious distinction of being the first academic book to use the word ‘kek.’
Note: The following post is taken from the Teaching page on my website. I’ve decided to post it as a blogpost because it introduces and pulls together some of the issues that I plan on discussing in upcoming blogposts.
As I’ve mentioned elsewhere on this site, I am a firm believer in maintaining a strong interrelationship between teaching, writing and research.
However, teaching in universities is all too often viewed by faculty as an inconvenience or a hassle. It’s surprising just how much complaining tenured faculty members do about having to teach. (Now, obviously, I don’t mean that all tenured faculty find teaching a chore, but faculty complaints about having to teach are certainly not uncommon.) In fact, a good way of measuring this is to look at how many tenured and tenure-track faculty have voted to give themselves reduced teaching loads (for example, going from a 3/3 or 2/3 per-semester load to a 2/2, 2/1 or 1/1 load), on top of seeking out teaching releases, which allow them to avoid even more class time. The recent attempts by Vassar College faculty to give themselves 2/2 loads offer useful insight into this issue, and useful context on practices and attitudes with respect to faculty teaching loads can be found here. The latter link also delves into the issue of unproductive tenured faculty, a topic I’ll be exploring in a future blog post. Meanwhile, this is happening to their non-tenure-track colleagues.
This, to me, is a terrible shame. It is also very shortsighted: the plain and simple fact is that, without students and the tuition they pay, there would be no faculty, no departments, no universities. Perhaps there’s an (albeit weak) case to be made for this bad attitude among faculty in the sciences, where important, impactful work often takes place. But such an attitude to teaching is unbecoming in the humanities, where ideological posturing, outmoded sources and tedious antiquarianism—often spatchcocked with poorly grasped critical theory—masquerade as knowledge, which is then doled out to students with little or no thought given to how such ‘material’ will allow them to engage, navigate and understand the modern world they inhabit.
However, this bad attitude is downright bewildering (and deeply ironic) when expressed by faculty in literary and cultural studies departments—or, at least, by those who still do a modicum of research. Such individuals appear to view teaching actual real-world skills, like proficiency with digital tools, research methodologies, composition and critical thinking, as somehow beneath them; it’s almost as if they can’t see how the university sees the humanities as a whole. And so all that ‘unimportant’ stuff gets farmed out to underpaid contingent faculty, who are, in turn, looked down upon; and those lucky—yes, lucky, because ‘talent’ plays no role in getting an academic job—enough to have tenure can then teach the ‘super important’ stuff like ‘images of grief from the C16th.’ Surely, however, such high self-regard is unwarranted, since data show that some 82% of academic articles published in the humanities are never cited by their peers, never mind read by non-academics. But, one might argue, doesn’t the value of teaching and reading literature lie in expanding the social abilities and humanity of others? Nope.
So, while I’m pleased to note that all the articles I’ve published fall into the 18% of humanities studies that actually are cited, data like the above made me really start to re-evaluate what teaching in literature and the humanities is all about. Throughout my career, I’ve tried hard to distance myself from the aforementioned attitudes and blinkered viewpoints, which are certainly not uncommon. To that end, I’ve tried to connect my teaching to my research: all of my research publications, with the exception of my first book, which was my PhD dissertation, have arisen directly out of the various courses I’ve taught over the years at UBC.
Of late, however, even this practice no longer seems to me to be enough: indeed, I often worry that what passes for ‘critical thinking’ in literature and humanities departments amounts to an active de-skilling of students. In particular, I have begun to wonder if pedagogical theorizing by the likes of the Marxist educational theorist Paulo Freire, as well as notions of a ‘hidden curriculum’ (the belief that lessons teach not just specified content but also the ‘hidden’ transmission of norms, values and beliefs), is largely to blame, since both often produce what looks more like cognitive bias and conspiracy theory than actual critical thinking. Such thinking transforms classrooms into ideological battlegrounds and students into activists, with little thought given to the material to be learnt. So we have a not-so-merry-go-round, where ideologically driven suspicions about lesson content create condescending, half-baked theories about the unconscious—Freud preserve us!—transmission of politics in classrooms, which in turn create political counter-measures that import yet more politics into the classroom…
So, where might all this lead? It would appear that such an ideologically driven approach to education is already bearing terrible fruit: witness, for example, the ‘demands’ made by students at Evergreen State College in the US, which seem to have little to do with important or essential content and a lot to do with enforcing identity politics. Now, Evergreen is an admittedly extreme example, but it is far from being the only recent example of what I would call ‘unreason’—literally, the privileging of ‘feels over reals’—in educational and learning settings. And, as this type of educational approach has become increasingly widespread, more and more humanities students have left or are leaving colleges and universities buoyed up with the kind of misplaced confidence that only ideology can give, believing they have a simple solution to all that ails the world. The reality, however, is that many students leave their humanities education with very few actual real-world skills and find that their degrees do not open doors to well-paying jobs. Now, let me be absolutely clear here: this situation is not the fault of the students; it is the fault of ideologically driven and out-of-touch professors who do each and every one of their students a massive disservice by turning them into textbook examples of the Dunning-Kruger effect (see image above). The humanities is well on the way to irrelevancy, piloted into the ground by people who seem to think there is nothing wrong. I understand that this is a bitter pill for humanities students to swallow, and it gives me no pleasure whatsoever to say it.
As it stands, these problems seem mostly confined to the humanities; however, the sciences should not get too smug. Lysenkoism showed that science is not immune to ideological infestation, and there are worrying signs that ideology is once again making its way into the sciences, especially biology. Nevertheless, I still think the humanities can learn a great deal from how things are done in the sciences: think, for example, of the intellectual honesty and integrity fostered by the scientific method, a wonderful tool for rooting out cognitive and ideological biases; the sciences are also constantly engaging with and trying to understand reality, a thing the humanities seems content to deny exists. Perhaps this denial explains the increasing out-of-touchness of the humanities: if you don’t believe reality exists except as a projection of your own phenomenological experience and/or as a discursive formation (something created by language), then why bother trying to understand it? Strangely, a lack of belief in the reality of reality doesn’t appear to bother such deniers when they turn on their computers, need a medical diagnosis or want to fly to Paris.
My growing concerns about the intellectual dishonesty in the humanities, coupled with my increasing appreciation for science and its investigative methods, are the main drivers of the shift in my more recent research interests, pushing them towards an engagement with information, cybernetics and technoscientific advances. In particular, I am interested in how such realities are changing humans into ‘posthumans’ and how those changes signal the need for a transformation of the humanities into the ‘posthumanities.’ In short, there must be a reconfiguration of the humanities that both understands and incorporates the science and technology that are changing C21st humans. As I argue in Posthumanism: A Guide for the Perplexed, if the humanities imagines that it does not need to understand advances in science and technology in their technical aspects and mathematical specificity, treating them instead as opportunities for baseless speculation, reckless scaremongering or the grinding of tedious ideological axes, then why should anything the humanities has to say about such topics ever be taken seriously by those who do truly understand them? After all, what’s to be learned by listening to willful ignorance? And if, as I suspect, the gulf in the understanding of technoscientific topics that separates the sciences from the humanities is destined to widen ever further, then how can the humanities ever learn anything from the sciences? Can such a humanities claim to teach anything truly useful about the realities of the modern world to its students? Is this not actively to de-skill students?
The concerns and questions I’ve raised here form the foundations of one of my current research projects, a book on reason and unreason, as well as one of my upcoming classes at UBC.