I’m not talking about the mountain of grading waiting for me.
The bad news: I fear Jenny By-The-Front-Door may have died. It’s cold and windy out there! No sign of a body, either — if she’d fallen out, I’m pretty sure the corpse would have blown away. I poked into her nest fairly thoroughly, and …
There’s an egg sac deep in the middle! With the multiple hatched-out egg sacs around her, she clearly had a fecund life. I may have to bring her nest into the lab and examine it more closely.
Speaking of the lab, I got in this morning and found that one of the egg sacs there had hatched. Baby spiders galore! I quickly did a partial separation into groups and gave them a lot of flies to gnaw on, so they wouldn’t gnaw on each other. I’ll come in this weekend and move as many as I can into individual containers. There’ll be some attrition — about a half dozen escaped and ballooned off into the corners of the lab, or the atrium, or the crawl space, or other people’s labs, where I hope they prosper.
Jerry Fisher ought to change the lyrics a little.
And when I die, and when I’m gone
five hundred children born
In this world to carry on, to carry on
Dang, that doesn’t fit. Well, Jerry Fisher is still alive, I’ll trust him to do a better job on the lyrics.
Emerson Thomas “Tom” McMullen has opinions about evolution. He is a Historian of Science, Technology, and Medicine, though, and his opinions are hosted on the official website of Georgia Southern University, so maybe we should take a look at them. He has a lot of them, and they all seem equally well-founded, so I’ll just peek in at one, his claim that common descent is not scientific. Here’s his short summary of his thesis:
While we see natural selection in nature, we do not observe descent from a common ancestor happening today. That fact, taken by itself, makes the idea unscientific. Nevertheless, the idea of descent from a common ancestor does make testable predictions. These are: 1. Over time, life changes significantly. 2. The change is from simple to complex. 3. The change is from one ancestor to diverse offspring. 4. The change involves many transitional forms/intermediates.
Right away, I’m stopped cold by the claim that “we do not observe descent from a common ancestor happening today.” What a peculiar thing for a historian to say! Common descent is a historical process that occurred over billions of years, so of course it isn’t happening “today”. Similarly, the rise and fall of the Roman empire went on for over a thousand years; does the fact that we don’t see Romulus and Remus building a city, Augustus inheriting an empire, and the Byzantines falling to the Ottoman Turks today mean that none of it happened? This makes no sense. Just as the rest of his arguments make no sense.
So here we go.
1. Over time, life changes significantly: he claims this is false because…
Stomatolites [sic] were made by algae that were thought to be extinct. Then in the 1950s, a scientist found them alive at Shark Bay, Australia, where a high saline environment deters predators. These algae have remained unchanged over eons. They did not evolve. How about that? The oldest living beings we know about never changed!
He has a philosophy degree, but he doesn’t seem to understand that you can’t disprove a general, diverse phenomenon with a single example. What about all the organisms that did change? There weren’t any monkeys or spiders or dinosaurs in the Precambrian. There aren’t any dinosaurs in the Cenozoic. You don’t get to ignore all the significant changes to Earth’s biota to claim that one example means none of it occurred! Further, I’d add that superficial similarities don’t mean that modern stromatolites are genetically identical to ancient ones.
His next argument is to say that evolution claims
2. The change is from simple to complex. This isn’t true! Evolution makes no such claim, so it is a false criticism.
All the Cambrian fossils abruptly appeared, complex and fully adapted to their environment. This is the anomalocaris, which can grow up the six feet long. One of the animals it eats are trilobites. The authors of The Fossils of the Burgess Shale (Briggs, et al.) remind us that “the appearance of diverse shelly fossils near the base of the Cambrian remains abrupt and not simply an artifact of inadequate preservation.” Obviously, this complexity is not predicted by descent from a common ancestor, which says life began simple and became more complex.
Except…no. What he slides right over is that the Cambrian was about a half billion years ago, with 3½ billion years of evolution before it. Living organisms were complex before multicellularity and hard parts evolved, and this was a transition in response to a changing environment, with phenomena such as bioturbation and increasing atmospheric oxygen. Furthermore, the Cambrian wasn’t an instantaneous event — we’re talking about ten million years of change, at least.
You could argue that the evolution of the first cell was an example of increasing complexity, and I’d agree. However, that complexity arose rapidly, and what’s been happening over the last few billion years is an exercise in permutations.
Next is an odd one,
3. The change is from one ancestor to diverse offspring. He doesn’t think the fossil record illustrates a long history of diversity.
I have seen biologists write that evolution explains diversity, but the evidence from the fossil record is just the opposite. As mentioned earlier, during the “Cambrian explosion of life” many different animals, like trilobites, abruptly appeared with no predecessors. The late Stephen J. Gould wrote a popular book, Wonderful Life, on the diversity of Cambrian fossils in the Burgess Shale. Gould points out that these Cambrian fossils include “a range of disparity in anatomical design never again equaled, and not matched today by all the creatures in the world’s oceans.”
That’s a new one. So the fact that biologists have described spectacular examples of biological diversity, and that far more diverse forms have existed than are now extant, is evidence that evolution doesn’t produce diversity. He’s putting biologists in an untenable position: every example of diverse, new forms becomes, in his mind, an illustration that diversity never existed.
So now let’s lapse into foolish familiarity with
4. The change involves many transitional forms/intermediates. Oh, no, the “no transitional forms” argument!
In his [Darwin’s] Origin he asks: “Why then is not every geological formation and every strata full of intermediate links?”(p.280) He answers that the geological record is incomplete. But that was nearly 150 years ago. We have found billions of fossils all over the world since then. The prediction of innumerable transitional forms falls flat on its face, and, from a philosophy of science standpoint, the idea of descent from a common ancestor is falsified.
Finding lots of fossils does not refute the idea that the fossil record is incomplete, and Darwin’s original explanation is still entirely correct. For instance, Stegosaurus species span something on the order of 10 million years in the late Jurassic, and there had to have been billions of them living over that time. We have about 80 fossils. If we doubled that number, would we have a complete fossil record of the genus?
Like so many of Dr McMullen’s arguments, these fall apart into a rubble of innumeracy, illogic, and ignorance. It’s curious that he became an emeritus professor at Georgia Southern, that they let him teach courses on his version of “science”, and that he’s got all this bogus crap on a university website.
This is the price of academic freedom, I guess. I don’t understand how he got past a hiring committee, though — how did a history department end up employing someone who doesn’t understand history? There’s a story there, but since it isn’t happening today we obviously are unable to examine it, and like all of history, only happened in the fleeting moments when we open our morning newspaper.
Oops. I have to apologize to Floridians. I just accused them of being a collection of stupid, purblind fools who are following assholes into a watery oblivion. It’s not that that isn’t true, but that it’s also true of every state in the Union.
The truth about Trump has become a little bit too obvious. It’s always been obvious, but now it’s accompanied by a marching band with banners flying and megaphones howling it out.
All that, and Congress hasn’t dispatched a police detail to arrest him for gross incompetence and greed, to get him out of office before he does even more harm. And people still go to his rallies and cheer for him.
Fuck, we are so fucking fucked.
Finally, Floridians are talking about climate change. It’s a strange situation where the American state with the most obvious risk from rising sea levels has been in total denial. Right now people are noticing that “King tides and sunny-day flooding are disrupting postal delivery in many communities, eroding utility boxes, requiring law enforcement to manage traffic corridors where flooding has closed roads”, and yet, they keep electing Republicans who turn a blind eye to everything.
“There hasn’t been a lot of conversation about this. I understand that, and I understand why,” he continued, leaving unsaid that the words “climate change” were banned from the lexicon for much of the eight-year tenure of former Gov. Rick Scott, and the state’s response to it was not considered a priority.
But Lee, who served in the Senate for the last six years of Scott’s term, said he believes there has been “a paradigm shift” with Republican Gov. Ron DeSantis — who followed the lead of local governments in Florida and appointed a “chief resilience officer” to start talking about the effects of global warming on the state.
The new landscape comes with new political realities, Lee said. “There’s a younger generation of conservatives in this state that aren’t as much in denial.”
“The world is changing and so is the leadership in state government,” he said. But he stopped short of saying the Republican governor and the GOP leadership of the House and Senate, as well as the development, utility and insurance industries that finance them, will support the “paradigm shift.”
It’s astonishing that the governor essentially banned a scientific conclusion from any discussion, especially when the fact of climate change is going to hit the state so hard. Pardon me if I’m not impressed with a new set of Republicans who “aren’t as much in denial”; they’re still refusing to address the problem. Read the whole article; they’re patting themselves on the back for thinking they might just get around to talking about it and maybe passing some legislation (no promises, though!), yet there are all these conservative blowhards making excuses for not doing anything by blaming China and India.
All I can say is…
It’s going to be hard to muster any sympathy for Florida when the next hurricane hits or a major city has to be abandoned when they keep electing these idiots.
Can we give it back to the Seminoles before it gets worse? You know, to some people who might take the responsibilities of their home seriously.
This seems about right for America: an Iowa family’s basement fills with blood. Real, genuine blood. The stuff had backed up from a slaughterhouse next door.*
Kaitlin Dahl said the company uses a catch drain to capture most of the blood during the butchering process. That blood is emptied into an offal barrel and taken away by a rendering truck.
“When you split a bone in half, there will be some excess blood that will drip on that floor,” she said. “That was allowed by the county to go down the back drain.”
Did I say good news? I meant slightly less horrible news than what’s going on in the regular news.
*Remember the real estate mantra: “Location, location, location.”
This article is a preview from the Autumn 2019 edition of New Humanist
A History of the Bible: The Book and its Faiths (Allen Lane) by John Barton
Even a reader familiar with the fraught and bloodstained history of the Bible will come away from John Barton’s sizeable new study having learned a great deal. Halfway through, for example, he tells us that the word “Bible” is in origin a plural word (ta biblia in Greek means “the books”) but that by the end of the third century CE the books were being treated as a unified whole. Packed into this are many of the ingredients – linguistics, history, the challenges of translation – that make A History of the Bible so readable.
The ubiquitous and enduring influence of the Bible needs no elaboration. As Barton points out, Oxford University Press alone – one publisher in one country – sells 250,000 copies of the King James Version every year. But the very fact that we continue to visualise it as one book belies the complexity and multiplicity of its very nature. The Bible, Barton explains over 489 illuminating pages, is never one thing.
We are taken, first, on a pleasingly logical tour through the chronology and geography of the Old Testament, beginning in Israel and Palestine in the eighth century BCE. The author’s asides about the “priestly style” of the early books are occasionally a delight; Barton devotes half a page to a passage in which the same point about clouds is made nine times, before he says, “We may feel that we had got the point somewhat before the end of this passage.”
He is keen to underline the differences between the Old and New Testaments, sections of the Bible that were at one point so conflicting that Marcion, a second-century teacher, proposed with short-term success that references to the Old be expunged from the New. “The Old Testament is the literature of a nation, written over some centuries, and having a certain official character,” Barton writes. The New Testament’s literature was written in far less time, against a backdrop of persecution, and by a small group “distributed all over the eastern Mediterranean world”.
The New Testament began as ad-hoc literature – the Gospels, for example, were never cited with the formula “as it is written” – and Barton argues that the Church only perceived them as having comparable authority to the Old Testament from the second century CE onwards. The first sustained official rulings on the complete contents of the New Testament came two centuries after this. As elsewhere, Barton is honest about the limits of peering back through history: “When we have established the oldest reading available to us, we should not delude ourselves that we have therefore got back to the words Jesus uttered.”
Taking the reader through the Lutheran Reformation of the 16th century, Barton explains that its seeds had already been sown: there had been excoriating criticism of Church authorities centuries before. He also describes the innovative thinking of Spinoza, who in the 17th century took the bold steps of treating the Bible “like any other book” and appreciating that those who lived in biblical times might have thought differently to us. In a history full of splitting hairs and infuriatingly pointless in-fighting, Spinoza is, for my money, the book’s hero.
Barton is an obviously learned and eloquent writer, and I disagree with him only on a few points: once when he says that atheist critics haven’t pounced on the notion of the official accounts of Jesus’ life as being contradictory (we certainly have); and again when he says of the story of creation given by the author of Genesis 1-2: “There is bound to be some level at which what he wrote is true.” (Why is there?) Elsewhere, Barton, an Anglican, is refreshingly objective in his biblical interpretation; here, it is as though he has become light-headed. Barton does not follow the technique he describes: to read the text at face value, “and then recognise that it is not true”.
But virtually every page of the book is infused with an intelligence that refuses to perceive one translation as perfect or another critic as unbiased. It is a study that does justice to its colossal subject.
I recently joined Jim Al-Khalili on BBC Radio 4’s The Life Scientific to chat about my work. I have known Jim for many years and so it was lovely to talk about my thoughts on magic, lying and luck. The talk was recorded at the Edinburgh Fringe Festival, and managed to get quite a bit of attention online. I hope that you enjoy it!
You can listen to the interview here.
This article is a preview from the Autumn 2019 edition of New Humanist
We are living through something unprecedented. An open-ended social experiment, funded by venture capital, supported by elements of the US military and security state. An industrialised system of writing. We’re writing more than ever before in human history. This is the basis for the world’s most profitable industry: the social industry.
The social industry was supposed to be a source of democratic empowerment. If we could all self-publish, each find our own audience, then the old hierarchies would be challenged. Industrial media giants would no longer enjoy ideological monopolies. State secrecy would be weakened. Networked crowds would easily outflank immobile, centralised forms of power. Even celebrity would be democratised. Anyone with a social media account has a public image and a public-relations strategy.
Yet how quickly cyber-utopianism became cyber-cynicism. At one stage, the tech giants gloried in their association with democratic uprisings in Tunisia and Egypt. Today, social media is far better known for incubating gleeful trolling sociopathy, “fake news”, misogyny and sadistic personal attacks. By 2017, a fifth of Twitter’s total value was reportedly down to Trump and his tweetstorms. From the hyped “Twitter Revolutions” to the first, equally hyped, “Twitter President”, it has been a hell of a comedown. In the ensuing tech-lash, the things we have learned about social media have been increasingly worrying, not just politically, but for users themselves. Study after study links the industry to increased depression, self-harm and suicide.
And why should we expect anything else? The social industry was not invented to free us, but to capture social life and turn it to profit. When Theodor Adorno wrote of the “culture industry”, arguing that culture was being universally commodified and homogenised, it was a vivid exaggeration. The social industry has gone much farther. It actually subjects social life to an invariant script, a written formula designed to foster and control user engagement. And the way that formula models social life reflects, as academic Alice Marwick’s research shows, the class outlook of wealthy men in northern California: competitive, hierarchical and status-hungry. It is no coincidence that life on social media, driven by a struggle for scarce attention, so often devolves into a war of all against all.
The social industry is a chronophage: it eats time. If life is defined by what we attend to then attention economies quantify life as raw material. That material is subject to absolute scarcity. It’s a sociological truism that people feel more pressed for time, more hurried, even where average working hours haven’t changed. This is partly because smartphones mean work penetrates easily into leisure hours. But the average global internet user now spends 135 minutes per day working on the social media platforms, perhaps more time than is spent meeting friends. Over a whole life, this would amount to 50,000 hours of work for the social industry. We might ask the minimal utopian question: what else could we be doing with that time?
The lure is that we can write whatever we like to anyone we like: friends, celebrities, jihadists, porn stars, politicians. We can find friends, build careers, pursue political agendas. But in the new economy of writing, we are no more in control than the Luddites were in control of the machines they smashed. We have access only on condition that we work, by feeding the machine with data about ourselves. The more we engage, the better it knows us, and the more accurately it can goad us into engaging more. Everything we see in our feeds is a somatic barrage of information designed to keep us working.
This is a form of power never seen before, a technopolitical regime that defies our customary ways of explaining the world. It is a capitalist industry, combining production and consumption in a single flow, but it doesn’t offer a single product. At present, user experience is redolent of 24-hour news, the stock market, reality television and Neighbourhood Watch: but that will evolve. It enjoys the surveillance power of states, the ideological power of the mass media, and the commercial power of business empires. It bisects business and politics.
We, as users, are also in a new situation. We are neither consumers nor voters. We are unwaged labourers: digital “serfs”, as computer philosophy writer Jaron Lanier puts it. We don’t think about the work because we are “users”, much as heroin addicts are users. The guilty confessions of former social media bosses, from Sean Parker to Chamath Palihapitiya, confirm that the industry models its practices on addiction. The “like” button is cybercrack, a little hit of social validation. But storms of disapproval are also part of a volatile system of “variable rewards”. Like the behaviour of a mercurial lover, they keep us needy and guessing. And we are laboratory subjects. The platforms of the social industry are designed, like psychologist B. F. Skinner’s “operant conditioning chamber”, to control behaviour with rewards and punishments. As users, we submit to constant real-time surveillance and manipulation, the fruits of which can be put to work for advertising, academic research, electoral campaigns, or cyberwar.
Even as old powers like print media are disrupted by this, a new technopolitical regime is being created. This is exactly what traditional Washington, especially the Clinton and Obama administrations, hoped for: that the internet would modernise American capitalism and further globalise its power. Yet it also made trouble for the old Washington centre. It has created new vectors of cyberwar in which geopolitical opponents, hacktivists and jihadists have landed blows on Washington. It has degraded the already deteriorating information ecologies on which democratic legitimacy depends. Whether or not “fake news” can be blamed for Trump’s success, its proliferation tells us something important about the social industry. The way it uses information doesn’t select for accuracy, but for impact: whatever keeps us attentive and industrious.
The subtle power of this industry is greater than any press empire. Not only are our devices individual; so are our data profiles. Whoever reaches us through social media is like a disembodied voice speaking directly into our ear. What distinguishes the social industry from ideologically driven print media is that it is, in principle, content-agnostic. It is gluttonous for all content, the technology ready to instantly commodify even the dark sides of social life. Livestreamed murders, rapes and suicides will be removed, but not before they have spawned surges of monetisable attention in the news media. Mark Zuckerberg quickly withdrew his affirmation that he had no problem with Holocaust denial on his platform, but he was telling an important truth.
So why, then, have the far right done so well out of the social industry? If Trump is a moneyspinner for Twitter, YouTube is the new talk radio, having “red-pilled” many activists and generated lucrative microcelebrity economies on the far right. Tommy Robinson and Alex Jones, until recently, “influenced” like beauty bloggers and cashed in accordingly. And the more success they enjoyed, the more valuable they were to the platforms.
Perhaps the key to the far right’s success is that the social industry has fused entertainment and politics more efficiently than any previous system. The technologies facilitate this cultural shift. The academic Zeynep Tufekci found that the far right benefits from the algorithms designed to keep people watching. The “up next” system guides users toward more “extreme content” – male rage, conspiracy theory, Holocaust revisionism. Here, “extreme content” is akin to “extreme sport”: an illicit thrill, delivered automatically and intimately, by a machine that knows us better than we know ourselves. It is not that YouTube prefers far-right content, any more than Google is partial to the “false flag” theories that its algorithms help promote. It is simply that infotainment is an efficient, low-cost means of orchestrating attention.
The social industry also thrives on the sort of volatile culture war that is congenial to reaction. Much is said about “identity politics” on social media, but the industry has its own internal identity politics. Everyone who engages has an identity, a self-image, on which they constantly labour. That involves getting caught up in constant surges of aggregated sentiment and attention. There is always a new enemy to berate, a new outrage to gyrate over. Through these outrages, communities are formed, usually in antagonism to others. Cultural differences become ossified, more like borders than weather fronts. In these storms, through which major cultural and social changes are filtered, the modern “alt-right” has congealed. It was #gamergate that catalysed a toxic stew of chauvinism, fear and resentment into the Men’s Rights Activists (MRA) movement. Just as it was #birthergate that catapulted Trump to the top.
While accelerating a crisis of knowledge, the industry makes cultural openings for a new form of fascist infotainment. Far-right clickbait insinuates itself into the gaps created by the economic crisis of journalism. Charismatic racist demagogues thrive on the social industry, while elected politicians are desublimated, showing themselves to be as petty and belligerent as everyone else. Conspiracist infotainment thrives on paranoia, which, amid a general breakdown in trust, is radicalised in the social industry: you never know if your interlocutor is a troll or a Russian sockpuppet. What we call conspiracy theory is often an attempt by citizens with few resources to work out, through ad hoc investigatory committees, what is real.
Spitting from the social industry’s culture wars and conspiracist panics, like sparks from a furnace, are acts of senseless violence, from the MRA killer Alek Minassian in Toronto to the Islamophobe Darren Osborne in London. We have seen conspiracist vigilantes, like the heavily armed Edgar Maddison Welch, investigate “fake news” panics at gunpoint. We have seen trolling fascist “anons” from gamer message boards, like the killers at the Christchurch Mosque or the Poway Synagogue, train their memed, ironised cruelty on flesh and blood.
It may appear somewhat moralising, even scapegoating, to link the social industry to such violence. The “lone wolf” is not a recent invention. Yet we don’t in other cases expect internet tempests to remain online. The swarm logic of the social industry, its surges of attention and its method of aggregating crowds based on a momentary sentiment, have helped build political street movements. The gilets jaunes protest, however politically complex, is nothing if not the meatspace manifestation of an online shitstorm. Swarm logic has also been orchestrated to deliver political shocks, as with the Trump and Bolsonaro election campaigns, the Five Star Movement’s breakthrough in Italy, or the Brexit Party’s raid on Britain’s stalemated parliamentary system. It is therefore only natural, even mundane, to expect that a certain share of violently misogynistic MRA trolls and Crusader-fetishising cyber-Islamophobes incubated in the online culture wars will become violent. These are individual murderers. It can only be a matter of time before the shitstorm itself takes up arms.
The vogueish term for internet-inspired violence is “stochastic terror”. This treats online propaganda as being functionally equivalent to advertising. Like advertising, online exhortations against women or Muslims have a conditioning effect that, while individually unpredictable, can be predicted over a whole population. In any given population of Tommy Robinson fans, there is a probability of a certain number of Darren Osbornes. This sounds neat and tidy, but it doesn’t actually explain anything. Why, for example, does racist, misogynist or fascist propaganda have an audience in the first place? Why does some content produce violence, even without overtly calling for it, while other content doesn’t? Above all, the term doesn’t explain anything about the role of the medium.
The social industry is a cultural centrifuge, an accelerator, and an engine of new forms of politicisation. It plays a role in politicising the mental distress of those (typically white and male) users who find in the “red pill” an unbeatable antidepressant. It algorithmically connects the propaganda to its audiences, data point to data point, as a matter of course. The question is whether it also helps produce the kind of user who would be available for political violence.
A surprising answer to this puzzle is furnished by J. M. Berger, a security intellectual and expert on the Islamic State. In an early analysis of ISIS’s social media strategies, Berger described how the group turned Twitter into a “carrier wave for millenarian contagion”. As an apocalyptic movement, ISIS needed to cultivate among its followers the animating apprehension of end times. It was not ISIS’s style to emphasise eschatological complexities, however. Rather, it sought to summon a community into existence, which would vividly experience this “apocalyptic time”. Apocalyptic time is characterised by a sense of temporal acceleration, social contagion and subjective immersion. Freak events appear to come, relentlessly and miraculously, from “nowhere”. “Everyone” seems to be gripped in the fever. Ordinary, secular experience drops out of sight. In apocalyptic time, the normal rules don’t apply. Extraordinary, violent action becomes thinkable.
This would have been the experience of ISIS followers on Twitter. During every jihadist advance they were blitzed with accelerated postings, news of spectacular, bold breakthroughs. They were drawn into communities discovered by cyber-jihadist tactics of memeing and hashtag-jacking. A flurry of content – first-person shooter footage, video games, battle segments, executions, and the iconic black flag fluttering over Arcadian scenes of peace and plenty – absorbed them into the apocalyptic dreamworld. Yet ISIS accounts were simply regimenting the ordinary experience of being on Twitter, the better to recruit a mobile theocratic army. They are only the most organised millenarian killers. Recent “lone wolves” incubated in the social industry are notable for the fact that their atrocities are intended as interventions in a perceived coming reckoning, whether it is #whitegenocide or the “incel uprising”. And the killers intend their acts to be final, expecting or courting death.
“Apocalyptic time” is the time of the social industry. Life on the platforms is continuously accelerating towards the latest climax, the latest showdown, new shocks that might engender unworldly confidence in miraculous possibility. Social contagion, and immersion in the feverish excitement of lifeworlds extruded from the secular order, is not a tactical invention of jihadists. It is the sum total of “trending topics”. It is the stuff of banality – but then, as philosopher Maurice Blanchot said, “the Apocalypse is disappointing.”
There is a pointed irony here. The social industry evolved from Pentagon conservatism and Silicon Valley libertarianism. It was guided to dominance by Washington liberalism, and heralded by the emancipatory politics of Occupy and the Arab Spring. Its offer is liberatory: in exchange for data, it promises new forms of participation, new ways of speaking up. And yet, as Philip Pullman would tell us, every tool has intentions of which the user knows nothing. What if the industry has instead given us new forms of murderous reaction? What if it has helped birth the fascist potential of the 21st century?
Richard Seymour is the author of "The Twittering Machine" (Indigo Press), published on 19 August
Humans have the capacity to imagine - to see that which is not there. In his book "Out of Our Minds: What We Think and How We Came to Think It" (Oneworld), Felipe Fernández-Armesto argues that it is this startling ability that has fuelled human development and innovation through the centuries. The book takes in science, politics, religion, culture, philosophy and history, to examine imaginative leaps from the first Homo sapiens to the present day. Here, Fernández-Armesto - author of many books and currently William P. Reynolds Chair in the history department at the University of Notre Dame - discusses his arguments.
What brought you to this subject matter?
For historians, the big question is, "why do we have history at all?" Why are humans the only cultural species to experience the rapid, convulsive changes we call "history"? Traditional answers appeal to some weird and improbable force that’s external to history: providence or progress or scientific laws. I’ve given a lot of attention in past work of my own to evolution, environment, and energy as drivers of historical change. But I give the biggest, most conspicuous, and most astonishing role to ideas. We are culturally mutable creatures because we keep thinking of new ways of regulating our lives, renewing our relationships, managing our environments – in short, seeing the world differently from the way it is and labouring to re-craft it to match our ideas of how we’d like it to be.
How do you define the human imagination?
It’s the power of seeing what isn’t there. It’s not exclusively human, but it’s a faculty enormously bigger in humans than in any other species we know of. It’s compounded chiefly of two evolved faculties: first, anticipation - the ability to see what isn’t there yet - which favours survival because it enables predators and prey to guess what might be round the next bend or over the next crest. Humans have a lot of it to make up for the deficiencies of our hunting and predator-avoiding equipment – such as speed, agility, talons, and fangs – compared with rival species. The second vital ingredient is memory – the power of seeing what isn’t there any longer. Paradoxically, it helps to have a bad memory, because an experience misremembered becomes a new idea. Humans usually congratulate themselves on superior mental faculties. However, our memories are, as common experience informs us, deceitfully unreliable, and (typically) measurably worse, in some quantifiable tests, than those of non-human apes. In partial consequence, thanks to a combination of good anticipation and bad memory, we have relatively rich imaginations, which generate world-changing ideas.
How do you define ideas, and where did you draw the parameters for the ideas you examined here?
For purposes of the book, I stick to merely mental facts, and focus on thoughts with world-changing power – the new ideas that drive other changes. I don’t try to catalogue all potentially world-changing thoughts but select – and try to trace to their origins, however remote – ideas that are still around today, making and re-making our world.
You write: “Probably no more than a dozen subsequent ideas compare, in their power to change the world, with those of the six centuries or so before the death of Christ." Could you expand?
I can go further and say that the most stunning and most influential ideas pre-date the “age of sages”. The origins, for instance, of the idea that there are realities undetectable by sense-perception are untrackably ancient. To what genius did that inexplicably strange yet vastly creative notion first occur? Think, next, of infinity or eternity – ideas too huge to be products of experience. Or my favourite idea: that of nothing. It’s so elusive that as soon as you think of it, it ceases to be itself and becomes something. It’s literally beyond experience. Yet it occurred to profound thinkers among our remote ancestors. Obviously, in specifying the six centuries or so before Christ, I was thinking of the Hundred Schools of China, classical Greece, and the worlds of the Jewish and Christian scriptures – where great sages ran out the grooves in which our thinking is still largely confined. Aristotle’s an irresistible example. I tell the story of how Walter Guthrie was amazed as a boy by how “modern” the great philosopher seemed. Only in manhood did Guthrie realise that it wasn’t that Aristotle was modern, but that moderns are Aristotelian: we still broadly rely on his descriptions of how to tell truths from falsehoods.
What are the origins of modern science?
Depends what you mean, but if what we usually call the “scientific revolution” is in your mind – the early-modern paradigm-shift that put observation and experiment at the top of the elite’s scale of truth-values – I’d pick out four key influences: first, magic, because magic and science overlap; both are attempts to control (and therefore to understand) nature. So alchemy becomes chemistry, astrology astronomy, quackery medicine. Then I’d cite the changing social context of the time in Europe, when science became a suitable occupation for gentlemen no longer obliged to prioritise war. The result was a tremendous release of talent and patronage into scientific endeavour, which, previously, had been left, on the whole, to clerics and artisans. Firepower in war had a lot to do with the switch of the knightly class into peaceful and productive work, as battlefields could be largely consigned, more cheaply, to fire-armed hoi polloi. Then, crucially, the global interactions that ensued from long-range commerce and imperialism enriched learning, especially in Europe, where most world-girdling voyages began and ended. Collectors’ Wunderkammern were proto-museums in which samples and specimens from all over the world became available for study. Explorers’ reports of previously unknown environments and life-forms made scholars re-write the encyclopaedia inherited from antiquity. Finally, Christianity helped nurture science by stimulating a search for divine order in an apparently chaotic world and, more generally, by exalting the study of nature as God’s work. People prate about Galileo, without realising how his religious convictions emboldened his stance, or how important church people – especially Jesuit educators – were in spreading his ideas. Far from being a benighted influence, Catholicism has venerated science as a gift of God. 
As a famous priest who formerly led my university used to say, “If there’s a conflict between religion and science, there’s something wrong with the religion, or the science, or both.”
Of course, you can take a different approach and ask, “Who first privileged empiricism?” That question, I think, will lead you to early Taoists, scanning earth and sky from their sacred watchtowers. If you worship Nature, you have a very good reason for observing it accurately.
You write: “If I had my way, we would drop the word ‘Renaissance’ from our historical lexicon." Why?
It dates from a time before historians realised that classical antiquity didn’t have to be reborn: people in Europe always looked back to it with reverence, as did those in other parts of the world to supposed golden ages of their own. What we call the Renaissance in the West was, rather, one in a series of episodes – from the “5th-century Renaissance” on through the Northumbrian, Carolingian, Ottonian and subsequent Renaissances – of accelerated interest in Greek and Roman ways of doing and thinking. I think the really big new thoughts of the early modern period - what Michelet called “the discovery of the world and of man” - came at least as much from looking out to the rest of the world as from looking back into Europe’s past. I might mention, for instance, the new economic thinking of the School of Salamanca, or new developments in the scope of international law, or the new understanding of sovereignty, or Las Casas’s claim of the planet-wide unity of humankind, or the challenge to Christian complacency from interactions with Brahmins, Mandarins, and their ilk.
How does racism fit into this view of history as being driven by ideas?
Racism is an excellent example of one of the themes of my book – the power of bad ideas, as well as or rather than good ones. I define racism as the doctrine that one person is superior to another solely by virtue of membership in a group identifiable by supposed or real inherited physical or moral characteristics. It’s nonsense. No objective test can justify it. But from the late 18th century, until well into the 20th, a lot of apparently respectable scientific evidence supported it, especially in serology, craniology, and what at the time was called anthropology. In some reputable opinions, the theory of evolution provided a strong theoretical framework in which racism seemed to make sense, as white people established ascendancy in the “struggle for survival.” Scientists’ obsession with classification and with ranking “higher” and “lower” forms of everything helped. Of course, there were and are lots of non-intellectual reasons for being a racist – the self-interest of dominant elites or embattled communities, the fear of alterity, the abuse of racism in defence of cultural traditions or economic interests, the imperial agenda of the white man’s burden. But Out of Our Minds is strictly about the intellectual context, without which we can’t fully understand the phenomenon, or the scale of its menace.
This article is a preview from the Autumn 2019 edition of New Humanist
The emotional resonance of old snapshots is hard to beat. They pack a Proustian punch, adding texture to memories that fade over the decades; not just of that joke Grandpa used to tell at Christmas, but of the pattern on his thick knitted pullover; or such-and-such a friend whom we lost contact with, but who, in the moment, was a key part of the holiday party. Trapped on little slips of 4x6 photographic paper, they even begin to become the memories themselves. We can remember that beach we sat on, but nothing of the week before or after.
What’s almost as powerful is how restricted those resonances are to our own experience. We can well up with tears at an old, curling photo of a family picnic three decades ago, but there’s little more unbearable than being forced to look through the photo albums of others – even if compulsively flicking through an acquaintance’s Facebook album has become a very modern form of procrastination. To capture a social life and have it resonate with complete strangers, decades into the future, takes a unique skill; an eye for the universal, for fragments of tenderness and euphoria. It also helps, if you’re that sort of photographer, to have interesting friends.
Nan Goldin has all of these things. Her photographs of her varied social scenes in New York in the 1970s and 80s have become iconic representations of the time, and with good reason. It’s not just that many of them were famous or infamous in their own right – musicians, artists and filmmakers like Richard Hell, Cookie Mueller and John Waters, or even Keith Haring and Andy Warhol. She also managed to capture the strange combination of excitement and risk alongside the poverty and drug addiction of downtown Manhattan. Goldin did so with a sleight of hand that gave the images emotional potency, while conferring on the subjects a respect and dignity that most contemporary representations of the scene denied them.
Some of the most powerful of these images are collected together in her film The Ballad of Sexual Dependency, currently on show in a special exhibition at the Tate Modern. Featuring almost 700 images taken by Goldin over the preceding decade, the work was first shown in the mid-1980s in the form of a slideshow, accompanied by a soundtrack of songs whose lyrics relate literally to the images on display. Goldin organised the slideshow around a selection of basic themes: kisses, for example, or nude portraits of women, or drug use (particularly heroin), domestic violence and sex. Most but not all of the photographs feature people. They are captured in moments of high excess now long extinguished, and sit on the screen in the darkened room where we, the audience, gaze back, nostalgic for a moment most of us never experienced. Then they’re gone, and another image on the same theme appears. We drink it in.
Goldin’s magic touch is the same thing that distinguishes her from so many other photographers who aim to document a moment or a scene. Sitting in the dark with the other audience members, I thought back to the traditional image of the social photographer as a silent observer, moving through a city recording the action around them almost unnoticed. It was based on the ease with which the photographer could capture a scene; a quick snapshot that left the subjects undisturbed, maybe even oblivious, before they snuck back to their darkroom or newspaper offices.
Goldin, on the contrary, did no such thing. Her photos are so lucid because she rejected the division between observer and participant. These were her friends and this was her life, and as a result they betray an unusual depth and intimacy, capturing moments that an outsider would be unlikely to get access to. Goldin’s own biography is well documented, including her drug use, sex life and her experience with grief and trauma following her sister’s suicide as a teenager. Goldin is frequently the subject of her own lens in the Ballad; living in an abusive relationship, she documented her injuries resulting from her boyfriend’s violence. But those photos hold no superior placement over the injuries and excesses of others; each get their moment, whether joyous or painful. She is one of her own friends.
In part, Goldin’s access relied on the fact that her art was produced not just from but for her community, emerging from the punk DIY ethos of the cultural scene at the time. The Ballad of Sexual Dependency was originally screened as a slideshow for her friends and associates, in a double-bill with a film by John Waters, the “Prince of Puke” director whose homemade films starring his friends (including Cookie Mueller) were to make him a cult icon.
A poster for the original screening accompanies the film in the Tate exhibition. The title itself was drawn from The Threepenny Opera, Bertolt Brecht’s musical about Victorian London’s demi-monde, written at the height of Weimar Germany’s own subversive cultural boom. Like Weimar culture, 1980s New York has become a byword for a certain type of sleazy cool, and that’s where the unsettling dynamic emerges in watching the Ballad today. It is all too easy to consume as glamour, if you don’t have your critical wits about you. Goldin herself works against this tendency – at the end of the film is a dedication to those featured who have since died, and it’s long, far too long – but the tendency is there nonetheless, erasing in your mind the suffering attached to addiction, overdose and the emergent Aids crisis that haunts every photo. It’s a function of the voyeurism implicit in viewership.
Goldin hasn’t forgotten that link. Today she is a powerful moral voice in the art world, most recently since founding Prescription Addiction Intervention Now (PAIN), an advocacy group that pushes art institutions to refuse funding from the Sackler Trust. This is due to its links with Purdue Pharma, the pharmaceutical company whose production and marketing of the painkiller OxyContin is regarded as a key driver in the US opioid crisis. Goldin herself became addicted to OxyContin in 2014 after it was prescribed to her for tendonitis. The campaign is already producing successes. The National Portrait Gallery refused a £1m donation from the Sackler Trust after Goldin threatened to pull out of a career retrospective at the gallery, and the Tate have followed suit. Deaths from drug overdoses have doubled in the last decade, and prescription and synthetic opioids are to blame for the vast majority of those cases.
What might representations of the opioid crisis look like today? The issue is simultaneously a moral, political and formal one – as it always has been. It is too easy to “blame” artists for “making addiction sexy”. Bill Clinton tried as much when he attacked “heroin chic” in fashion photography in the late 1990s, a trend which saw artists like Goldin and Larry Clark as forebears. “The glorification of heroin is not creative; it’s destructive. It’s not beautiful; it’s ugly,” he said, shifting responsibility for that particular crisis from the US government to jeans manufacturers.
But it is true that the snapshot photograph of drug addiction carries with it the baggage of urban glamour and fashion. Today opioid addiction finds its most prominent cultural representation in music. Cloud rappers such as Lil Peep and 6ix9ine, for example, regularly reference opioid use and addiction, and sometimes die from it. Yet the crisis, stemming from the overprescription of “legitimate” painkillers, hits both Midwestern and middle-class America just as strongly. According to the American Farm Bureau Federation, almost three-quarters of US farmers have been directly affected by opioid addiction. While that side is beginning to find voice in country music, with artists like Angaleena Presley and Brandy Clark releasing tracks about painkiller addictions, there remains a crisis of wider cultural representation.
It is unlikely the snapshot photograph will ever play a role in that. Thanks to new technologies, our relationship with photography has fundamentally and irreversibly changed. While photo-sharing apps like Instagram marketed themselves on their reproduction of Goldin’s snapshot aesthetic – making retro visual effects and “errors” like light bleeds and old film stocks as easy as selecting a filter – they have fundamentally altered the social context of such photography.
The idea of a photograph being a “snapshot” of a moment has passed. The cost restriction of taking photos has disappeared with camera phones, and taking and editing multiple photos has become an everyday norm. With that, people are increasingly savvy and even paranoid about their own image, aware of how quickly and malignantly a photo can spread online without their consent. This, in many ways, has democratised a form of self-representation but removed its casual nature: photos are more stylised and posed than ever. To take out a camera at a party or a bar, or during sex, means a very different thing today than it did in the late 1970s. The snapshot is dying, and with it we are losing the intimacy and tenderness of those representations, and the emotional, historical and political power they might hold.
Nan Goldin’s “The Ballad of Sexual Dependency” is at Tate Modern, London, until 27 October
This article is a preview from the Autumn 2019 edition of New Humanist
For me, it all began with Lucy Liu, standing in the doorway of a fancy Manhattan office late at night, demanding that her exhausted assistant order her “that thing that I like, from that place with the gay waiter, the closeted one”, before disappearing back to her desk with a perfectly executed ponytail swish. Ninety seconds into the 2018 Netflix original film Set It Up, this moment established Liu’s character as an overweening, overworked boss with a secretly tragic personal life and a gift for fast-paced repartee – in other words, a romantic comedy heroine in dire need of a hero.
The film was released onto the subscription streaming platform as part of what Netflix branded the “Summer of Love”, a season of new films and adaptations that celebrated the humorous side of romance. Alongside Liu’s efforts with Taye Diggs in Set It Up, other highlights that were part of the promotion included To All the Boys I’ve Loved Before (a sweet teen romance about awkward crushes) and Irreplaceable You, a tearjerker about a woman dying of cancer who tries to set her fiancé up with his next partner before she dies.
Critics had mixed feelings, but viewers were enthusiastic. Recommendations spread by word of mouth, via online op-eds, social media and podcasts. According to a Netflix company report, over 80 million subscribers watched at least one film from its slate of romantic comedies.
Later the same summer, a globetrotting comedy called Crazy Rich Asians set the American movie industry abuzz by grossing over $230 million from a modest budget of just $30 million. The film received a lot of positive coverage as the first film from a major Hollywood studio to feature a majority cast of Asian descent in 25 years. It picked up a slew of award nominations. It was hailed as a major step forward for onscreen ethnic representation, and two sequels were quickly put into development.
Suddenly, the romantic comedy was back in favour. Dana Fox, who had penned a number of commercially successful films in this genre in the early 2000s including The Wedding Date, reported that two weeks after Liu’s ponytail flick in Set It Up appeared on Netflix, studio executives were calling her about dusting off long-ago-rejected romcom scripts. Now, further instalments are already in the works for To All The Boys I’ve Loved Before and several other 2018 hits. The audience had spoken: the romcom drought years were over.
The reason why romantic comedies faded from our screens in the first place is a straightforward case of cause and effect: the films decreased in quality, so people got tired of them. From the glory days of the 1990s, when iconic films like Clueless, You’ve Got Mail and Four Weddings and a Funeral not only raked in the box-office receipts but also reshaped pop culture’s concept of love, we descended to the derivative offerings of the early 2000s like How to Lose a Guy in 10 Days and Failure to Launch. The same actors appeared over and over again: Adam Sandler, Katherine Heigl, a post-Friends Jennifer Aniston, plus pre-Oscar-winning incarnations of Matthew McConaughey and Colin Firth.
Nora Ephron, the writer and director who probably did more than anyone else to create the language of the romcom – that fast-paced, rat-a-tat style of dialogue which is simultaneously intimate and emotionally unavailable – was diagnosed with cancer in 2006 and only made one more film before her death in 2012 – Julie and Julia, which isn’t a romcom at all. Billy Crystal in When Harry Met Sally is arguably the greatest practitioner of Ephron’s style, his character’s self-loathing and insecurity palpable in every cleverly constructed, arrogant line. Compared to that, Sandler’s banal drawl was never going to measure up.
Always capricious, Hollywood executives interpreted the fall-off in returns for romantic comedies as being an indictment of the genre itself, rather than these lacklustre examples of it. At the same time, the craze for blockbuster superhero films was just beginning. Iron Man, the first instalment in the Marvel Cinematic Universe, was released in 2008 and made half a billion dollars.
Big-budget franchises like this made much more commercial sense, with their legions of fans and potential for endless spin-offs and merchandise. Why bother with mid-budget movies about a magazine editor who maybe finds love and learns about herself along the way, which might flop completely, when Iron Man 3 was laughing all the way to the bank?
But a lot changed in that fallow period for romcoms, once Meg Ryan had hung up her perm and Colin Firth’s damp white shirt had dried out. A new era of technology had dawned, with streaming services like Netflix and Amazon Prime now deluging viewers with choice. With better data about what people actually viewed and when, these companies could see that romantic comedy films were far from dead. In a time of political upheaval and horror, their subscribers were actively seeking out the reassuring certainty of a happy ever after.
It’s no accident either that many of the new romcom hits are films with more diverse casts than their predecessors of the early 2000s, where only white, heterosexual couples ever seemed to walk off into the sunset. This was a major talking point around Crazy Rich Asians, but it’s actually what a lot of these new romcoms have in common.
To All the Boys I’ve Loved Before is about a Korean-American teen finding love, but Jenny Han, author of the original book of the same name, said that it was a real struggle to find filmmakers who were willing to keep her original protagonist, rather than replace her with a white version – which is how the film ended up on Netflix, rather than in cinemas with a major studio. Lucy Liu was 49 when Set It Up came out, and part of that film’s appeal was how incredibly rare it was to see a woman of her age and ethnicity (she’s Chinese-American) play a comic character who also talks about sex.
There are romcoms slated for release in summer 2019 about disabled characters, about young gay men, about polyamorous people. Suddenly, romance on screen has begun to reflect the infinite variety and complexity of love in real life, and unsurprisingly viewers are far more interested in it than they were in the remote, stale fairytales of the previous decade. Indeed, so far has the romcom come that in June 2019 an entire film festival was devoted to it in Los Angeles – the first event of its kind. The founder, Miraya Berke, said that she wanted romcom fans to have a space in which to love their films, just as sci fi, superhero and comics fans do.
The return of the romcom is about money, because everything in entertainment always is: a kind of film that was failing to bring in cash is now doing that again, so it’s once more a desirable thing for studios to make. But behind that there’s a subtle shift, an acknowledgement that the balance of power between Hollywood executives and the humble viewer has changed. There is so much out there to watch that just being released is no longer any guarantee of success for a film.
Viewers can and indeed do vote with their eyes and their wallets on what they want to see more of. We understand now how the game is played. In the run-up to the release of Crazy Rich Asians, for instance, I saw several social media campaigns about how important it was, if diversity and representation mattered to you, to go and see the film in a cinema on its opening weekend. The power to decide what appears on our screens next no longer resides just in a handful of LA offices, but with you, every time you make the decision about what to click on next. And so far, what we’ve clicked on is happily ever afters.
The paperback version of my book, Shoot For The Moon, is just out and we have made a video to celebrate the magic of the Moon landings. I worked with magician Will Houstoun to tell the Apollo story through the medium of sleight of hand – hope you enjoy it!
My latest book, Shoot For The Moon, has just been published, and presents a radically new look at the science of success.
In July 1969, Apollo astronaut Neil Armstrong set foot on the Moon, one of humanity’s greatest achievements. A few years ago I was chatting to comedian and space enthusiast Helen Keen about the Apollo landings. I knew that the technology used during the missions had been extremely well documented, and asked whether anyone had explored the psychology behind this remarkable achievement. Helen didn’t think that anyone had, and kindly put me in touch with her friend, Craig Scott. Craig is another space enthusiast and, over the years, has become friends with many of the people who populated NASA’s Mission Control during the Apollo era. He put me in touch with this remarkable bunch, and they generously agreed to be interviewed about their historic work.
I discovered that most of the controllers came from modest, working-class backgrounds, and that they were often the first in their families to go to college. Perhaps most surprising of all, they were astonishingly young. When Neil Armstrong set foot on the Moon, the average age of the mission controllers was just twenty-six.
After extensive interviewing and research, I eventually identified the eight principles that I believe make up the Apollo mindset. ‘Shoot For The Moon’ describes these principles, including how the seeds of success were sown in President Kennedy’s charismatic speeches, how pessimism was crucial to progress, and how fear and tragedy were transformed into hope and optimism. The book also describes techniques that allow you to incorporate these principles into your own life. Whether you want to start a new business venture, change careers, get promoted, escape the rat race or pursue a lifelong passion, these techniques will help you to reach your own Moon.
Books on success usually focus on genetically gifted Olympians, hard-headed CEOs and risk-taking entrepreneurs. This book presents a radically different perspective on how to achieve your aims and ambitions. It tells the inspirational story of a group of ordinary people who did something extraordinary. Perhaps most important of all, once you understand how they did what they did, you can follow in their footsteps and achieve the extraordinary in your own life.
As many of you know, the Day of Reflection conference, scheduled for November 17 in NYC, has been cancelled, and hundreds of ticket holders are now left seeking refunds.
I was forced to pull out of this event nearly two months ago and have said very little about it since. Now that Travis Pangburn has officially announced that he will be “folding” his touring company, Pangburn Philosophy, I can give a brief account of what happened.
Although Pangburn still owes several speakers (including me) an extraordinary amount of money, we were willing, as recently as a few days ago, to participate in the NYC conference for free if he had handed it over to us and stepped away. I have been told that this offer was made, and he declined it.
I find it appalling that so many people were needlessly harmed by the implosion of Pangburn Philosophy. I can assure you that every speaker associated with the NYC event will be much wiser when working with promoters in the future.
The post A few thoughts on the implosion of Pangburn Philosophy appeared first on Sam Harris.
As I mentioned yesterday, I’ve recently gone back to school for an M.Ed in Higher Education. Regular readers may know that I already have a humanities PhD, which raises a pretty obvious question: “What the hell, Dan? Aren’t you done with school? Why collect yet another degree? Seriously, what is wrong with you?”
There are a few reasons I decided to go back to school, but most of them ultimately boil down to one thing: the academic job market. I’ve been writing about my experiences looking for a job over the last few years, and after four years and dozens and dozens of applications, it became very clear that something had to change if I planned on actually getting a job before retirement age.
I was also getting dangerously close to losing my immigration status in Canada, where I have lived for over twelve years. My three-year postgraduate work visa was set to expire this past summer, and with no employment on the horizon that would satisfy CIC requirements for renewal, going back to school was essentially the only way for me to stay in the country short of marriage (which an immigration lawyer actually suggested).
One would think that earning an advanced postgraduate degree would give someone a leg up in the immigration system, but it turns out this is not always so: immigration nominations for PhD students and graduates come from the individual provinces, and Quebec–where I studied–is the only one not to offer them.* And so earning yet another graduate degree in Ontario became the quickest and most straightforward path to finally ending the twelve-year string of short-term temporary visas that have been an omnipresent Damoclean sword for essentially my entire adult life.
But why Higher Ed?
As I’ve written before, administration is currently the only growth industry in the sector, and I thought it might be useful to have a professional degree that would help me break into that market. I also do honestly believe that schools would benefit from having more administrators who have first-hand experience with teaching and research, and with actual lived experience as graduate students and academic contract workers. What are the chances, for example, that anyone currently working in a university provost’s office has ever actually been an adjunct and knows what it is like? Or has even been a graduate student any time after the 1980s?
Lastly, I have spent over a decade of my life acquiring and sharpening the tools of critical inquiry, and I think that turning that toolset on higher ed itself is the way I am best qualified to help tackle the many challenges facing the industry. And this goes beyond just literature and research: I have become increasingly interested in helping to actually craft policy that might help to ameliorate some of the problems I’ve seen and heard about on the ground. This degree is a first step in that direction.
*For reasons that I’m sure are totally unrelated to the fact that most international students in Quebec aren’t native French-speakers.
Hello everyone! Many apologies for my long absence, but things got a little busy for me when I went back to school (yes, again) to actually officially study Higher Education!
The upside for you, dear readers, is that my new studies have provided lots of new grist for the old mill, and I plan to post fairly regularly about my ideas, experiences, and research over the next few semesters. This will include everything from day-to-day experiences in the programme itself to discussions of the existing literature on higher ed to summaries of my own research in the field (and possibly links to full papers for the true masochists among you).
Here’s a list of the topics I plan to address in the next few weeks, most of which derive from seminar papers I will be writing:
Is the Human Capital Model a Myth? Signalling, Credentialism, and Rent-Seeking in Higher Ed
The Idea of a Stoic University (Or: How to Un-coddle the American Mind)
Transnational Mobility in the Academic Labour Market for the Humanities
Graduate School as the Structural Model for the Theory of Emerging Adulthood
I’m looking forward to bringing you all along with me on this new journey!
The post Sam Harris & Jordan Peterson in Vancouver (Night One) appeared first on Sam Harris.
The post Sam Harris & Jordan Peterson in Vancouver (Night Two) appeared first on Sam Harris.
Recently, a few people on Twitter were kind enough to mention ‘Theatre of Science’ – a joint project between best-selling science writer (and pal) Simon Singh and me from many years ago. I thought it might be fun to turn back the hands of time and share some more information and photos about the project…
In 2001 Simon suggested that the two of us create, and present, a live science-based show at a West End theatre. I knew that this type of entertainment had been popular around the turn of the last century, but was initially sceptical about it working for a modern-day audience. However, Simon won me over and I agreed to give it a go. Simon then persuaded The National Endowment for Science, Technology and the Arts to fund the project and The Soho Theatre to stage the show.
In the first half, Simon used mathematics to ‘prove’ that the Teletubbies are evil, undermined The Bible Code, and illustrated probability theory via gambling scams and bets. After the interval, I explored the psychology of deception with the help of magic tricks, optical illusions and a live lie detector. It was all decidedly low-tech and mostly depended on an overhead projector, a few acetates, and some marker pens! We opened in March 2002 and quickly sold out. The reviewers were very kind, with The Evening Standard describing the show as ‘… a unique masterclass on the mind’ and What’s On saying that it was “…uplifting, thought-provoking and frequently hilarious.” In 2002 we also took the show to the Edinburgh Fringe Festival.
In 2005 we staged a far more ambitious version of the show at the Soho Theatre.
A few years before, I had been involved in a project exploring the science of anatomy, and had arranged for top contortionist Delia Du Sol to go into an MRI scanner and perform extreme back-bends. During Theatre of Science, we showed these scans to the audience as Delia bent her body into seemingly impossible shapes and then squeezed into a tiny perspex box.
In addition, musician Sarah Angliss demonstrated the science behind various weird electronic instruments, and performed songs on a saw and a theremin!
We wanted to end the show on a genuinely dangerous, science-based stunt. HVFX – a company that makes high voltage electricity equipment – kindly supplied two huge Tesla coils capable of generating six-foot bolts of million-volt lightning across the stage. At the end of each show, either Simon or I entered a coffin-shaped cage and hoped that it would protect us against the force of the million-volt strikes. The stunt attracted lots of media attention and once again we quickly sold out.
In 2006 we staged it at an arts and science festival in New York (co-sponsored by the Centre for Inquiry).
Nowadays we are used to people enjoying an evening of science and comedy in the theatre, but back then lots of people were deeply skeptical about the idea. If we proved anything, it was that it’s possible to attract a mainstream audience to a show about science.
Anyway, I hope you enjoyed reading about it all, and huge thanks to everyone who worked so hard to make the project a success, including: Portia Smith, Delia Du Sol, Sarah Angliss, Stephen Wolf, Tracy King, Nick Field, HVFX, Austin Dacey, Jessica Brenner and Caroline Watt (who came up with the title for the show) and, of course, Simon Singh!
I have teamed up with the folks at Business Insider to make this short video containing science-based tips on how to be more productive and a better leader. Enjoy!
Here are some things that you will hear when you sit down to dinner with the vanguard of the Intellectual Dark Web: There are fundamental biological differences between men and women. Free speech is under siege. Identity politics is a toxic ideology that is tearing American society apart. And we’re in a dangerous place if these ideas are considered “dark.”
I was meeting with Sam Harris, a neuroscientist; Eric Weinstein, a mathematician and managing director of Thiel Capital; the commentator and comedian Dave Rubin; and their spouses in a Los Angeles restaurant to talk about how they were turned into heretics. A decade ago, they argued, when Donald Trump was still hosting “The Apprentice,” none of these observations would have been considered taboo.
[Update to the update: SIU has posted a statement on the programme here. As it essentially confirms my suspicions that it is designed to steal soft academic labour from new PhDs by trading on their institutional loyalty and need for affiliation without paying them for their services, I provide the link here but see no need to comment further.]
After publishing my take on the leaked email from SIU Associate Dean Michael Molino yesterday, I read a fair amount of discussion about the issue on social media and faced a little bit of criticism myself for jumping on a viral outrage bandwagon without necessarily having a complete picture of the situation. I still stand by everything I wrote in yesterday’s post, but I would like to take the opportunity to address a few questions and criticisms and clarify exactly what I was and was not claiming in my analysis.
Is this email even real? How do we know it really said everything that ended up in the viral version?
Okay, fair enough. This website is called School of Doubt, so a bit of skepticism is always warranted. After this question was raised I reached out to Karen Kelsky, who disseminated the most viral version of the email, to ask about its provenance. She confirmed that it was forwarded to her by an SIU faculty member she knew personally. Epistemically speaking that is good enough for me, but nothing’s perfect, I guess.
Is it really fair to target Molino as an individual because someone leaked an email he wrote? Isn’t this just doxxing that invites harassment?
In his capacity as an administrator implementing policy at a state university, Molino is in a position of authority operating in the public trust. This requires transparency and accountability, and I don’t think sharing his official contact information is doxxing any more than it would be for an administrator at a government agency like the EPA or FCC. Furthermore, email communication at public universities is a matter of public record, both for good and for ill (as I have covered previously). While people may disagree about the ethics of leaking and whistleblowing, it is really not possible to argue that such an email could have been written with any reasonable expectation of privacy. But yes, he’s probably going to have a bad time and that sucks.
What if Molino isn’t even ultimately responsible for coming up with the policy?
Well, bluntly, who cares? He is clearly working to implement it. Not to get all Godwinny, but we’ve heard that one before. You can write to the Provost instead if you want. I won’t provide his email but I bet you can find it.
Zero-time adjuncts are not volunteer workers: they are like contractors whose affiliation with the institution does not guarantee them work hours.
First off there is a terminology problem here. Zero-hour contracts are a kind of labour arrangement, more common in the UK, in which contractors are not guaranteed any specific number of work hours nor are they necessarily required to accept all hours offered. Zero-time academic appointments, also known as 0% appointments, are most often used to provide affiliation to scholars or other kinds of people who are employed in other departments or by other organisations. For example, an economist might be tenured faculty at a business school but also have a zero-time appointment in the economics department of the arts faculty of the same school. This person might advise students or otherwise participate in research and service in both departments, but it is understood that the work in their 0% appointment is covered by the pay from their full-time appointment. Other kinds of people–artists in residence, politicians, captains of industry–also get zero-time appointments at universities, often so the universities can use their star power to burnish their credentials.
Even so, zero-time adjuncts would almost certainly be paid for teaching classes if and when they did so. Not to do so would probably be illegal, right?
Okay, here is the crux of the issue. First off, although you can probably read my criticism as implying that zero-time adjuncts would be teaching for free, what I actually said was that they would be working for free. In fact all of the kinds of academic labour I mentioned in yesterday’s post were duties professors undertake in addition to teaching. Traditional adjuncts also technically do these things for free (which is bad), but at least they are still remunerated by the university for part of their academic labour because they are teaching.
So what does it mean when they also don’t get teaching?
Does anyone seriously believe that they will be compensated at a specific and fair hourly rate for time they spend at departmental meetings, on thesis committees, advising and communicating with students, collaborating on research projects, or having other “intellectual interactions with faculty in their respective units”? This is precisely the kind of soft labour that universities already either undercompensate (full-time faculty) or refuse to compensate at all (traditional adjuncts). Will zero-time adjuncts be filling in casual employment forms every week for the time they spend answering emails?
Like it or not, “professor” is still a word with a meaning. Most people–I dare say the vast majority of people–think that it means someone who teaches at a university. Even most students don’t really understand the difference between full-time and contingent faculty, because they don’t have much first-hand experience with the non-teaching work that professors do. Or when they do (e.g. academic advising, mentorship, etc.), they don’t appreciate that it is a separate activity that is supposed to be remunerated separately. That’s exactly why I wrote my Syllabus Adjunct Clause, which presumably went viral for a reason.
This lack of awareness is why it is so dangerous to allow this precedent. Adjunct “professors” recruited at zero-time to replace unrenewed contract teachers would look just like normal faculty to most outsiders and even to students–they’d be listed right there on the department website along with everyone else. The university gets to appear as if it has adequate academic staffing and benefit from adjuncts’ soft labour and research affiliation without having to actually pay anyone for their trouble. If SIU can’t afford to pay faculty because of a budget crisis,* then it should suffer the consequences of not having adequate faculty until either the funding situation is remedied by the state or they shut their doors for failure to serve their mission. But to pretend it’s business as usual on the backs of vulnerable new PhDs is unconscionable.
*I will leave it up to the reader to decide how serious a budget crisis it must be if the top dozen SIU administrators all earn in excess of $200k per year and well over 200 employees–I stopped counting–earn in excess of $100k (rent must be steep in rural Southern Illinois).
Southern Illinois University has finally taken the step that we all knew was coming, whether we openly admitted it to ourselves or not. The progression was too obvious, the market forces in question too powerful, for this result to have been anything but inevitable. The question was never if, but when, and it turns out that when is today.
Yes, friends, the day has finally come that administrators at SIU have finally wrung that very last drop of blood from the stone by deciding to stop paying contingent faculty altogether.
Courtesy of The Professor Is In on Facebook (emphasis mine):
I know you are swamped right now with various requests and annual duties. I apologize for adding to that, but I am here to advocate for something that merits your attention. The Alumni Association has initiated a pilot program involving the College of Science, College of Liberal Arts, and the College of Applied Sciences and Arts, seeking qualified alumni to join the SIU Graduate Faculty in a zero-time (adjunct) status.
Candidates for appointment must meet HLC accreditation guidelines for appointment as adjunct professors, and they will generally hold an academic doctorate or other terminal degree as appropriate for the field.
These blanket zero-time adjunct graduate faculty appointments are for 3-year periods, and can be renewed. While specific duties of alumni adjuncts will likely vary across academic units, examples include service on graduate student thesis committees, teaching specific graduate or undergraduate lectures in one’s area of expertise, service on departmental or university committees, and collaborations on grant proposals and research projects. Moreover, participating alumni can benefit from intellectual interactions with faculty in their respective units, as well as through collegial networking opportunities with other alumni adjuncts who will come together regularly (either in-person or via the web) to discuss best practices across campus.
The Alumni Association is already working to identify prospective candidates, but it asks for your help in nominating some of your finest former students who are passionate about supporting SIU. Please reach out to your faculty to see if they might nominate a former student who would meet HLC accreditation guidelines for adjunct faculty appointment, which is someone holding a Ph.D., MFA, or other terminal degree. One of the short-comings with our current approach to the doctoral alumni is that the database only includes those with a Ph.D. earned at SIU, but often doesn’t capture SIU graduates with earned doctorates from other institutions. Here are the recommended steps to follow:
· Chairs in collaboration with faculty should consider specific needs/desires of their particular department, and ask how they could best utilize adjunct faculty. For example, many departments are always looking for additional highly qualified members to serve on thesis committees, and to provide individual lectures, seminars, and mentorship activities for both graduate and undergraduate students.
· Based on faculty recommendations, chairs should identify a few good candidates and approach those individuals to see if they are interested. The interested candidate should provide his/her CV (along with a brief letter of interest outlining areas in which they are willing to participate) to the department chair, who can then approach the Graduate Dean for final vetting and approval.
The University hasn’t yet attempted its first alumni adjunct appointment, but this is the general mechanism already in place. Meera would like CoLA to establish a critical mass of nominees before the end of the summer. A goal of at least one (1) nominee per department would get us going.
MICHAEL R. MOLINO
Associate Dean for Budget, Personnel, and Research
COLLEGE OF LIBERAL ARTS
MAIL CODE 4522
SOUTHERN ILLINOIS UNIVERSITY
1000 FANER DRIVE
CARBONDALE, ILLINOIS 62901
In case you don’t speak administratese, “zero-time” means “unpaid.” Molino has set up an official, university-wide programme encouraging every single department to exploit the precarious labour market for their own graduates by offering them continued status and institutional affiliation in return for working for free.
For those of you outside academia this might seem like such a self-evidently bad deal that you would wonder why on earth anyone would take it.
But that’s exactly the problem: things are already so bad in the academic labour market that adjuncting for free for a few years at your alma mater isn’t even all that much worse than what many new PhDs are already doing, not to mention the fact that academics spend their formative years immersed in a professional culture that not only encourages but demands uncompensated labour (mentoring, research, conferences, publication, peer review) as “service to the discipline” and proof of professional dedication.
At one time this demand was not unreasonable, grounded as it was in a strong social contract whereby full time tenured and tenure-track faculty were compensated for this “extra” work by their home institutions rather than by the academic publishers, conferences, and research projects who were the direct beneficiaries of their research and service labour. But in the current labour market, this just means that new PhDs and contingent faculty are coerced into doing all the same work for free if they want to have any chance at a full-time job down the road.
Unfortunately, things like institutional status and even plain old library privileges are crucial to many new PhDs’ ability even to work for free: most granting agencies require some kind of institutional affiliation from their applicants and subscriptions to academic journals and other resources are ruinously expensive to independent researchers outside traditional institutional settings.
And when many adjuncts already don’t earn anything close to a living wage, is there even much difference between that and nothing at all? In the end, it’s just a few more deliveries for Uber Eats.
[Ed. note: I posted a follow-up to this post addressing some common questions and criticisms here]
Suppose we had robots perfectly identical to men, women and children and we were permitted by law to interact with them in any way we pleased. How would you treat them?
That is the premise of “Westworld,” the popular HBO series that opened its second season Sunday night. And, plot twists of Season 2 aside, it raises a fundamental ethical question we humans in the not-so-distant future are likely to face.
The post It's Westworld. What's Wrong With Cruelty to Robots? appeared first on Sam Harris.
Thank you for writing me with your question about [COURSE]. I am currently out of the office because I am contingent faculty and do not have an office.
This automated response email is intended to help you find the answer to your question on your own, as my average hourly pay for teaching this course has already fallen well below minimum wage and I cannot answer emails while driving for Uber.
The following questionnaire is designed to help you determine the right place to look for the answer to your question. Please go through it in order until you find the answer to your query. IF and ONLY IF you go through the entire list without finding the answer to your question, please follow the instructions at the end as to where to send your question in order to receive an answer directly.
Let’s begin, shall we?
1. Am I your professor, and are you currently enrolled in my class?
If the answer is NO, please consult your course schedule online to determine which professor you are supposed to be bothering with your inane question.
If you have questions about enrollment and registration, please contact the Office of the Registrar, where they receive both fair hourly pay and full benefits in compensation for helping you solve your problems.
2. Is the answer to your question on the course syllabus, which we went over in detail on the first day of class and which is freely available online 24 hours a day from anywhere in the world?
Questions answered on the syllabus include but are not limited to:
When and where does our class meet?
What assignments do we have and when are they due?
When are exams and what will be on them?
How many points are deducted from our final grade when we email you questions that are clearly answered on the syllabus?
3. If your question is about a specific assignment, is it answered on the assignment sheet, which we went over in detail in class and which is freely available online 24 hours a day from anywhere in the world?
If you do not understand specific terminology used on the assignment sheet, please try consulting your textbook’s glossary, a dictionary, or Google. You may also want to try coming to class, where I teach you what these words mean.
4. Is your question answered on our course FAQ page, which currently lists 127 commonly asked questions and is freely available online 24 hours a day from anywhere in the world?
You may find it easier to use Ctrl+F and search for specific keywords to navigate this very long document.
5. Is your question unrelated to our class, inappropriate, or just plain unanswerable?
Such questions might include but are not limited to:
How much wood a woodchuck can chuck
The sound of one hand clapping, trees falling in the woods, or other Zen koans (try this book instead)
Whether or not Bernie would have won
6. If you have reached the end of this questionnaire without finding the answer you need, you probably have a valid question. Congratulations!
Please contact your TA for assistance.
Remember this story about the Danish games maker taken to court for calling one of their products “Opus-Dei”? There is a press release today.
PRESS RELEASE MARCH 12 2013
Catholic Church’s Rights to “The Work of God” Stand Trial
On Friday, presumably immediately after a new Pope has been elected, the Maritime and Commercial High Court of Denmark will deliver a historic verdict on who has the rights to use the age-old philosophical and theological concept of “opus dei” (The Work of God).
The former Pope’s personal Prelature has claimed sole rights to the concept since the 1980s, right up until it was inevitably challenged by the small Danish card game publishing house Dema Games, when they registered (and had officially approved) their trademark “Opus-Dei: Existence After Religion” – a name that has “everything to do with the philosophical connotations, and nothing to do with the Prelature of the Holy Cross and Opus Dei”, says Managing Director Mark Rees-Andersen.
In the meantime, Dema Games and their pro bono lawyer Janne Glæsel, from the prestigious Copenhagen-based law firm Gorrissen Federspiel, have chosen to counter-sue the Prelature, which now might lose its EU trademark, on which the Danish court has the authority to rule under EU law.
The sub-division of the Catholic Church may lose its rights mainly because of the argument that the Prelature’s registration was invalid from the very beginning, as no one can legally monopolize religious concepts.
The case has been ongoing for four years, and in 2009 Mark Rees-Andersen singlehandedly and successfully defended his legal rights to his game’s website at Nominet, the authority on domain-rights issues in the UK. Dema Games retains ownership of the hyphenated “opus-dei” domain in Denmark, Great Britain, France, Poland, Switzerland, and Sweden.
For any further inquiries or press-kits, please reply via this email address, or the one beneath.
Best regards / Mvh,
UPDATE: (19/03/2013) They lost. (The sinister, secretive cult, that is. Not the games maker.)
I don’t know. Let’s see if meretricious corporate fuckwads has any effect.
Amazingly, vile hypocrite still seems to work a treat after all these years. (Do a g-search on it. That was us. We did that!)
Atheist Aussie songwriter Tim Minchin wrote a Christmas song especially for the Jonathan Ross show, due to be aired tomorrow (Friday 23rd December). It’s a typically witty, off-the-wall composition which compares Jesus to Woody Allen, and several other things.
Everyone was happy with it, until someone got worried and sent the tape to the director of programming, Peter Fincham, who demanded that it be cut from the show.
He did this because he’s scared of the ranty, shit-stirring, right-wing press, and of the small minority of Brits who believe they have a right to go through life protected from anything that challenges them in any way.
This is indeed a very disappointing decision.
Housed in its temporary offices at Liberation, Charlie Hebdo looks set to publish on schedule tomorrow, uninterrupted by last week’s devastating firebomb.
Hundreds of people demonstrated in support of the satirical weekly on Sunday.
The president of SOS Racism was among the supporters, declaring that
In a democracy, the right to blaspheme is absolute.
Editor “Charb” said,
We need a level playing field. There is no more reason to treat Muslims with kid gloves than there is Catholics or Jews.
Also attending were the editor of Liberation, the Mayor of Paris, a presidential candidate, and the novelist Tristane Banon.
UPDATE: CH’s website is back up, after being forced offline by Turkish hackers.