Donald Trump is visiting England, why I don’t know. I hope all my friends in Great Britain are prepared to boo and hiss and shout rude things at him while he’s there. It seems some in the media are prepared to respond appropriately to him.
More than 100 of Donald Trump’s inaccurate statements are to be dissected by Channel 4 to coincide with his state visit, in what it described as “the longest uninterrupted reel of untruths, falsehoods and distortions ever broadcast on television”.
Give him hell. That’s what our media should be doing but isn’t.
Maybe we’ll get it on YouTube or something later?
I have arthroscopic surgery scheduled for the 26th of September. Hooray! I first went to this doctor in July, and he had hopes that I’d heal up with time — I guess the sight of me hobbling in on a cane and grimacing with every step told him that he’s going to have to poke holes in me and stitch something up.
Then he thinks I’ll need about two weeks of recovery time, unless they discover something horrible. I’ll try to be optimistic, but optimism isn’t working out so far.
When we got home from the clinic, there was another chrysalis just outside my door. Maybe that’s a promising sign?
I’m relieved to not have any teaching obligations this term. I’ve been doing weekly homework problems/quizzes using the university standard Canvas tool, and I’ve always been pretty liberal with that: if students want to work together on the problems, that’s all to the good. Communicating and helping each other is useful for learning.
But I’m getting all these emails now about a feature that was added. AI. There’s a box on the screen to invoke Google Lens and Homework Helper, so I could be putting all the effort into composing a problem set, and the students could solve it by pushing a button. The university has been putting in something called Honorlock to disable AI access in problem sets, which seems to be working inconsistently.
I’m not alone in resenting all these shortcuts that are being placed in our teaching.
It’s a sentiment that pervades listservs, Reddit forums and other places where classroom professionals vent their frustrations. “I’m not some sort of sorcerer, I cannot magically force my students to put the effort in,” complains one Reddit user in the r/professor subreddit. “Not when the crack-cocaine of LLMs is just right next to them on the table.” And for the most part, professors are on their own; most institutions have not established blanket policies about AI use, which means that teachers create and enforce their own. Becca Andrews, a writer who teaches journalism at Western Kentucky State University, had “a wake-up call” when she had to fail a student who used an LLM to write a significant amount of a final project. She’s since reworked classes to include more in-person writing and workshopping, and notes that her students — most of whom have jobs — seem grateful to have that time to complete assignments. Andrews also talks to her students about AI’s drawbacks, like its documented impact on critical-thinking faculties: “I tell them that their brains are still cooking, so it’s doubly important to think of their minds as a muscle and work on developing it.”
Last spring’s bleakest read on the landscape was New York Magazine’s article, “Everyone Is Cheating Their Way Through College,” which included a number of deeply unsettling revelations from reporter James D. Walsh — not just about how widespread AI dependence has already become, but about the speed with which it is changing what education means on an empirical level. (One example Walsh cites: a professor who “caught students in her Ethics and Technology class using AI to respond to the prompt ‘Briefly introduce yourself and say what you’re hoping to get out of this class.’”) The piece is bookended with the story of a Columbia student who invented a tool that allowed engineers to cheat on coding interviews, who recorded himself using the tool in interviews with companies, and was subsequently put on academic leave. During that time, he invented another app that makes it easy to cheat on everything. He raised $5.3 million in venture capital.
I’m left wondering, who is asking for these widgets to be installed in our classes? Are there salespeople for software like Canvas who enthusiastically sell these cheating features to university administrators who think more AI slop benefits learning? Why, if I’m trying to teach genetics, do I have to wrestle with garbage shortcuts, imposed on me by the university, that short-circuit learning?
Several years ago, I was happy to embrace these new tools, and found it freeing to be doing exams and homework online — it meant 4 lecture hours in the semester that weren’t dedicated to proctoring students hunched over exams. No more. When I get back into a class in the Spring, I’m going to be resurrecting blue books.
Oh, and since I was wondering who kept shoveling this counterproductive crap into my classes, I’ve got one answer.
It’s not coincidental that the biggest booster of LLMs as a blanket good is a man who, like many a Silicon Valley wunderkind who preceded him, dropped out of college, invented an app and hopped aboard the venture-capital train. As a leading booster of AI, Sam Altman has been particularly vocal in encouraging students to adopt AI tools and prioritize “the meta ability to learn” over sustained study of any one subject. If that sounds like a line of bull, that’s because it is. And it’s galling that the opinion of someone who dropped out of college — because why would you keep learning when there’s money to be made and businesses to found? — is constantly sought out for comment on what tools students should and shouldn’t be using. Altman has brushed off educators’ concerns about the drawbacks of AI use in academia and has even suggested that the definition of cheating needs to evolve.
My lower limbs have been making this sabbatical half-year hellish. First, there was an unexpected meniscus tear in my right knee; that pain is still there, a sharp needle in that one joint. Then my left knee started swelling up and protesting every time I bent it; that knee has been a weak point for a half century, and I think being more reliant on that leg made it protest more. Then I had a gout flare-up in the left foot, and those of you who’ve suffered one of those know how agonizing it can be. And now, this morning, I wake up to my right ankle experiencing a sharp grinding pain, like it’s going on strike in sympathy with the other joints. Basically everything below the hips hurts right now. I’m also going stir crazy, trapped in my office.
Two things are helping me keep my perspective.
First, years ago I was funded by a cancer training grant, which required me to attend weekly classes on cancer. Many of these were great and useful to me, when they were taught by molecular biologists, but every once in a while they’d bring in a cancer surgeon, which was a very different experience. Most memorable was the guy who had a patient with bone cancer in his lower limbs that spread up into his pelvis. To keep him alive, and to generate the most horrific series of slides I’ve ever seen, they cut him in half at the waist, threw out his legs and pelvis, and tied him off with a knot, where the tag end of his colon and a couple of ureters were left dangling, dripping into a plastic bag.
The happy part of the story is that he lived long enough afterwards to escort his daughter down the aisle at her wedding, which is all he wanted. Also, I’m left with some terrible images that tell me it could always be worse.
Second, I’ve always been kind of an anti-foot-fetishist. I’m a hand man. Feet have always seemed like ugly, malformed hands, so I’m most comfortable keeping them tucked away in a pair of shoes. My recent problems mean I’ve been shoeless most of the summer, and when I am getting up to shuffle around, my attention is often focused on my feet. I have started to seriously appreciate my toes. Really, they’ve evolved to spread out the load at the ends of my feet, and I see them doing an important job that has always been obscured by footwear. Then I notice that they also conform to the substrate — we didn’t evolve to walk on flat floors, but on more rugged ground, and there they spread out so that all five toes are in contact. Sometimes, when I’m trying to get from the bedroom to the bathroom, I look down and have to admire the job my toes are doing.
So I’m getting by. I have an appointment with my orthopedist tomorrow, and hope he can fix one or two of my problems.
“Food is love, and as long as I have their food, I’m gonna have them,” says restaurant owner Joe Scaravella of his loyal customer base. Joe, played by Vince Vaughn, is the protagonist of Nonnas, a new film loosely based on real-life New York restaurant Enoteca Maria. The Netflix comedy-drama follows Joe as he establishes a trattoria – a traditional establishment, except that the chefs are all grandmothers.
Joe’s cooks are Italian matriarchs: bossy, loud, loving, opinionated and immensely proud of their cooking. But grandmothers from all cultures are just as passionate about feeding the family, even though modern grannies may not spend as much time in the kitchen. A Jewish bubbe’s chicken soup; a Hindu granny presiding over a Diwali feast; Sunday lunch at nana’s house – food is still the ultimate gift of love.
When we meet Joe in Nonnas he is deep in grief. Having lost his grandmother and his mother, he longs for their nurturing influence, and recalls the hours he spent alongside them in the kitchen, watching their endless preparations, breathing the comforting fumes of tomatoes, oregano, olive oil and – above all – garlic. He wants to open a restaurant in their honour, which will also be a tribute to the Italian family and the recipes passed down through the generations.
The team of elderly women he recruits as cooks are the source of much of the film’s warmth and humour. They squabble, shout and discuss their lives, while chopping, filleting, stewing. Nonnas is a celebration of ageing, as we see these women rejoice in their lives. One of the matriarchs, played by the formidable Susan Sarandon, takes every opportunity to declare her idiosyncratic wisdoms, including that beauty isn’t about looks. “Is it our hair, our faces, our bodies?” she challenges the other women. “No, it’s a feeling.”
The film focuses on the challenges of setting up the trattoria, which at first isn’t welcome in the Staten Island neighbourhood. It stops short of exploring how Scaravella’s real restaurant developed after that. Now a culinary landmark, Enoteca Maria offers both an Italian menu and an international one, cooked by a rotating cast of grandmas from around the world – Poland to Sri Lanka, Turkey to Peru. Because, while different cultures might take food more or less seriously, grandmothers from across the world use mealtime as a sacred ritual.
Before introducing the global menu, Scaravella created Nonnas of the World – a virtual, crowdsourced recipe book to which anybody could upload their grandmother’s story, three photos and a recipe. It is part of a rich canon of books exploring the special gift of grandma’s food. Rachel Cooke’s latest book, Kitchen Person: Notes on Cooking and Eating, is full of comforting memories of her British grandmothers. “Staying with my granny was like being at a spa,” she writes, “except every treatment comprised a meal or (if we were between meals) some other tempting foodstuff. She had a morbid fear that someone might be hungry and would do anything to assuage it, mostly by making sure it had no chance to get going in the first place. Her questions were delightfully wheedling. ‘Would you just like a little biscuit?’ ‘Could you manage a sandwich?’ ‘Are you sure you’ve had enough?’”
This passion for feeding the family does seem to be universal. “‘Have you eaten?’ Indian mothers ask their children and husbands when they return home,” the novelist Anindita Ghose wrote recently in the Guardian. Tea, she explained, is more than a drink, more than a meal. It’s an event. “If we visited two homes the same evening, we had to eat twice, so as to not offend the host … My grandmother’s tea-time specialities were the kucho gojas [deep-fried sweets] and koraishutir kochuri [fried bread stuffed with peas].”
Andrew Fiouzi, writing in MEL magazine, extolled not just the wonders of his Persian grandmother’s cooking, but the world of senses and memories it evoked. “There is no good English translation for the Persian term dastpokht. Literally, it translates as ‘hand cooking’, but its meaning is more akin to ‘style of cooking’ or ‘mastery of cooking’. The term is, by definition, person specific, and it announces that the food created by the individual’s hands is, by extension of their being, unique. It’s also the only possible way to explain why my Persian grandmother’s cooking – her dastpokht – is, for me, singular, because describing it to you is like trying to describe a ghost with any level of certainty. Sure, I could start by referencing some of the rich, uniquely Persian flavours and ingredients, like crushed, roasted walnuts simmering in turmeric-coated chopped onions and reduced in pomegranate molasses … But none of it would really matter, because scientifically speaking, the greatness of her cooking goes so far beyond the simple spectrum of palatability.”
Most of us, like Fiouzi, have a memory of a food that takes us back to childhood. One reason that these memories are so vivid, according to Susan Whitbourne, professor of psychological and brain sciences at the University of Massachusetts, is that they involve all five senses. “You’re not just using your sight, or just your taste, but all the senses and that offers the potential to layer the richness of a food memory,” she says. The situation also plays a part – where you were, who you were with, what the occasion was – adding power to the nostalgia. “So the food becomes almost symbolic of other meaning. It’s not so much the apple pie, for example, but the whole experience of being a family, being nourished.”
In the collection Grand Dishes: Recipes and Stories from Grandmothers of the World, one of the writers, Mina Holland, remembers that whenever she visited her granny, “I always asked for kedgeree: all buttery onions and milky rice and smoky haddock and plenty of hard-boiled eggs from toothless-Fred-down-the-road’s hens. In that oft-cited Proustian way, each time I had it, I’d not only appreciate that portion, but reappreciate portions past ... Her kedgeree, and many other things she cooked, seasoned my childhood with salt, fat and love, and made food into so much more than fuel – it took on an imaginative quality.”
Food can provide a powerful connection to ancestral roots. The novelist Margaret Wilkerson Sexton credited her grandmother with introducing her to her own Creole heritage – especially through the food. “She was known for her cooking, her fried catfish, potato salad and jelly cakes. Her specialities were shrimp étouffée, red beans and rice, gumbo, stuffed mirlitons, jambalaya and pralines,” she recalled in the Observer. Now that she has a daughter of her own it’s the food they share that brings back her beloved grandmother. “She helps me chop the yellow and green onions, roots through the pantry for bay leaves. When we’re done, I watch her eat before I taste the food myself. It feels like there’s something I’m fishing for that I can’t name.”
For musician David Gordon-Shute, his German-Jewish grandmother’s food also offered continuity. “She was not a very good cook,” he remembered, “but her one contribution to Christmas lunch every year was an amazing red cabbage which she cooked in the classic Jewish way. She died a few years ago but I still try and recreate it when we host Christmas at home.”
Grandmothers’ dishes can be passed down, but it’s not always easy. Many women of that generation didn’t write down recipes, and it’s difficult to reproduce food from the taste memory alone. Even when they exist on paper, they aren’t always shared. In Nonnas, Joe asks Roberta, a friend of his grandmother’s played by Lorraine Bracco, how to make her gravy. She snaps, “If I knew exactly it wouldn’t have been your Nonna’s gravy, would it? It’s the secret that makes it special.” And when he asks about her sauce, Roberta sneers: “That’s like asking a woman to show you her mutande! [Italian for underwear].”
But some women are happy to share. Mastanamma, perhaps the most celebrated grandmother cook, had no problem offering her recipes to the world, from her home in rural Andhra Pradesh. Filmed and promoted by her media-savvy nephew, Mastanamma rocketed to fame in 2016 with a YouTube clip of her cooking an aubergine curry. She was 105 years old at the time, and went on to gain an audience of more than a million subscribers before her death two years later. She would cook on the open fire, instead of using a stove, and liked cooking fish because she could source it fresh from a nearby river.
How many of today’s grandmothers can match Mastanamma’s standards? And to be honest, how many would want to? In Britain, many grandmothers are still active and busy. The retirement age is higher, but many of us want to continue working, or even start a new career. We might use our spare time volunteering, running marathons or climbing mountains. So we’re not going to spend all day in the kitchen, cooking traditional delicacies. We’re also less likely than previous generations to live close enough to our family that they can nip around for lunch.
Even so, there is still something special about a grandmother’s food – not so much the quality of her concoctions, but more so the context, the comfortable feeling of being surrounded by familiar smells, textures, details, and all those sensory associations with family and love.
Grandma’s food doesn’t have to be delicious, or even palatable. That’s not the point, according to the food writer Anthony Bourdain’s Grandma Rule: “Like Grandma’s Thanksgiving turkey,” he explains in his memoir Medium Raw. “It may be overcooked and dry – and her stuffing salty and studded with rubbery pellets of giblet you find unpalatable in the extreme. You may not even like turkey at all. But it’s Grandma’s turkey. And you are in Grandma’s house. So shut the fuck up and eat it.” Because you’ll miss it when it’s gone.
This article is from New Humanist's Autumn 2025 issue. Subscribe now.
This video is about to go live on YouTube.
And now I add the script, below the fold!
Hey, friends —
I was looking over the comments I get here on YouTube, and there were a few from a persistently obnoxious creationist going by the name @ThatGuyBuster. He pops up now and then to shout non sequiturs and silly objections to evolution, and while I know nothing is going to get through to him, I figured I might poke back and explain something basic to everyone.
So he responded to one of my videos with this:
Stop lying PZ. There is no mechanism evolution has that builds news systems and you know it. There is not even an hypothesis that attempts to explain the creation of useful De Novo proteins. I wish I could take your class so I could stand up, challenge you in front of your students then expose your lies. All we see is adaptation by loss of information. Pathetic
Creationists love to assert that there is no mechanism for well understood evolutionary processes. It’s an example of projection, because they have no mechanisms at all for creation, other than that an invisible inaudible impalpable god did it while humans weren’t looking. I replied briefly in a comment.
If you were in my class, I wouldn’t have to say a word: my students would chew you up.
Of course there are known mechanisms that can vary proteins and create them de novo.
Yeah, because he brought up the classic Big Daddy scenario — he thought he knew so much more than a college professor who teaches evolutionary biology that he could raise his hand and clobber me with his cluelessness. He couldn’t. I’ve been in this situation before, with smarter critics than @ThatGuyBuster, and no, they’re usually so ignorant of basic biology that they can’t even begin to address a specific question.
My students wouldn’t be in an evolutionary biology class unless they had taken general chemistry, organic chemistry, cell biology, and molecular biology, with at least an introduction to biodiversity, and some of them would be taking ecology concurrently. They know the science. They might be stunned into silence by such a demonstration of stupidity by a person intruding on a class they are not qualified to take, and by the sheer confident ignorance on display.
But that’s a creationist for you — profoundly stupid and totally unaware of it.
He continues:
Ha Ha Ha…..you can’t debate me. You cannot be this clueless. There is not even an hypothesis that even attempts to explain the mechanism that creates useful De Novo proteins.
I have to repeat that: There is not even an hypothesis that even attempts to explain the mechanism that creates useful De Novo proteins. Not even an hypothesis. Jesus. @ThatGuyBuster hasn’t even tried to look for one, and instead just repeats the bogus claims of numerous creationist preachers and the Discovery Institute.
So I fired up PubMed, searched for “mechanisms of de novo protein evolution”, and 2 seconds later it came back with this 2023 review paper, Evolution and implications of de novo genes in humans, by Broeils and others (it’s from the Netherlands; I have no idea if I’m pronouncing his name correctly. Sorry.) It was super easy, barely an inconvenience. I sure wish creationists knew how to read.
This is the abstract.
Genes and translated open reading frames (ORFs) that emerged de novo from previously non-coding sequences provide species with opportunities for adaptation. When aberrantly activated, some human-specific de novo genes and ORFs have disease-promoting properties—for instance, driving tumour growth. Thousands of putative de novo coding sequences have been described in humans, but we still do not know what fraction of those ORFs has readily acquired a function. Here, we discuss the challenges and controversies surrounding the detection, mechanisms of origin, annotation, validation and characterization of de novo genes and ORFs. Through manual curation of literature and databases, we provide a thorough table with most de novo genes reported for humans to date. We re-evaluate each locus by tracing the enabling mutations and list proposed disease associations, protein characteristics and supporting evidence for translation and protein detection. This work will support future explorations of de novo genes and ORFs in humans.
To translate: many de novo genes have been identified, thousands of putative sequences in humans, with one qualification: relatively few meet the criteria of regulated, transcribed, and translated genes, and I’ll get to the specific number shortly. We don’t know what most of them do, and it’s quite likely that most of what we’re seeing is spurious expression that doesn’t do anything, but we do know how they originate. Some of them do modify cellular physiology, and often the changes are associated with disease.
However, they are trivial to generate — the paper estimates over 7000 human ORFs at the time of its publication, but relatively few of them are translated, so they don’t make proteins, and most of them can be expected to decay and disappear over a few generations. Easy come, easy go. @ThatGuyBuster is correct that few of them make useful de novo proteins, but that’s OK, we have a mechanism for that, too. It’s called natural selection.
The most important thing here is that the paper discusses mechanisms of origin, the thing that @ThatGuyBuster says we don’t have. Let’s dive into those mechanisms, which are nicely illustrated in a diagram.
Here are the mechanisms that drive gene birth in humans…and also in lots of other organisms as well. If you’re not a creationist, you’re probably already familiar with these.
First is gene duplication, where a chunk of a chromosome is accidentally duplicated, producing two copies of a particular gene, freeing one of them to gradually mutate away from its original function. The example I use in class is the antifreeze gene in antarctic fish. A pancreatic enzyme was replicated perhaps 30 million years ago, and while one copy retained its original function as a digestive enzyme, the other copy was only partially expressed, producing a short peptide that was secreted into the circulatory system, where it acted as an antifreeze molecule.
Another kind of gene duplication is retroposition, where a gene is copied to a new location in the genome. This can also be a means to introduce evolutionary novelty, and it has the possibility of putting the gene into a novel regulatory environment, changing its pattern of expression. A curious feature of retroposition is that the majority of genes identified as originating by this mechanism are expressed in the testis. The testis is a hotbed of exploratory variation, which I promise I’ll return to.
The third mechanism in the diagram is gene fusion. This is a process that adds new capabilities to a gene. The example I use in class is the receptor tyrosine kinase, RTK — it’s a protein that couples a tyrosine kinase, an enzyme that phosphorylates proteins on tyrosine residues, with a cell surface receptor, so it can act as a key component in a signaling cascade. RTKs evolved about a billion years ago in the ancestor of all animals, and are not found in plants and bacteria, so they were an important evolutionary step.
The final mechanism is de novo birth. This one is easy to do — most of the human genome is junk, a great big scrapyard of nearly random sequences and pseudogenes. All it takes is, on the “transcription first” side of the diagram, the spurious transcription of DNA sequences to produce a pool of RNAs that might have useful activity. Or, on the “ORF first” side, a mutation that produces a start codon in the junk (all that takes is a methionine triplet, ATG) with an associated downstream stop codon. This is an elementary chance event that can happen spontaneously. As you might expect, most of these ORFs are useless, so they are not positively selected, and most of them will eventually be lost. If any of them have an advantageous effect, they may be retained.
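To see just how mechanically simple the “ORF first” route is, here’s a toy Python sketch. The DNA string and the function name are made up for illustration; real de novo gene detection involves far more (expression evidence, conservation, translation data), but the core event really is this trivial: a start codon with an in-frame stop somewhere downstream.

```python
# Toy ORF scanner: find ATG...stop open reading frames on the forward
# strand of a DNA string. Purely illustrative, not a bioinformatics tool.

STOPS = {"TAA", "TAG", "TGA"}

def find_orfs(seq, min_codons=2):
    """Return (start, end) indices of ORFs with at least min_codons
    codons (including the ATG) before the stop codon."""
    orfs = []
    for i in range(len(seq) - 2):
        if seq[i:i+3] != "ATG":
            continue
        # walk downstream in-frame, codon by codon, until a stop appears
        for j in range(i + 3, len(seq) - 2, 3):
            if seq[j:j+3] in STOPS:
                if (j - i) // 3 >= min_codons:
                    orfs.append((i, j + 3))
                break
    return orfs

junk = "GGCATGAAATTTTGACCCATGCCC"   # hypothetical stretch of "junk" DNA
print(find_orfs(junk))               # -> [(3, 15)]: one chance ORF
```

One point mutation creating an ATG, or one destroying a stop codon, changes the output — which is why these ORFs come and go so easily over generations.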
The paper mentions that there are over 7000 of these ORFs known so far, but only a handful that have demonstrated function. A number of them have a deleterious function, and that’s where the testis comes in.
During spermatogenesis, the organism is trying to pump out a large number of copies of this special cell type, and cells are dividing rapidly and also being processed in a long and complex process. One aspect of sperm maturation is that the cells’ chromatin is opened up — it’s like you’ve got a car assembly line, and you leave the hoods open wide for a prolonged time because so many mechanics have to get in there and add their special tweaks. What this means is that sperm cells are even more prone to spurious expression, and some of the proteins expressed may be beneficial to the sperm, if they prevent apoptosis and promote cell survival. At this stage of development, all the organism cares about is generating more gametes. The testis may be expressing more ORFs as a kind of “hail Mary” process to amplify gamete production, or just as a chance product of less regulated gene expression.
The downside, though, is that changes that increase sperm production and survival may not be so beneficial in the adult organism, where stability and uniformity are more important. Patterns of gene expression that inhibit apoptosis or promote cell division can enhance cancer cell survival, too.
So we do have hypotheses that explain the creation of useful De Novo proteins — even better, we have observations, measurements, and experiments that demonstrate where they come from and how they work. @ThatGuyBuster strikes out.
He had to throw out one last comment, demonstrating his delusional vision of how his appearance in an evolutionary biology class would go, and this one is even more ridiculous than his prior fantasies. It’s also derivative and routinely debunked.
I would stand up and say simply that a 100 amino acid protein has 20^100 possible configurations and one works. There is no mechanism that can randomly get the correct length and configuration. And you would stammer and melt down….Creation Myths can’t do it either. I say you apologize to your classes for telling them lies
Oh man, that is so familiar. I think I first heard a version in the 1980s (I feel so old).
Look, there is no one configuration for a protein, nor is there a “correct” length. Would you tell a class full of writing students that the correct length of a paragraph is 100 words, and there is only one correct subject and verb for each sentence? Therefore writing new books is impossible? Nonsense.
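The arithmetic fails on its own terms, too. The 20^100 argument assumes exactly one working sequence; relax that single assumption and the conclusion collapses. Here’s a back-of-the-envelope calculation in Python (the “functional fraction” below is a number I made up purely for illustration, not a measurement):

```python
# The creationist math assumes exactly ONE functional sequence out of
# 20**100. Suppose instead that only a vanishingly tiny fraction of
# sequences is functional -- say 1 in 10**50 (hypothetical, chosen
# generously small for illustration).

total = 20 ** 100                 # all possible 100-residue proteins
functional = total // 10 ** 50    # count of "working" sequences under that fraction

print(len(str(functional)))       # -> 81: still an 81-digit number of them
```

Big numbers divided by big numbers can still be astronomically big, and evolution isn’t sampling that space at random anyway — selection retains every partial improvement along the way.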
I’ll just refer you to Ian Musgrave’s article on Lies, Damned Lies, Statistics, and Probability of Abiogenesis Calculations, in which he dissected the fallacies in creationist statistical reasoning.
That was published in 1998. I wouldn’t melt down if a creationist tried that argument on me…I’d just laugh. I’m confident that Creation Myths would be unfazed by that tired old stupidity, too.
Anyway, that’s it for @ThatGuyBuster. I haven’t blocked him on YouTube, since he’s so predictably silly and wrong, he’ll probably be back with more laughable excuses. I might appreciate him more if he signed up for my Patreon — patreon.com/pzmyers. But only cool people sign up for that, like these named supporters.
Subscribers to my patreon get occasional spider photos and accounts of the shenanigans in my lab, which are unfortunately rare lately while I’m focused on convalescence. I’m getting better, though, as you can see from this short adventure when I wandered into my yard during a midwestern rainstorm. I can walk, sort of! We’ve got monarchs growing all over the place!
Luuk A. Broeils, Jorge Ruiz-Orera, Berend Snel, Norbert Hubner & Sebastiaan van Heesch (2023). Evolution and implications of de novo genes in humans. Nature Ecology & Evolution 7, 804–815.
On 20 January 2025 – the day of Donald Trump’s second presidential inauguration – US Border Patrol agent David Maland pulled over a car in Vermont, near the Canadian border, in what should have been a routine traffic stop. In the car were Teresa Youngblut and Ophelia Bauckholt, a high-flying quant trader. The pair had just visited a shooting range and were carrying tactical gear. Within minutes, Bauckholt and Maland were dead, and a wounded Youngblut had been detained by police.
To most of the world, reports of the incident seemed bizarre. Violence on the US-Canadian border is rare, as are shootouts involving highly educated financial professionals. But to those who had followed a particular subculture within the “rationalist” movement – now known to the world as “Zizians” – Maland and Bauckholt were just the latest casualties of an escalating spiral of violence.
Rationalism is a philosophical movement with a rich history dating back to Ancient Greece. It came to the fore during the Enlightenment, giving us our modern commitment to reason and logic as primary sources of knowledge. In recent years, a relatively niche but influential online community has also claimed the name “rationalist”. It also emphasises the primacy of logic, and is associated with effective altruism (a movement concerned with maximising the benefit from charitable giving). But an offshoot of the community has twisted this philosophy, taking it to extremes, guided by the writings of Jack LaSota, known by her moniker “Ziz”.
The Zizian grouping has never numbered more than a dozen individuals. Years before the killings began, there were reports that one follower had been driven to suicide by the group’s extreme ideas and practices, including sleep deprivation. After this, two members (including Ziz herself) faked their own deaths, and the group’s closest followers have experimented with alternative living, including on a flotilla of boats and then on trailers.
A series of violent incidents went on to claim the lives of several people in the Zizian orbit, though most cases are ongoing. In 2022, 80-year-old Curtis Lind – the landlord to Ziz and several of the group’s members – was allegedly ambushed and attacked, sustaining more than 50 stab wounds and a blow to the head, as well as being completely run through with a samurai sword. Astonishingly, he survived, shooting and killing one of his attackers – only to be murdered on 17 January 2025, three days before the Canadian border standoff, and shortly before he was due to testify about the previous attempt to kill him. Elsewhere, early in 2023, the bodies of Richard and Rita Zajko, the parents of another Zizian, were discovered by police in their home, both with gunshot wounds. Their daughter was named as a person of interest in their killings.
At the time of writing, one small spinoff of the online “rationalist” community has been connected to at least seven deaths. How does a movement grounded in logic and altruism misfire so badly? The question may have significance beyond the Zizians: while the motives of Luigi Mangione, the alleged murderer of UnitedHealthcare CEO Brian Thompson, remain unclear, he too was affiliated with the online rationalist movement.
The gateway into the online rationalist world is often effective altruism, which is grounded in the genuinely reasonable idea that those who donate to charity should get the most bang for their buck. While measuring the effectiveness of charitable giving is complex, the aim is uncontroversial, firmly grounded in the logic of utilitarianism. But the rationalist community took it further, looking into the future: if we are interested in maximising the potential good we can do in the world, why would we only focus on people alive today? In the future, there could be trillions of people – meaning, from a purely numerical perspective, that making sure that those people can be born (and have good lives) outweighs any good we can do for the current population.
This approach might appear to be rational, given that it follows several logical steps. But these concepts are subject to overreach. The focus of the movement shifted to existential concerns around humanity’s survival, such as multi-planet living (so humanity could survive the end of Earth), and artificial intelligence – both to ensure it doesn’t wipe out humanity once it emerges, and to make sure it does emerge, because of a belief in its massive potential to fix our societal issues.
Perhaps unsurprisingly, many of those in the movement work in or around big tech. Ziz had been offered a job by Google (which she never took up), and became fixated on the emergence of superintelligent AIs as she read herself into the rationalist movement. That included a piece of rationalist Harry Potter fan fiction, Harry Potter and the Methods of Rationality, written by the controversial AI theorist Eliezer Yudkowsky, which deconstructs the contradictions of the wizarding world. Popular blogs like LessWrong and Astral Codex Ten imagined what benevolent but super-powerful AIs might be able to achieve – from solving the problems of modern medicine to making it possible for humans to upload their consciousness.
Ziz seems to have won her supporters through the strength of her blog’s writing and the esoteric arguments it made – and then, it seems, through sheer strength of personality once they got to know each other in the real world. Much of her writing focuses on the idea of “timeless decisions”, which bears some resemblance to Kant’s categorical imperative. While Timeless Decision Theory differs from the Kantian perspective, both emphasise the importance of universalisability, asking whether the outcome would be positive if everyone were to make the same choice, regardless of context or individual circumstance.
For most in the rationalist movement, these ideas were merely thought experiments, useful for challenging assumptions and honing their logical skills. Some took a darker turn, for example when people started to consider what an AI might do to maximise the overall “good” for humanity. Perhaps it might punish people who tried to oppose its efforts, or even those who didn’t do enough to help create it sooner. But when Ziz and her associates were still welcome at mainstream rationalist gatherings, they started gaining a reputation for taking these ideas literally, and applying them in their own lives.
Many of those involved in the movement believe that killing animals for food is logically indefensible. But Ziz has made statements to suggest that she sees eating meat as not just immoral, but akin to murder. She is a militant vegan, as are most of her followers. Taking this further, she believes that any super-intelligent AI we might develop would agree, and would punish those who ate meat.
Marginalisation, isolation and oppression can make people more vulnerable to the appeal of cults, and this may have affected the Zizians in various ways. For example, most members of the group are trans, and felt somewhat excluded from the rationalist mainstream because of that (though the community has accepted several trans influencers). Beliefs like militant veganism, embodied in practice, appear to have further isolated the group from the mainstream movement, and society in general, setting up an outsider dynamic.
Io Dodds, a journalist who considers herself adjacent to the rationalist movement, and who is trans herself, says these group dynamics are important considerations when looking at the violent radicalisation of Ziz and those around her. Dodds described her own intense experience of discovering the community’s ideas. “I remember the feeling of like delving and digging, and the kind of vertigo of it, the compulsion,” she said. “It’s a very intense, emotional process, like being called forward and forward and deeper and deeper and deeper into a set of bizarre ideas, like understanding was just around the corner … It felt so important. You know, it felt like the world was at stake.”
For Dodds, that need to dedicate her life to the cause ebbed as her gender transition progressed and she was able to ground herself in other ways. But the Zizians’ isolation would have added to the intensity – not least because it left them financially vulnerable. Unable to pay for housing around the Bay Area, for example, they ended up living in close quarters together in houseboats and then in trailer trucks.
But how far was it the ideas themselves, rather than the particular circumstances of the group, that produced the intense and eventually deadly culture of the Zizians? The rationalist community actively encourages people to break taboos, push thought experiments to their extremes, and more generally challenge their heuristics (social norms, or basic rules for living). This is fine as an intellectual exercise – it is an essential part of obtaining a degree in philosophy, for example – but can be damaging if it cuts through into everyday life. Dodds speaks of the “little alarm” most of us have if, after following apparently logical steps, we arrive at a conclusion that is morally unacceptable – such as that someone no longer qualifies as a person because of their moral deficiencies – which would stop us pursuing that chain of thought. It seems that some Zizians may have learned to silence that alarm – meaning that when life circumstances pushed them into situations where violence seemed like the right answer, many of their internal moral safeguards failed.
We might dismiss the Zizians as a bizarre phenomenon involving a handful of people who largely knew one another and lived in close proximity, and so deem them irrelevant to the wider rationalist movement. But the influence of this online community on others is apparent, too – not least Luigi Mangione, who is awaiting trial for what might be the most high-profile US assassination of the 21st century to date.
Mangione’s online activity shows he shared a fascination with artificial intelligence and what it meant for decision theory. He followed accounts connected to the community, and was a contributor to another subgroup of online rationalists known as TPOT, or “This Part Of Twitter”. He does not fit in with conventional US political tribes – on his 27th birthday, which he celebrated in prison, he released a list of 27 things for which he is grateful, which included both “liberals” and “conservatives”.
Like many within the community, Mangione is a gifted mathematician who worked in tech. He posted about “the singularity” – a hypothetical breakthrough in which an AI will be capable of improving itself repeatedly and iteratively, so as to reach super intelligence at a stroke – and its consequences on society. A note found in his possession when he was arrested set out logical reasonings for the apparent crime, including his hopes that it would bring public attention to the greed of the health insurance industry and influence the decision making of executives and investors alike. Mangione has pleaded not guilty, so his full actions and motives may never truly be revealed, but the killing can be seen as the result of moral reasoning, taken too far.
Stepping back, what does this mean for this small but apparently influential rationalist subculture? And how might the broader movement respond – a movement that believes in secular reason and its ability to improve the world? These are isolated incidents, and there is certainly no equivalence between these criminal acts and religious violence across the globe. Some may even look at it as little more than individual cases of radicalisation, which is possible in any group.
But there are aspects of this distorted version of rationalism that seem to be dangerous in themselves. There is the idea of “decoupling”, of judging moral claims in the abstract, removed from their social context. Followers are encouraged to push against cultural taboos, ignoring their own gut reactions. Elaborate thought experiments are applied to debates on race and IQ, eugenics, disability and other sensitive topics in ways that seem distasteful or outright abhorrent to those outside the movement.
The community as a whole cannot entirely absolve itself from the consequences of encouraging people to overcome natural heuristics against certain patterns of thought – or against violence and criminality. This way of thinking – these extreme thought experiments – might be fine for the vast majority of people, but that still leaves a minority who could go on to cause harm because of it.
If this way of thinking is dangerous, we might ask how it emerged. The idea of committing “rational” murder is hardly a new one. Numerous perpetrators of assassinations have sought to justify their crime through the idea of weighing up the life of one individual, the victim, against the many lives predicted to be saved as a consequence of that person’s death. Dodds recalls the case of the German nationalist Friedrich Staps, an 18-year-old who tried to assassinate Napoleon to “render the highest service to my country and to Europe”. Napoleon, admiring Staps’s bravery and patriotism, if not his goals, tried to grant him a pardon. (Staps refused, saying that if he went free, he would only give it another go.) We might see the same kind of admiration in the public’s reaction to Mangione, with some regarding him as a folk hero, who has sacrificed his freedom for a noble cause.
But there is another, more novel idea, which is worth extra attention – and that’s superintelligent AI. Many of the movement’s thought experiments are less interesting than they first appear: utilitarian thinkers long ago considered the matter of countless future lives (and thus utility) and addressed it in their mathematical models. These ideas can be found in mainstream economics, and don’t justify the movement’s fixation on AI. It is as if their worldview is working backwards from its intended conclusion, which is actually more akin to supernatural beliefs.
There is evidence to suggest that artificial intelligence is advancing exponentially, but not enough to justify the beliefs common to the online rationalist movement. Many of its followers believe that we will someday be able to upload ourselves into the cloud – essentially creating digital heavens and hells (where the AI may one day reward or punish us). We can transcend our bodies and extend our lives – promises that previously belonged firmly in the realm of religion. There is no hard data to suggest that this “singularity” moment will occur. It is a speculative belief, essentially in a digital god that we are building ourselves.
Such beliefs are worryingly widespread in the tech sector, and seem to have found an expression in the rationalist movement. For someone absorbed in the movement, such beliefs would have a seismic impact on their thinking, morality and priorities. It’s doubtful whether this community should be calling themselves rationalists, let alone claiming to represent the modern iteration of this centuries-long philosophical movement. They might be better described as belonging to a technological cult – one that, through its links to Silicon Valley, has an outsized influence on our world.
With many of the Zizians, including LaSota, facing charges, the group may be losing steam. But perhaps the biggest surprise is that these extreme views haven’t caused more social upheaval – or at least, they haven’t yet.
This article is from New Humanist's Autumn 2025 issue. Subscribe now.
Drone, c. 4000 BCE: male bee
It’s ironic that what has become the most modern form of warfare owes its name to some of the earliest pieces of writing in English. At that time, as now, it was the word for a male bee. Given that similar words exist in many other languages for the insect and the noise it makes, the suggestion is that the noun, and its verb “to drone”, has been around for a very long time, as far back as the Indo-European language spoken some 6000 years ago.
It seems to be very susceptible to metaphor and simile. In the Anglo-Saxon Chronicle (1125), the author compared the actions of an acquaintance to the voracious behaviour of the drone, eating everything that it pulls into the hive. By 1400, in the midst of the Hundred Years’ War, the drone had become a metaphor for someone who grows fat from the work of others. The first poet laureate John Skelton used the word to make disparaging comments about the Scots: “The rude rank Scottes, lyke dronken dranes” (c.1529). This theme runs through the centuries, even into official documents. The Registration for Employment Order, published after the Second World War, compiled a list of citizens the government wished to get back into work, describing them poetically as “spivs, drones, eels and butterflies”.
The first recorded mention of “drone” to describe some kind of remote-controlled aircraft is in the US in 1936. Since then, a wide variety of pilotless flying devices have been devised. They can be used for creative and recreational purposes. But they are also increasingly being deployed for reconnaissance and lethal attack – at no risk to the operators.
When we recall Wernher von Braun’s long-range guided missiles, used in the Second World War, this form of warfare from afar is nothing new. But the terrifying destructive precision of the modern drone stands in stark contrast to the benevolent honeybee. Or perhaps it is not so different – we might think of the bee hovering over his flower of choice, before diving in and achieving his goal.
Mark Hilborne leads the space security group at King’s College London. He is co-editor of the book “War 4.0”.
Why is Russia-Ukraine being called a “space war”?
Space has a number of military functions that go back to the Cold War. In the Gulf War, [the use of space] became a bit more tactical. But space is now supporting virtually everything the Ukrainians are doing. Satellites allow the high command to communicate, but they also allow Ukraine to pilot its UAVs (unmanned aerial vehicles, or drones) and its USVs (uncrewed surface vehicles), which have been responsible for all those very dramatic attacks, both on Russia’s bombers and on their Black Sea Fleet. Despite Ukraine not actually owning any infrastructure in space, it can do all this using Starlink [the satellite constellation operated by Elon Musk’s company SpaceX].
Connected to that, there’s imagery. The Ukrainians are using very clever platforms – they can take video streams from drones, satellite imagery, and open source data like from your mobile phone, and they can compile all this information, and redraw the tactical map in about 30 seconds. Find where Russian targets are, and then find the closest firing solution. These capabilities allow Ukraine to match the Russians. With a much smaller military force, they’re able to level the playing field.
It’s worth noting that almost the first strike in the war was Russia conducting a cyber-attack on all the infrastructure that supported Ukraine’s older satellite communications. Ukraine was no longer able to communicate. That was on day one. So space is of primary importance in this particular conflict.
What about the involvement of private companies? Does Elon Musk’s SpaceX have too much influence?
The commercial sector in space has brought a lot of innovation and cost reductions extremely quickly, in a way that state projects would never have managed to do. At the moment, Musk has such a big chunk of space, and he’s a highly volatile individual. But that will change, as more companies get involved. So the situation will become more balanced.
Of course, commercial companies have been involved in warfare for a long time. Primarily defence contractors like Lockheed Martin or, say, private companies building satellites for the US government. But now we have commercial companies working in a commercial cycle. There is information flowing from US intelligence agencies to Ukraine, but there’s also all the services that we just discussed, coming to the Ukrainians commercially without any state intervention. Some of it is goodwill, provided for free, but some of it isn’t. Ukraine’s Starlink bill, for example, is being covered by the Polish government.
What are counter-space weapons?
Firstly, there’s kinetic weapons, which explode or hit things. There’s direct ascent anti-satellite missiles, which launch from Earth’s surface. And a number of countries have tested those. We know Russia has them. But it creates a lot of debris, flying around at 17,000 miles an hour – it could take out your own satellites as well. And while those systems are good for taking out a single satellite, when we’re talking about Starlink, that’s thousands of them. There are also co-orbital weapons, where one satellite sneaks up on another and bashes into it.
Then there’s rendezvous and proximity operations (RPO), where a satellite might hover for a long time a bit too close to another one, and it’s difficult to know what they’re up to. Listening? Just trying to irritate you? There was a case in 2020, where Russia had a satellite sitting very close to a US spy satellite. Then it spat out a second, smaller one, like James Bond, and that spat out something that looked like a projectile. But that’s the trouble with space. It’s very hard to see with accuracy.
Weapons can also be used to just knock out data streams. So we have things like “dazzling”, where you blind the sensors of a satellite. You have jamming of GPS signals, or manipulation of data. Russia did this to ships in the Black Sea in 2017, so the GPS systems started telling all the ships slightly wrong information, sending them off course.
What are the implications of jamming?
Many, many functions in modern life rely on GPS systems – like the bank and the stock market. Even if you want your pizza delivered, or you want to drive somewhere. That’s been disrupted locally many, many times in the Russia–Ukraine conflict. What happens if these systems are disrupted in a significant way? What’s the backup? The economic impact if all GPS is jammed is calculated to be at least a billion [pounds] per day in the UK alone.
It seems that nations are testing space warfare capabilities, but are wary of deploying them?
All the big players have these latent counter-space weapons capacities, although the US made a statement in 2022 that they will cease testing direct ascent anti-satellite missiles. So that’s an attempt to stabilise the situation. Increasingly, though, as tensions seem to rise, we find most nations starting to talk about space as a domain of warfare.
Then there’s President Trump’s talk of a Golden Dome, and there are real risks to that. Even though that would be designed to be used against missiles, it would introduce interceptors in space [since intercontinental ballistic missiles travel through space]. And if you can hit a missile, you can hit a satellite. There’s very little way of distinguishing whether a space weapon is defensive or offensive. Critics are saying the Golden Dome could take a decade, maybe two decades, and cost $1 trillion or maybe even $2 trillion – so it won’t happen in Trump’s time, if it does happen at all. But there will be movement in that direction – even if it doesn’t happen at the scale imagined.
How can we discourage space warfare?
The Outer Space Treaty states that space must be maintained for peaceful purposes, but lawyers differ in their interpretation of what that actually means. A more binding arms control treaty in space has been a goal since the 1980s, but these discussions just go round and round. For example, how do we define what is a weapon in space?
More hopefully, the UN has adopted a UK proposal that is the basis for the new Responsible Behaviours in Space initiative – to try to agree on norms of behaviour. For example, it’s considered irresponsible if a satellite is closer than a certain distance to another. This could become the basis of soft law, which hopefully then becomes the basis of binding law. But there’s a lack of trust between the major players and space is a very sensitive environment. We’ve talked about Russia, but probably the biggest competitor is China, which has a very opaque way of doing business. They have a broader policy of what they call “military civil fusion”, where you can never really distinguish between a commercial or civilian or military system. The US is particularly worried about China.
Would you say that the lack of public awareness around warfare in space is a democratic issue?
The public doesn’t understand how much we rely on space as a domain of warfare. But it’s not just the general public. The UK government only really talked about space as a strategic area of warfare in the Integrated Review [a comprehensive articulation of the UK’s national security and international policy] in 2020. In the United States it’s better understood. When we talk about our domains of war, the three traditional ones are air, land and sea. Then we have cyber. Space, like cyber, is kind of “out of sight, out of mind”. But the UK is an interesting country because of this strong diplomatic initiative in the UN. There’s also a lot of entrepreneurial thinking about space. But I would agree it’s a democratic issue. We need more awareness.
Donald Trump is suing his niece, Mary, for $100 million for helping the New York Times with its investigation into his finances. “Good luck with that,” she said to enthusiastic applause when I interviewed her at the Hay Festival in May. A few months into the US president’s chaotic and disruptive second term, I found her calm remarkable.
Mary Trump, daughter of the president’s late elder brother Fred, has a PhD in clinical psychology. In her latest memoir about growing up in the Trump dynasty she writes scathingly about her uncle who, she claims, was always cruel and selfish. Reflecting on his return to power she told me: “One of the reasons I took 2016 personally is because it felt like millions had voted to turn America into my family – which is a terrible idea.”
Now entirely estranged from the Trump family and its patriarchal dysfunction, Mary’s bravery in publicly mocking the most powerful man on the planet has seen her become a figurehead. But not of any recognisable movement. Rather, her opposition is a literal example of that old feminist mantra: “the personal is political.”
The phrase was coined as a title for activist Carol Hanisch’s 1969 essay, assessing the emerging challenges for the Second Wave women’s liberation movement. Key was rejecting the normalisation of male supremacy: “Women are messed over, not messed up! We need to change the objective conditions, not adjust to them,” she wrote. How to change the objective conditions remains the challenge.
Two other women at the Hay Festival this year had their own approaches. Laura Bates is best known for founding the Everyday Sexism Project in 2012, which records crowdsourced experiences of daily sexism. She has also published carefully researched books on misogyny and the horrifying potential of cyber abuse through AI – “the new age of sexism”, as she calls it.
However, Bates also seeks to inspire joy in a generation of young girls with her series of young adult novels about a community of Arthurian female knights. Even these fantasy novels are fact-based (the knights, not King Arthur). Bates told me of historic records describing “a group of unruly and rowdy women, who turned up at jousting tournaments on horseback, without the right equipment and refused to go away unless they were allowed to join in”.
If Mary Trump, born in 1965, and Laura Bates, born in 1986, represented Gen X and millennial thinking around gender, then British actress Alison Steadman – born 1946 – was the festival’s feminist baby boomer. She is loved for her long career from Pride and Prejudice to Gavin and Stacey, and her often darkly comic performances (she was the original Beverley in Mike Leigh’s Abigail’s Party). Steadman was there to talk about her new memoir – a book that contains plenty of joy but also, unexpectedly, a powerful frankness about her encounters with predatory men.
Steadman had never spoken about most of the near escapes before. The first was as a teenager, when a schoolteacher offered her a lift after drama club, only to try to force himself on her, laughing as he warned that no one would believe her word over his – that he could kill her and dump her body and nobody would know. She managed to stop him, but her anxiety about why he’d targeted her was reinforced when she found herself trapped with such a man again, years later. I recognised that instinct of anger mixed with guilt. The dread that grew as our female social training to be polite turned to fear.
By choosing to open up about those traumatic memories from the past, Steadman was doing a service to the present. She reminded us how, though abusers still prey on the young this way, the “objective conditions” have changed. A girl would be much more likely to be believed today, and to have help and support to challenge such a man. The battle is ongoing, but as these three women show, there is power in knowing that we are not alone.
I am going to perform a new show at the 2025 Edinburgh Fringe Festival called ‘The World’s Most Boring Card Trick.’
The show has its roots in the notion that creativity often comes from doing the opposite to everyone else. Almost every Festival performer strives to create an interesting show, and so I have put together a genuinely dull offering. Because of this, I am urging people not to attend and telling anyone who does turn up that they can leave at any point. Everyone who displays extraordinary willpower and stays until the end receives a rare certificate of completion. Do you have what it takes to endure the show and earn your certificate?
I have identified the seven steps that are central to almost every card trick and devised the most boring version of each stage. Along the way, we explore the psychology of boredom, examine why it is such a powerful emotion, and discover how our need for constant stimulation is destroying the world.
It will seem like the longest show on the Fringe, and success will be an empty auditorium. There will only be one performance of the show (20th August, 15.25 in the Voodoo Rooms). If you want to put your willpower to the test and experience a highly unusual show, please come along!
As ever with my Fringe shows, it’s part of PBH’s wonderful Free Fringe, and so seats are allocated on a ‘first come, first served’ basis. I look forward to seeing you there and us facing boredom together.
Around 2010, psychologists started to think about how some of their findings might be the result of several problems with their methods, such as not publishing experiments with chance results, failing to report all of their data, carrying out multiple analyses, and so on.
To help minimise these problems, in 2013 it was proposed that researchers submit their plans for an experiment to a journal before they collected any data. This became known as a Registered Report and it’s a good idea.
At the time, most psychologists didn’t realise that parapsychology was ahead of the curve. In the mid 1970s, two parapsychologists (Martin Johnson and Sybo Schouten from the University of Utrecht) were concerned about the same issues, and ran the same type of scheme for over 15 years in the European Journal of Parapsychology!
I recently teamed up with Professors Caroline Watt and Diana Kornbrot to examine the impact of this early scheme. We discovered that around 28% of unregistered papers in the European Journal of Parapsychology reported positive effects compared to just 8% of registered papers – quite a difference! You can read the full paper here.
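The paper itself contains the full analysis, but the gap between those two percentages is easy to sanity-check. The sketch below runs a simple two-proportion z-test on purely illustrative numbers – the article only reports the percentages (28% vs 8%), so the sample sizes of 100 papers per group are assumptions for the sake of the example, not figures from the study.

```python
import math

# Hypothetical counts: the article reports only the percentages
# (28% of unregistered vs 8% of registered papers with positive
# effects); sample sizes of 100 per group are assumed here.
unreg_pos, unreg_total = 28, 100
reg_pos, reg_total = 8, 100

p1 = unreg_pos / unreg_total  # unregistered positive rate
p2 = reg_pos / reg_total      # registered positive rate

# Pooled two-proportion z-test: is the difference bigger than
# we'd expect by chance if both groups shared one true rate?
p_pool = (unreg_pos + reg_pos) / (unreg_total + reg_total)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / unreg_total + 1 / reg_total))
z = (p1 - p2) / se

print(f"difference = {p1 - p2:.2f}, z = {z:.2f}")
```

With counts anywhere near these assumed sizes, the z-statistic comfortably clears the conventional 1.96 threshold – consistent with the paper's point that registration made a real difference to reported outcomes.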
On Tuesday I spoke about this work at a University of Hertfordshire conference on Open Research (thanks to Han Newman for the photo). Congratulations to my colleagues (Mike Page, George Georgiou, and Shazia Akhtar) for organising such a great event.
As part of the talk, I wanted to show a photograph of Martin Johnson, but struggled to find one. After several emails to parapsychologists across the world, my colleague Eberhard Bauer (University of Freiburg) found a great photograph of Martin from a meeting of the Parapsychological Association in 1968!
It was nice to finally give Martin the credit he deserves and to put a face to the name. For those of you who are into parapsychology, please let me know if you can identify any of the other faces in the photo!
Oh, and we have also recently written an article about how parapsychology was ahead of mainstream psychology in other areas (including eyewitness testimony) in The Psychologist here.
I am a big fan of magic history and a few years ago I started to investigate the life and work of a remarkable Scottish conjurer called Harry Marvello.
Harry enjoyed an amazing career during the early nineteen hundreds. He staged pioneering seaside shows in Edinburgh’s Portobello and even built a theatre on the promenade (the building survives and now houses a great amusement arcade). Harry then toured Britain with an act called The Silver Hat, which involved him producing a seemingly endless stream of objects from an empty top hat.
The act relied on a novel principle that has since been used by lots of famous magicians. I recently arranged for one of Harry’s old Silver Hat posters to be restored and it looks stunning.
The Porty Light Box is a wonderful art space based in Portobello. Housed in a classic British telephone box, it hosts exhibitions and even lights up at night to illuminate the images. Last week I gave a talk on Harry at Portobello Library and The Porty Light Box staged a special exhibition based around the Silver Hat poster.
Here are some images of the poster in the Porty Light Box. Enjoy! The project was a fun way of getting magic history out there and hopefully it will encourage others to create similar events in their own local communities.
My thanks to Peter Lane for the lovely Marvello image and poster, Stephen Wheatley from Porty Light Box for designing and creating the installation, Portobello Library for inviting me to speak and Mark Noble for being a great custodian of Marvello’s old theatre and hotel (he has done wonderful work restoring an historic part of The Tower).
I have recently carried out some detective work into one of my favourite paranormal studies.
It all began with an article that I co-wrote for The Psychologist about how research into the paranormal is sometimes ahead of psychology. In the article, we describe a groundbreaking study into eyewitness testimony that was conducted in the late 1880s by a paranormal researcher and magician named S. J. Davey (1887).
This work involved inviting people to fake séances and then asking them to describe their experience. Davey showed that these accounts were often riddled with errors and so couldn’t be trusted. Modern-day researchers still cite this pioneering work (e.g., Tompkins, 2019) and it was the springboard for my own studies in the area (Wiseman et al., 1999, 2003).
Three years after conducting his study, Davey died from typhoid fever aged just 27. Despite the pioneering and influential nature of Davey’s work, surprisingly little is known about his tragically short life or appearance. I thought that was a shame and so decided to find out more about Davey.
I started by searching several academic and magic databases but discovered nothing. However, the censuses from 1871 and 1881 proved more informative. His full name was Samuel John Davey, he was born in Bayswater in 1864, and his father was called Samuel Davey. His father published two books, one of which is a huge reference text for autograph collectors that runs to over 400 pages.
I managed to get hold of a copy and discovered that it contained an In Memoriam account of Samuel John Davey’s personality, interests, and life. Perhaps most important of all, it also had a wonderful woodcut of the man himself along with his signature.
I also discovered that Davey was buried at St John the Evangelist in Shirley. I contacted the church, and they kindly sent me a photograph of his grave.
Researchers always stand on the shoulders of previous generations and I think it’s important that we celebrate those who conducted this work. Over one hundred years ago, Davey carried out a pioneering study that still inspires modern-day psychologists. Unfortunately, he had become lost to time. Now, we know more information about him and can finally put a face to the name.
I have written up the entire episode, with lots more information, in the latest volume of the Journal of the Society for Psychical Research. If anyone has more details about Davey then please feel free to contact me!
My thanks to Caroline Watt, David Britland, Wendy Wall and Anne Goulden for their assistance.
References
Davey, S. J. (1887). The possibilities of malobservation, &c., from a practical point of view. Journal of the Society for Psychical Research, 36(3), 8-44.
Tompkins, M. L. (2019). The spectacle of illusion: Magic, the paranormal & the complicity of the mind. Thames & Hudson.
Wiseman, R., Jeffreys, C., Smith, M. & Nyman, A. (1999). The psychology of the seance, from experiment to drama. Skeptical Inquirer, 23(2), 30-32.
Wiseman, R., Greening, E., & Smith, M. (2003). Belief in the paranormal and suggestion in the seance room. British Journal of Psychology, 94(3), 285-297.
Delighted to say that tonight at 8pm I will be presenting a 60-minute BBC Radio 4 programme on mind magic, focusing on the amazing David Berglas. After being broadcast, it will be available on BBC Sounds.
Here is the full description:
Join psychologist and magician Professor Richard Wiseman on a journey into the strange world of mentalism or mind magic, and meet a group of entertainers who produce the seemingly impossible on demand. Discover “The Amazing” Joseph Dunninger, Britain’s Maurice Fogel (“the World’s Greatest Mind Reader”), husband and wife telepathic duo The Piddingtons, and the self-styled “Psycho-Magician”, Chan Canasta.
These entertainers all set the scene for one man who redefined the genre – the extraordinary David Berglas. This International Man of Mystery astonished the world with incredible stunts – hurtling blindfold down the famous Cresta toboggan run in Switzerland, levitating a table on the streets of Nairobi, and making a piano vanish before hundreds of live concert goers. Berglas was a pioneer of mass media magic, constantly appearing on BBC radio and TV, captivating audiences the world over and inspiring many modern-day marvels, including Derren Brown.
For six decades, Berglas entertained audiences worldwide on stage and television, mentoring hundreds of young acts and helping to establish mentalism or mind magic as one of the most popular forms of magical entertainment, helping to inspire the likes of Derren Brown, Dynamo and David Blaine. The originator of dozens of illusions still performed by celebrated performers worldwide, Berglas is renowned for his version of a classic illusion known as Any Card at Any Number or ACAAN, regarded by many as the ‘holy grail’ of magic tricks and something that still defies explanation.
With the help of some recently unearthed archive recordings, Richard Wiseman, a member of the Inner Magic Circle, and a friend of David Berglas, explores the surreal history of mentalism, its enduring popularity and the life and legacy of the man many regard as Master of the Impossible.
Featuring interviews with Andy Nyman, Derren Brown, Stephen Frayne, Laura London, Teller, Chris Woodward, Martin T Hart and Marvin Berglas.
Image credit: Peter Dyer Photographs