A great sign appeared in heaven: a woman clothed with the sun, with the moon under her feet and a crown of twelve stars on her head. 2 She was pregnant and cried out in pain as she was about to give birth. 3 Then another sign appeared in heaven: an enormous red dragon with seven heads and ten horns and seven crowns on its heads. 4 Its tail swept a third of the stars out of the sky and flung them to the earth. The dragon stood in front of the woman who was about to give birth, so that it might devour her child the moment he was born. 5 She gave birth to a son, a male child, who “will rule all the nations with an iron scepter.”[a] And her child was snatched up to God and to his throne. 6 The woman fled into the wilderness to a place prepared for her by God, where she might be taken care of for 1,260 days.
7 Then war broke out in heaven. Michael and his angels fought against the dragon, and the dragon and his angels fought back. 8 But he was not strong enough, and they lost their place in heaven. 9 The great dragon was hurled down—that ancient serpent called the devil, or Satan, who leads the whole world astray. He was hurled to the earth, and his angels with him.
See? War in heaven, Satan cast down to earth. But why 23 September 2017? The Bible doesn’t say that! We have to go to another source: astrology.
All the counsel you have received has only worn you out. Let your astrologers come forward, those stargazers who make predictions month by month, let them save you from what is coming upon you. Surely they are like stubble; the fire will burn them up. They cannot even save themselves from the flame… Each of them goes on in his error; there is not one that can save you.
But you need astrology to explain all those strange references to a pregnant woman with stars on her head, a dragon, and signs in heaven. According to some, these are references to constellations (‘ware that link — it’s a manic YouTube video by a loon babbling a mile a minute). The woman is Virgo; the moon is at her feet on that date; the constellation Leo with 9 stars is above her head; Jupiter is passing through her belly, so she’s giving birth to Jupiter. The International Space Station is also passing by, which is supposedly significant, but I couldn’t bear to listen to the video any more to figure out why.
The September rapture date came from a Christian researcher named David Meade who calculated it would occur 33 days after last month’s eclipse, The Washington Post reported.
Jesus lived for 33 years. The name Elohim, which is the name of God to the Jews, was mentioned 33 times [in the Bible], Meade told the newspaper. It’s a very biblically significant, numerologically significant number. I’m talking astronomy. I’m talking the Bible … and merging the two.
Another factor is Nibiru. Nibiru is a wandering planet in our solar system that the aliens of Zeta Reticuli explained to a human alien contactee through the implant they put in her head. It’s also based on the ravings of ancient astronaut fanatic Zecharia Sitchin. Anyway, they’re saying Nibiru is going to smack into the earth in a couple of days.
So now you know why people think the world will end on Saturday. The evidence is a series of stretched metaphors from the trippiest chapter of the Bible; astrological alignments; the ravings of a saucer kook; a story from an ancient aliens conspiracy theorist; and numerology. I think you are capable of evaluating the claim from the quality of the evidence, so I’ll leave you to decide whether you need to start preparing for doomsday.
Patrik Hermansson is a young Swedish man who went undercover to explore the American alt-right movement. He works with a group called Hope Not Hate, and they’re working on a movie, My Year in Kekistan.
It doesn’t sound like he had a good time. I also hope he’s now taking precautions — he was dealing with dangerous, horrible people, and they’re not going to be happy about being exposed. He’s got video of these people saying vile things and revealing their true plans. And now they’re getting written up in the New York Times.
Mr. Hermansson and Mr. Jorjani met at an Irish pub near the Empire State Building, where the baby-faced Mr. Jorjani imagined a near future in which, thanks to liberal complacency over the migration crisis, Europe re-embraces fascism: “We will have a Europe, in 2050, where the bank notes have Adolf Hitler, Napoleon Bonaparte, Alexander the Great. And Hitler will be seen like that: like Napoleon, like Alexander, not like some weird monster who is unique in his own category — no, he is just going to be seen as a great European leader.”
More shockingly, Mr. Jorjani bragged about his contacts in the American government. “We had connections in the Trump administration — we were going to do things!” he said at one point. “I had contacts with the Trump administration,” he said at another.
His connections, fortunately, seem to have been indirect and tangential, but it does reveal the grandiose delusions of importance these people have. Another guy he met with was always wearing a Hitler Youth-style outfit. They are backwards-looking dipshits, but don’t underestimate them.
This Jorjani fellow, though…I’d recently run across that name in the Chronicle of Higher Ed as the subject of criticism.
We especially write in response to news reports that have identified Iranian-American Jason Reza Jorjani, who received his Ph.D. in philosophy from Stony Brook University, as one of the co-founders of the white nationalist website altright.com and a member of its board of directors. It is clear to us that Jorjani uses his training in higher education to promote a controversial cultural and historical platform that connects Iranianness with Aryanness. Unfortunately, Jorjani’s position has a long-standing grip in our communities. This belief is animated by claims made by 19th century philologists about linguistic affiliations between Persian and European languages, as well as the narratives of the Avesta and the Gathas, which describe Aryans as a group of ethnically distinct people settling in the Iranian plateau.
Speaking of delusional…I don’t think an Iranian is going to be very popular among American hate groups. He can protest all he wants about 19th century philologists classifying his people, as well as the Indians of South Asia, as belonging to the fictitious category of the “Aryans”, but these haters aren’t sophisticated enough to make that distinction. Brown and foreign is all they’re going to see.
So how are they going to get Adolf’s picture on our currency? Simple. Undermine people’s trust in the system, and radicalize the youth. Promote people who lean their way. Shuffle the gullible off farther and farther to the right (yeah, if you’re on /pol/ or r/The_Donald, are flaunting Pepe memes and think torch-lit marches with white nationalists are cool, you’re just a gullible fool, a sheep following a goat).
The extreme alt-right are benefiting immensely from the energy being produced by a more moderate — but still far-right — faction known as the “alt-light.”
The alt-light promotes a slightly softer set of messages. Its figures — such as Milo Yiannopoulos, Paul Joseph Watson and Mike Cernovich — generally frame their work as part of an effort to defend “the West” or “Western culture” against supposed left-liberal dominance, rather than making explicitly racist appeals. Many of them, in fact, have renounced explicit racism and anti-Semitism, though they will creep up to the line of explicitly racist speech, especially when Islam and immigration are concerned.
This apparent moderation partly explains why they tend to have much bigger online audiences than even the most important alt-right figures — and why Hope Not Hate describes them as “less extreme, more dangerous.” Alt-light sites like Breitbart, formerly home to Mr. Yiannopoulos, as well as Prison Planet, where Mr. Watson is editor at large, draw millions of readers and are key nodes in a hyperkinetic network that is endlessly broadcasting viral-friendly far-right news, rumors and incitement.
Wait. Yiannopoulos and Watson and Cernovich are light messengers of fascism? They always sound insanely regressive and rotten to me. Intellectual lightweights, maybe, but they spread a terribly vile message. Shying away from using the N-word while still advocating for oppression, deportation, and exploitation isn’t much of a softening.
If we accept this hypothesis of media being used to gradually radicalize people (which I do), it’s unfortunate that there isn’t more mention of YouTube. There’s a bit, but in my experience, YouTube has been an important potentiator of alt-right lies and arrogance.
This goal of mainstreaming is an abiding fixation of the far right, whose members are well aware of the problems their movement has had with attracting young people in recent decades. At one point in Mr. Hermansson’s footage, Colin Robertson, a far-right YouTube personality who goes by the name Millennial Woes, explained to an older extremist the importance of putting forward a friendly, accessible face: “If we don’t appear like angry misfits, then we will end up making friendships with people who don’t agree with us,” he said.
There are people with the confidence to make videos openly endorsing anti-feminism and anti-immigration sentiments, but even more chilling, there are hordes of hateful losers who turn the comment sections of virtually every video into a churning mess of misogyny and racism. There’s the easy on-ramp to alt-right radicalism. It’s a slippery slope well-greased with pictures of Pepe the Frog and kekistani flags.
Milo Yiannopoulos, desperate to gather together the tattered shreds of his relevance, announced this past summer that there would be a “four day extravaganza” on the Berkeley campus that he called “Free Speech Week”. There was a preliminary list of potential speakers, including Ann Coulter, Charles Murray, James Damore, Mike Cernovich, Stephen Bannon, etc., which indicated that they were planning a total shit-show of horrible people, which certainly would test the limits of free speech. It turned out, though, they hadn’t bothered to ask most of those people, and the prospective speakers were a bit surprised to learn of it. Milo claimed to have $12 million in backing.
From the get-go, however, there have been various problems and unanswered questions, starting with the student group that was actually supposed to host “Free Speech Week.” This group, called the Berkeley Patriot, didn’t exist at all before July. Its site has five blog posts, its Facebook page shows no signs of real community and its Twitter account has 16 followers and no tweets. Both the blog and the Facebook page were started on Aug. 25 — shortly after Yiannopoulos announced he was working with this group to stage a major event on the Berkeley campus.
Despite being a tiny organization with no visible history, Berkeley Patriot had a huge ask: It not only wanted to hold events in the usual rooms offered at no charge for student events, but also wanted to rent Zellerbach Hall and Wheeler Auditorium, two of the largest venues on campus. The former of those, for instance, seats around 2,000 people and is mostly used for concerts and major performing arts events. According to the university, Berkeley Patriot was given three deadlines — Aug. 18, Aug. 25 and, finally, Sept. 15 — to sign a contract and pay the $65,000 rental fee for the two auditoriums. The students failed to do that.
Huh. Imagine that.
There is a problem lurking here with the student groups. Students get a real deal on these events: they can book any room on campus at no charge, complete with audio-visual gear and seating for anywhere from 20 to 400 students (we’re a small college, so we don’t have those 2,000-seat auditoriums). What that means is that a conservative with lots of cash can astroturf a “student group” into existence by finding one or a few compliant students and getting them to host what is essentially a non-student event that is nominally student driven. It’s possible because universities are diverse, and there will always be far right wing students in attendance to provide an entry point. The Morris North Star, the ghastly ultra-right student paper that was here at my university for a couple of years, was a case in point: there was no organic drive to support it, it was managed by just a few students, and it got external money thrown at it…and it fell apart as soon as a few students graduated and the money bags stopped being delivered.
Milo Yiannopoulos, by the way, is a college dropout who has no connection at all to Berkeley. He’s the very definition of an outside agitator taking advantage of loopholes in college administration.
But it turns out that they — Milo and the students — were either incompetent or had a sneakier plan in mind. They aren’t going to have an official room or rooms or building for this event, so instead, they’re inviting random mobs of the kind of people who want to hear Coulter or Cernovich to show up and march around the campus. He’s nurturing this narrative that they were unjustly denied official space by Berkeley to fuel resentment. His little gang of neo-Nazis will wander around, being nasty, and when Berkeley rightfully cracks down on them, he’ll howl about persecution.
The alt-right thrives on the idea that it is being oppressed by violent leftists, a narrative that was in danger of dying out after a white supremacist killed a peaceful counter-protester and injured many others with a terrorist-style attack in Charlottesville. With his Berkeley event, Yiannopoulos has created and nurtured an atmosphere of right-wing grievance and anger — and now his gathering will happen outside, on the streets, with maximum opportunity for violent clashes between right-wing racists and counter-protesters. You might almost think that was how he designed it.
As if disrupting the work of the university is something Nazis should be allowed to do.
Next up: scientists will discover that the skin texture under your tattoo will change with age, that the shape of your body can distort the shape of your tattoos, and most horrifyingly, that people with tattoos have pigmented inks permanently discoloring their skin!
I’ve mentioned before that I don’t use the classroom to proselytize atheism. I have a job to do, and that is to help the students learn biology, and that’s all I care about — that they graduate after a few years and understand the concepts and can apply them, and if they can do that while believing in Jesus or Allah, that’s just fine.
There’s another thing I don’t do, and that is penalize them for their health or situation. You’ve got clinical depression or your grandmother died or you had a nasty break-up with your romantic friend? I’ll make what accommodations I can, because I want you to get through all of that and learn biology. All I can judge you on is your mastery of the material, but I will welcome any changes that can help you out.
But all too often I run into non-academics (and sometimes even academics) who don’t understand this basic idea, that we’re supposed to help our students learn. So someone like Margaret Wente can write drivel like “Why treat university students like fragile flowers?”
The first answer is that we don’t. We have standards that have to be met in order to pass a course, and they’re not “be free of mental health concerns” or “have a stable family life” or “be rich enough that you don’t have to work part-time”. If you have an illness that makes mastering the course material difficult for you, that doesn’t mean you get a free pass; it means you should talk to me and I’ll do what I can to give you the opportunity to learn it in spite of your handicap. My job is to make all the flowers blossom, not to make half of them wither if they need a little extra watering.
However, there are things that Wente objects to.
Today, any proper university has registered therapy dogs to cheer you up. If exams have you down, drop in for a lick and a cuddle and you’ll feel better in no time. And if you’re too depressed because of Grandma, no problem. The disability office will provide you with a private room and extra time to write your final. Your professor never even needs to know.
Today, colleges and universities are highly concerned with the mental well-being of their students. Student distress, we’re told, is at an all-time high. It’s the pressure. The competition. Social media. Career anxiety. Long commutes. Money worries. Cyberbullying.
Therapy dogs are bad? Why? I want a therapy puppy to visit when grading gets me down! I suspect students learn better when they’re less stressed. All I care about, remember, is student learning.
I have students who take their exams at our office of student learning. We have students with agoraphobia, with test anxiety, who are easily distracted, who have language issues and need extra time. Why shouldn’t they get an environment that reduces those concerns and allows them to demonstrate their knowledge better? Why does Margaret Wente think learning has to be a stress test?
Meanwhile, the definition of “disability” – originally used for physical issues – has expanded beyond recognition. Now, it includes not only learning disabilities, but all manner of mental, social and cognitive disorders – anxiety, depression, OCD, ADHD, PTSD and the like. These may also require special accommodation. As a consequence, universities now routinely give students extra time to write exams and finish assignments. But not all professors are happy about this. But it’s not up to them any more – it’s up to the ever-expanding disability bureaucracy.
Wait. So we should accommodate ex-military students, for instance, who’ve had an arm blown off, because that’s a visible injury, but students with bodies intact but suffering from PTSD don’t count? Why? If my university provides the resources to reduce anxiety for anxiety-prone students, why shouldn’t we take advantage of it? It’s not as if anxiety, depression, OCD, ADHD, or PTSD make you stupid and incapable of learning cell biology or genetics; it means there are extra hurdles for you to overcome, and hey, if we can clear away the barriers to learning, I’m all for it.
But they get extra benefits, like more time to work on an exam, and that’s not fair! It’s also not fair to be afflicted with depression or migraines or PTSD. We’re not demanding that every student be equally traumatized to create a level playing field, you know. The mistake is to think of education as a game where there are winners and losers rather than an experience in which we try to make sure every single student comes out at the end with more knowledge. It’s not a competition.
Wente finds someone who shares her barbaric attitudes.
Bruce Pardy, a law professor at Queen’s University, thinks the accommodation industry has gone too far. Giving someone with mental-health problems extra time to write an exam doesn’t level the playing field, he says. It simply tilts the playing field against everybody else. As he wrote recently: “The purpose of exams and assignments is not merely to test knowledge, comprehension, and analytical ability but to do so under conditions that require poise, organization, forward planning, and grace under pressure.” He says it’s like letting someone with a limp start at the 20-metre mark in a 100-metre race. The results are meaningless.
Stop with the “playing field” bullshit already! It’s not a race. It’s not a contest. I’m not trying to determine who “wins” in my cell biology class. I do test “knowledge, comprehension, and analytical ability”, because I want the students to be prepared for the next course in the sequence, or for graduate/professional school, or the workplace.
If you want to demand grace under pressure, though, I can cover that. I’ve got students who are working two jobs to pay for college. I’ve got students from broken homes. I’ve got students who were poorly served by their high schools who are working twice as hard to catch up. If we must analogize it to a race, these are students who start 20 meters behind the other students, and Pardy is complaining that we are trying to help them get to the starting line before the starting gun. We’re still going to insist that they make it to the finish line to get credit, and we even evaluate them on their performance. To decide a priori that the person with the limp can do nothing to get around the meaninglessness of their efforts is heartless and wrong.
I have no idea who Wente is, but I’m going to guess she’s conservative, and the Canadian version of a Republican. The callous disregard for others’ situation, the lack of empathy, and the inability to imagine the utility of helping all to succeed, rather than just the “winners”, is a giveaway.
For many years, I’ve been critical of attempts to suppress hate speech, especially on the part of governments. In Germany and Austria, for instance, it’s illegal to deny that the Holocaust happened. Back in 2006, I went on record to say that I didn’t think David Irving should be imprisoned by Austria for denying the Holocaust, despite the fact that he’s wrong, his lies are actively harmful and encourage anti-Semitism, and he’s a vile human being who I’d be quite happy to see punched in the face, repeatedly.
I was of the opinion at the time that these dangerous viewpoints need to be voiced if they exist, so that historians and scientists can actively combat them with the truth and so that all people can see what kind of ignorant bigotry is still out there.
I still think that argument has merits, but I’ve definitely softened my stance in the decade since, as Nazism rears its ugly head once again here in the United States. And I’ve pretty much always been of the opinion that aside from governments, private individuals and businesses have every right to censor whatever speech they want to.
That includes Reddit, where overall bigoted and offensive “edgy” speech tends to run rampant. Two years ago, the company decided to try to put a dent in that by banning some of the most obviously disgusting subreddits, with the most popular being r/fatpeoplehate and r/coontown. It’s pretty obvious from the names who the targets were: fat people (usually women) in the former and black people in the latter.
Of course, once these popular subreddits were banned, there were still plenty of smaller subreddits with similar viewpoints that continued to exist under the radar. So what happened when the ban was put in place? Did those users just relocate to other subreddits, to continue the same hate speech?
That’s the question that researchers at the Georgia Institute of Technology recently asked, and then tried to answer with science! One of the fun things about evaluating social media is that there’s so much juicy data to sort through. All you need is a smart question or two and a good algorithm for sorting through it. In this case, the researchers built their algorithm by identifying key words used in all posts made in r/fatpeoplehate and r/coontown in 2015. They compared posts made in the two hate subreddits to posts made throughout the rest of Reddit, automatically highlighting the most frequently used unique words. These included slurs and also words tangentially related to bigoted arguments, like “IQ” and “hispanics” for r/coontown. The list of words was then manually pared down to just the most bigoted slurs. The researchers could then scan Reddit for instances of those words — it’s not perfect, since you can be racist without ever uttering a slur, and of course you can use slurs in an academic way to discuss the slur itself. But when you have such a large data set, it works for noting larger patterns.
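To make the approach concrete, here is a minimal sketch of that kind of pipeline. The function names, toy scoring formula, and data are my own illustration, not the study’s actual code or word lists:

```python
from collections import Counter

def distinctive_words(target_posts, baseline_posts, top_n=10):
    """Rank words that appear far more often in the target community
    than in the rest of the site (a crude frequency-ratio score)."""
    target = Counter(w for p in target_posts for w in p.lower().split())
    baseline = Counter(w for p in baseline_posts for w in p.lower().split())
    t_total = sum(target.values()) or 1
    b_total = sum(baseline.values()) or 1
    # Add-one smoothing in the denominator so unseen baseline words
    # don't divide by zero.
    scores = {
        w: (target[w] / t_total) / ((baseline[w] + 1) / b_total)
        for w in target
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

def usage_rate(posts, lexicon):
    """Fraction of posts containing at least one word from the
    (manually curated) lexicon -- the quantity tracked before and
    after the ban."""
    lexicon = set(lexicon)
    hits = sum(1 for p in posts if lexicon & set(p.lower().split()))
    return hits / len(posts) if posts else 0.0
```

The output of `distinctive_words` corresponds to the automatically highlighted candidate list, which a human would then pare down to actual slurs before measuring `usage_rate` across users and time.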
Here are the patterns they saw: over 40% of FPH and CT users deleted or abandoned their accounts, compared to 20 to 30% of Reddit users in a matching control group. Among those who continued posting to other subreddits, their hate speech decreased by 80 to 90%. About 1,500 subreddits received influxes of FPH and CT “migrants” after the ban. However, the hate speech in those subs was mostly unaffected, meaning those “migrants” didn’t come in spewing the same bigotry they had been trading in their hate subs.
The researchers conclude that the ban “worked,” in that Reddit decreased the overall amount of hate speech without spreading the “infection” that was FPH and CT.
But the researchers also point out that the ban most likely made these users someone else’s problem. Many of them went to other websites where they could congregate and share the same bigoted ideas they were batting around on Reddit, meaning that the ban didn’t “work” to make the Internet, in general, a safer or better place.
That said, it’s worth remembering the other research we have on the danger of “echo chambers” in radicalizing people with violent and bigoted ideas, like ISIS and white nationalists. Taking away one prominent echo chamber, where participants have ample opportunity to lure in new gullible minds, may make a real difference after all.
In this episode of the Waking Up podcast, Sam Harris speaks with Ken Burns and Lynn Novick about their latest film, The Vietnam War.
Ken Burns and Lynn Novick are two of the most accomplished documentary filmmakers of our time. Their work includes The Civil War, Jazz, Baseball, The War, along with many other acclaimed films. Their most recent project is the ten-part, 18-hour documentary series, The Vietnam War, which tells the epic story of one of the most consequential, divisive, and controversial events in American history. Ten years in the making, the series includes rarely seen and digitally re-mastered archival footage from sources around the globe, photographs taken by some of the most celebrated photojournalists of the 20th Century, historic television broadcasts, evocative home movies, and secret audio recordings from inside the Kennedy, Johnson, and Nixon administrations. The Vietnam War also features more than 100 iconic musical recordings from the greatest artists of the era.
Who is more generous: atheists, or Christians? As I’ve discussed in the past, our society’s stereotype is that atheists lack morality, and so we expect Christians will be more charitable. I’ve also discussed how this stereotype is probably wrong.
I’ve also discussed stereotype threat in the past, but I’ve only really talked about it as it relates to women and people of color. To recap, stereotype threat is the psychological phenomenon that occurs when a person identifies as a member of a group with prominent stereotypes about it. That person is more likely to devote time and energy to worrying about negating (or in some cases living up to) that stereotype, meaning that they’re less likely to perform well on tasks that remind them of that stereotype. The classic example is that female mathematicians perform worse on math tests after being reminded of the stereotype that women are bad at math, and even more shockingly, they do worse on math tests even when they’re only reminded they’re women (by filling out a form saying whether they’re male or female before taking the test).
I know there are many atheists (particularly of the white, straight, male variety) who balk at the idea of stereotype threat despite the fact that it has been fairly well established by countless peer-reviewed studies by now, so I’m interested to know how they feel about this new study.
In the study “Generous heathens? Reputational concerns and atheists’ behavior toward Christians in economic games,” published last July in the Journal of Experimental Social Psychology, researchers report playing the “dictator game” with both atheists and Christians. In this classic psychological game, subjects are given a small amount of money and told that if they want, they can share any portion of the prize with a second subject, who has no say in the matter.
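The structure of the game is simple enough to sketch in a few lines of code. The sharing rule below is a hypothetical strategy I wrote to mirror the pattern the researchers report, not anything taken from the study itself:

```python
def dictator_round(endowment, share_fn, dictator_group, partner_group, identity_known):
    """One round of the dictator game: the dictator unilaterally decides
    how much of the endowment to hand over; the partner has no say."""
    gift = share_fn(endowment, dictator_group, partner_group, identity_known)
    assert 0 <= gift <= endowment, "can't give away more than the endowment"
    return gift

# Hypothetical strategy mirroring the reported pattern: give evenly when
# your (stigmatized) label is visible, favor the in-group when it's hidden.
def atheist_strategy(endowment, dictator_group, partner_group, identity_known):
    if identity_known:
        return endowment // 2   # equally fair to atheists and Christians
    if partner_group == dictator_group:
        return endowment // 2   # generous to the in-group
    return endowment // 4       # less to the out-group
```

With a $10 endowment, this toy strategy gives $5 whenever the dictator’s atheism is visible, but only $2 to a Christian partner when it’s concealed, which is the shape of the behavioral shift described below.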
Before running the games, the scientists asked a group of people to guess the results. Overwhelmingly, people thought that atheists would give less money to Christians.
In the first run of the actual game, the religion (or lack thereof) of both the “dictator” and their partner was revealed to each. In a second run, the religion of the “dictator” was concealed.
When atheists knew that their partner knew they were an atheist, they tended to be equally fair to both atheist and Christian partners. Christians, however, were more giving to fellow Christians.
Does that mean atheists are more generous than Christians? Nope! When the “dictator” believed their religious preference would remain concealed from their partner, atheists and Christians both gave more to partners from their respective groups. In other words, atheists acted just like the Christians did in the first study.
This suggests that when atheists know they are seen as “ATHEISTS,” they are more likely to work to combat the negative stereotype they know they’re saddled with. That’s an example of stereotype threat as experienced by atheists. In this case, the result may be considered “good” in that the atheists were made more fair, but it reveals a deeper problem. By struggling to appear more ethical when under the spotlight, atheists may end up suffering psychologically, or they may be dissuaded from pursuing careers that require the general public to think of them as ethical or upstanding. I mean, how many American politicians are open about being atheists? Even Mark Zuckerberg recently posted that he’s no longer an atheist and in fact is super into God now, since he’s really obviously interested in a presidential run and just about no one is ready for an atheist president. If you’re an atheist in prison looking to be let out early on good behavior, you might consider a similar miraculous conversion.
While this is interesting research, it is just the tip of the iceberg. I’m looking forward to more studies that examine how stereotype threat may be affecting the way atheists live. If we address it now, we may be able to run an actual atheist to beat Mark Zuckerberg in 2020.
German psychologists recruited 238 subjects from the United States to answer questions on how many conspiracy theories they believe in, and also asked them to rate how important it was for them to be distinctive. They found that people who wanted to feel unique were more likely to believe in conspiracy theories. (They also found that belief in one conspiracy theory was correlated with belief in a bunch of conspiracy theories, something that the literature has shown repeatedly to be the case — if you can believe we never landed on the moon, you can also believe that 9/11 was an inside job.)
The researchers then conducted a follow-up test with another 465 Americans, which supported the first study.
Of course those studies show correlation but not causation. Do people believe in conspiracies because they want to feel unique, or is it just a coincidence?
To answer that, they set up a third test, and this is where things get weird. They told about 300 American subjects about a German conspiracy theory, in which people believe smoke detectors produce “hypersound,” which is damaging to humans. They told half the subjects that the conspiracy theory was a popular one believed by the majority of Germans, and they told the other half that only a small percentage of Germans believed it.
Subjects who believed in a lot of conspiracy theories were actually more likely to believe the smoke detector theory if they thought it was unpopular, as opposed to popular. That’s like hearing 4 out of 5 dentists recommend brushing your teeth and then throwing out your toothbrush.
But that fits if you’re the sort of person who wants to feel unique, and believe it or not you’ve probably experienced this feeling yourself. Have you ever liked a band or a television show, but then felt irrationally annoyed when it gets popular? I mean, I’ve been a hipster since the 90s, so I know that pain all too well.
Here’s the crazier part of that last study: after it was over, the researchers told participants that they had made up the smoke detector conspiracy theory. Yet a full 25% of them still believed it, and that belief correlated with a high need to feel unique. I mean, it makes sense — the least popular conspiracy theory would definitely be the one that was literally just made up on the spot for the study you participated in. Get on that train quick! Soon Dr. Oz will be warning people to disable their smoke detectors and it’ll be way too mainstream to bother with.
I’ll mention that this paper happened to be published at nearly the same time as another study from another team of researchers that showed the exact same effect: people who believe conspiracy theories have a desperate need to be unique. I love it when scientific research comes together!
Continuing my previous post (which I unfortunately must do in installments due to my schedule) here is another common criticism of grading participation.
2. It is nebulous.
What exactly does participation mean? A strong criticism of participation-based grades is that participation itself is a nebulous concept. There are a wide variety of actions that could be considered ‘participation.’ Once again, this is not a problem unique to participation grading, and it is resolved, like all assessment, by an educator choosing which specific skills to assess. The real problem with nebulousness is the way that many educators actually implement their participation grades. Often ‘participation’ is just a line item on a syllabus without even a dedicated rubric, even though a rubric could fix this problem altogether. There may be a sentence or two mentioning things like speaking during in-class discussions, but it is rarely fleshed out beyond that.
This degree of vagueness does pose a real problem. It creates room for a teacher to assign grades without following consistent criteria. The issue here is more a matter of the difference between theory and practice. In theory, educators could construct robust and explicit rubrics that nail down precisely what they mean by ‘participation’ within the context of their classrooms. Yet many do not, and so participation itself becomes too unclear and arbitrary to be useful. That brings us to the next common criticism of participation grading:
3. It is misused to justify bias.
(Once again, this must unfortunately be addressed in a follow-up post.)
In this episode of the Waking Up podcast, Sam Harris speaks with Thomas Metzinger about the scientific and experiential understanding of consciousness. They also talk about the significance of WWII for the history of ideas, the role of intuition in science, the ethics of building conscious AI, the self as a hallucination, how we identify with our thoughts, attention as the root of the feeling of self, the place of Eastern philosophy in Western science, and the limitations of secular humanism.
Thomas K. Metzinger is full professor and director of the theoretical philosophy group and the research group on neuroethics/neurophilosophy at the department of philosophy, Johannes Gutenberg University of Mainz, Germany. He is the founder and director of the MIND group and Adjunct Fellow at the Frankfurt Institute of Advanced Studies, Germany. His research centers on analytic philosophy of mind, applied ethics, philosophy of cognitive science, and philosophy of mind. He is the editor of Neural Correlates of Consciousness and the author of Being No One and The Ego Tunnel.
A new study claims that the main reason people walk their dogs is because it makes them and their dogs happy. As a person who has a dog, this surprised me. The main reason I walk my dog is because if I don’t, he will literally shit all over the carpet. This will make both me and my dog very unhappy — me because I have to clean it up, and my dog because he has to deal with me angrily not giving him snuggles for up to 30 minutes after the incident in question.
The study was qualitative, meaning that it had few participants (only about a dozen dog-owning families) but was very in-depth, using interviews with each subject to look at the perceived benefits of walking a dog. The researchers found that keeping up their own (human) physical fitness was not what motivated people to walk their dogs, though it is a secondary benefit. People were more likely to go on walks because it makes the dog happy, and in turn that makes the human happy.
I wasn’t originally going to do a video about this, as I read it over and it seemed there was nothing particularly ground-breaking about it. But then I took Indy for a walk, and while on that walk I decided to make a slightly more personal video.
This is Indy. I got him 8 months ago when he was just a tiny furball. It was January, I had just gone through a painful breakup in which my ex stole my cats, and I was living on my own for the first time in years. Oh, and a random guy was making YouTube videos talking about murdering me. Yeah, that was a fun time.
I get extremely attached to animals, and I already have anxiety and depression, so I was in a pretty bad state. I went to my psychiatrist and got an increase in my SSRI, escitalopram. I also described what I was going through and asked her if she thought an emotional support dog would be a good idea for me. I felt like I needed an animal in my life but I couldn’t bring myself to “replace” the cats I was still mourning. Plus, I’ve always wanted my own dog, ever since I was a kid, and a dog would help me feel a bit safer in the event that a random man from YouTube showed up to kill me.
The psychiatrist agreed that a dog could probably help me out. So, a few weeks later, I happened across a woman who had a puppy she couldn’t keep. The puppy was living in a car and she didn’t want to give it to a shelter. She said he was a German shepherd and lab mix, which was exactly what I wanted: big, smart, cuddly, but intimidating.
Spoiler alert: Indy is almost definitely not several of those things. To make matters worse, when he was a puppy there were times when I hated his guts. Puppies are basically adorable machines that convert kibble and water into poop and pee, which they deposit everywhere. Everywhere! At all times of the day and night! I had to take him out for walks constantly in the hopes that he’d accidentally pee out there instead of on my bed. Walking him was a horrific chore, especially because it was a soggy, cold winter.
In the evening I’d be exhausted, and all I would want to do is lounge around in my underpants getting super high and/or drunk (remember, I was depressed). But I couldn’t do those things — I had to stay dressed and sober because every two hours I’d have to take this little shit machine outside. And then he wouldn’t pee; he’d wait until we got inside to do that, and I’d be up all night scrubbing the floor. For a while I wondered what the point of an emotional support dog is when he makes you more anxious and keeps you from doing the things you do to relax.
But eventually I realized that the things I did to relax were ultimately unhealthy. It was harder to keep my bra on and drink water instead of wine, and it was harder to go outside for a walk instead of watching Netflix, but it was better for me. And if Indy hadn’t been there to force me to do those things, I would be way worse off now.
But now Indy is the puppy equivalent of a tween, and he’s a good one. He no longer has accidents in the house. He rarely eats things he’s not supposed to, like napkins or USB cables. And he makes me smile pretty much every time I look at him. I mean, seriously, look at him. He’s the living embodiment of sunshine.
And he no longer needs walks every two hours, but he does manage to get both of us out of the house at least five times a day for walks, runs, and games of fetch. And sure enough, yeah, it makes him happy and it makes me happy.
I didn’t get the big, smart dog I thought I wanted, but I did get the emotional support dog I needed.
In this episode of the Waking Up podcast, Sam Harris speaks with Joseph Romm about how the climate is changing and how we know that human behavior is the primary cause. They discuss why small changes in temperature matter so much, the threats of sea-level rise and desertification, the best and worst case scenarios, the Paris Climate Agreement, the politics surrounding climate science, and many other topics.
Joseph Romm is one of the country’s leading communicators on climate science and solutions. He was Chief Science Advisor for “Years of Living Dangerously,” which won the 2014 Emmy Award for Outstanding Nonfiction Series. He is the founding editor of Climate Progress, which Tom Friedman of the New York Times called “the indispensable blog.” In 2009, Time named him one of its “Heroes of the Environment,” and Rolling Stone put him on its list of 100 “people who are reinventing America.” Romm was acting assistant secretary of energy in 1997, where he oversaw $1 billion in low-carbon technology development and deployment. He is a Senior Fellow at American Progress and holds a Ph.D. in physics from MIT. He is the author of Climate Change: What Everyone Needs to Know.
I’ve mentioned Peter Thiel before — he’s the billionaire tech bro who supports Donald Trump and paid millions to shut down Gawker Media because he didn’t like their liberal bias. He’s also the guy who maybe thinks that injecting yourself with the blood of the young will keep you alive forever or something. Yeah, he’s basically a vampiric Bond villain.
He’s just upped his Bond villain game by conducting unauthorized scientific experiments on humans on an island in the Caribbean in order to escape the ethical oversight the United States requires. That’s right, he’s gone full Dr. Moreau.
OK, so he’s not necessarily splicing together humans and hamsters or anything but he is injecting the herpes virus into people without any kind of safety oversight. Thiel is an investor in a company that claims to have a vaccine for genital herpes. But because the company is run by Libertarians, they hate the Food and Drug Administration and are hoping to prove that all it does is prevent breakthroughs in medical science. They claim that were the FDA around in the 1930s, we never would have discovered a vaccine for polio. If they manage to release a successful herpes vaccine, they’ll have made the world a better place AND proved the FDA pointless.
There are a number of problems with that, which I’ll get to in a second. First, the research: the Thiel-backed company, “Rational Vaccines” (dear lord), teamed up with Southern Illinois University to fly 20 individuals with genital herpes to St. Kitts, an island in the West Indies where you can do pretty much whatever you want, medically speaking.
According to Rational Vaccines, they gave the vaccines to the patients and found that it reduced the number of flare-ups they had, albeit with large patches of inflamed skin as an occasional side effect.
They submitted their study to peer-reviewed journals, none of which accepted it because the reviewers were horrified by the fact that the researchers were experimenting on humans with no oversight. So, we have no real idea of how successful the vaccine really was at preventing flare-ups.
Now let’s get back to the idea that this vaccine can’t happen with the FDA in place, and that the polio vaccine wouldn’t have been discovered either. First of all, there are already at least three other herpes vaccines that are being perfected in the US by universities and clinics working with the FDA as usual. Experts expect one of them to be submitted to the FDA for the final okay after completing trials next year. So yes, it is possible to create this vaccine using all the standard safety protocols that are in place to keep people from dying.
And speaking of people dying, let’s talk about the polio vaccine and the initial attempts to create it. The first attempts were in 1935, when two different teams royally screwed up and ended up giving an unsafe vaccine to thousands of children, several of whom died. More were paralyzed for life or suffered other terrible side effects.
That’s what happens when you give an improperly tested vaccine to humans without safety oversight.
And by the way, the FDA did exist back then. They just didn’t have the right policies in place to prevent tragedies like that — the history of the FDA is written in human tragedy, as after each horrific misstep, changes were made to prevent it from happening again. Are you familiar with the thalidomide tragedy of 1959? That’s when thousands of babies were born with severe deformities because their mothers took a drug for nausea. Americans, however, were spared that horrific tragedy because the FDA didn’t allow thalidomide on the market here.
Yes, red tape sucks, and sometimes it can be abused and misused. But by and large, the FDA has saved countless lives by setting a standard for what products we can give to human beings, and setting a standard for how we treat human beings in the clinical trials leading up to the release of a drug. And that’s a good thing, whether the Libertarians want to admit it or not. In fact, it’s so obviously a good thing that I wonder how much of their hatred of the FDA is simply based on wanting to get the most bang out of their pharmaceutical buck. And they’re hoping the American people will fall for their sob story about saving people from herpes. I hope you don’t fall for it. We have other vaccines, and no one has to die to get them.
In this episode of the Waking Up podcast, Sam Harris speaks with Max Tegmark about his new book Life 3.0: Being Human in the Age of Artificial Intelligence. They talk about the nature of intelligence, the risks of superhuman AI, a nonbiological definition of life, the substrate independence of minds, the relevance and irrelevance of consciousness for the future of AI, near-term breakthroughs in AI, and other topics.
There are better and worse ways to assess students, and participation is no exception. Some teachers use a “participation grade” as a justification to assign grades based on how much they like students. Obviously, that is not a good grading practice. However, there are others who use participation to assess valid and measurable things. I’m going to go over some commonly cited strengths and weaknesses of grading participation, as well as counterarguments for each (hopefully from a skeptical viewpoint). I’ll start with the criticisms.
1. It is subjective.
This is true, but almost all forms of assessment have some degree of subjectivity to them. Indeed, if we limited ourselves to only assessing things that can be measured completely objectively, we would often find ourselves assessing the wrong things. Much like the often-cited problems with standardized testing, there are valuable things that can only be assessed subjectively, and an overemphasis on objectivity results in an overemphasis on things like rote memorization. Treating objectivity as the only thing that matters is falling into the same problem as scientism. Subjectivity itself isn’t the problem with participation grades.
Some argue that participation is particularly subjective and is therefore more heavily influenced by teachers’ biases. One retort is that teaching is a profession for this very reason. Teachers undergo years of training and certification precisely to develop skill in making good decisions about subjective things. Still, teachers do have biases and are affected by them as much as anyone else, and those biases can have a profound effect on grading participation. However, I have found this is only the case when participation is being assessed poorly. A poor implementation with over-reliance on subjective measures is largely created by another problem attributed to participation:
2. It is nebulous.
(…which my next post will address. I unfortunately have an excessively busy life so my two readers will have to wait.)
Regarding my previous post on the lack of accessibility of critical thinking information, even Carl Sagan’s Cosmos has this problem. Despite its reputation (as both a book and a series) for being supremely understandable for general audiences, I found that it posed too many challenges in its basic linguistic accessibility to be useful in my classes.
Before you point out that I gave up without actually trying it out in my class, I do have evidence that my assessment of its accessibility was right. After I decided not to use Cosmos in my teaching, another teacher at my school did use several “easy” passages from the book. Students were utterly confused by them. Some even bought copies in their native language, and despite reading it together with some of the highest-level students in their grade, they were unable to understand it.
You might be thinking that I am projecting my own students’ ability onto others, and you are right to some extent. But I must point out that there’s a reason why atheists and skeptics tend to be better educated than average. Most of the arguments for critical thinking are exclusively presented at an advanced level. There’s a cognitive bias called the “curse of knowledge,” in which we assume that others have the same background knowledge we do. Assuming that Sagan is going to be suitable for general audiences is falling into this bias. For many, he’s just not. He is certainly accessible in the sense that you don’t need a background in astrophysics to understand him, but there is still a prerequisite of prior knowledge.
When skeptics and atheists ignore this, they are missing a huge audience. This is the echo chamber that many in the movements talk about. Because we spend so much time interacting with people and material at our own level, it is easy to forget that the majority are elsewhere. Most people don’t have the basic background knowledge that skeptics take for granted, and it can take years to build up.
The language issue is even larger. Though the skeptical movement is not exclusive to English, it operates predominantly in English. Even international skeptics’ groups often use English sources. (Not to mention English’s vast over-representation in scientific publications.)
There are estimated to be over a billion non-native English speakers in the world. By presenting things in the way that we do, we fall into the curse of knowledge bias and create barriers of inaccessibility for literally hundreds of millions of people who might otherwise benefit from a more skeptical mindset. By keeping the dialogue at a high intellectual level, we might actually be shooting ourselves in the foot.
However, I must address a counterargument to my position: the issue of oversimplification. In making things accessible, simplification must occur. This presents a problem with truly complex topics (which, as it turns out, almost every topic is). To be understandable, ideas need to be explained in more basic terms, but in doing so there is a risk that they are simplified to the point of being incorrect. Which case is worse: a person misunderstands something because it was too complicated, or a person misunderstands something because it was oversimplified?
To some extent, we need oversimplification anyway. No one can know everything about every topic. But topics can’t be simplified without losing their nuance; by definition, that’s what simplification does. There’s a huge problem in the skeptic movement with a lack of appreciation for nuance. But a big part of this problem comes back to the fundamental issue of certain information being too difficult for most people to understand.
I hear skeptics say “we need better education in critical thinking” all the time. When I try to do that for my students, I find myself on an island, surrounded by water that I can’t drink. I wind up needing either to rework all the materials that are out there until they are basically understandable, or to create new ones from scratch. We need to rethink the concept of general audiences and start prioritizing them in our work.
Since “accessibility” has so many different meanings in education, let me start by clarifying that I am referring to students (and general audiences) being able to access the meaning of information they encounter. This seems to be a weak point in the skeptic, atheist, and critical thinking movements, and something I have failed in as well.
There are two main subdivisions of this point: accessibility in terms of having sufficient background knowledge or expertise in a particular subject, and accessibility in terms of the construction and structure of the language.
In struggling to find materials to teach critical thinking to ESL students, I have realized that the majority of the great information and arguments are completely inaccessible on both counts. For example, to explain responsibilities in giving evidence, quite a few sites directed me to this video. For me, the topic is explained clearly in an easy-to-understand way. However, I am taking for granted my educational background, which makes this accessible to me.
That video begins with this sentence:
“Imagine someone tells you that somewhere beneath the surface of Pluto there’s a tiny werewalrus that sends them psychic messages every midnight while juggling skulls on an indigo plinth.”
Though I teach at a school for very high-level students who specialize in learning foreign languages (and specifically English), the wording of that sentence is extremely inaccessible to them. My students, for example, mostly don’t know words like “psychic” or “indigo,” much less “plinth” or “were-anything.” These just are not everyday terms for most people.
The video also later goes on to address related problems in religious apologetics. Without any prior knowledge of these kinds of arguments, it is extremely difficult to fully grasp what the video is saying.
None of this is to say that the video is bad. I spent a long time trying to figure out how I could find a way to use it in my class because I think it is excellent. However, I just couldn’t find a feasible way to make it accessible for my students. The actual topic is fully within their ability to grasp, but the message is constructed in a way that they cannot get.
Brian Dunning’s video Here Be Dragons also came with similar recommendations. Its description claims:
“Here Be Dragons is a free 40 minute video introduction to critical thinking. It is suitable for general audiences and is licensed for free distribution and public display.”
As above, I found that “general audiences” does not include anyone remotely resembling my students. The video’s third sentence reads:
“Hypothesized dragons seemed a good enough explanation for what would otherwise be ungraspable.”
Once again, I am not claiming this video is bad, just inaccessible. It is not just the vocabulary: the actual construction of sentences like this is extremely difficult for non-native speakers to understand. Many of my co-workers, who have advanced degrees in English education, often ask me to explain sentences like this. They ask things like “how is ‘seemed a good enough explanation’ different from ‘seemed to be a good enough explanation’?”
These subtle distinctions are barriers to understanding that most students (and in fact, most people) do not cross. They think, “Is it worth taking the time to really figure this out? Probably not.”
We have a situation in which there is an abundance of great skeptical material, but a dearth of truly accessible material (a category to which this post certainly does not belong).
There is more I have to say on this, but I have a hard time limit so I will have to address the counterarguments in a further post where I talk about the curse of knowledge bias.
Here’s a paper, and associated website, that we launch today: we have assessed, and then ranked, all the biggest drug companies in the world, to compare their public commitments on trials transparency. Regular readers will be familiar with this ongoing battle. In medicine we use the results of clinical trials to make informed decisions about […]
By now I hope you all know about the ongoing global scandal of clinical trial results being left unpublished, and of course our AllTrials campaign. Doctors, researchers, and patients cannot make truly informed choices about which treatments work best if they don’t have access to all the trial results. Earlier this year, I helped out […]
We’re great at spotting biases in others, but absolutely incompetent at finding them in ourselves. Even if we know exactly what to look for and we’ve got a ton of intellectual humility, noticing the effects of our own biases on our own thoughts is like looking for colored glasses while wearing them.
The introspection illusion is a cognitive bias in which we think we understand our own mind (and therefore find other minds to be unreliable). It’s a kind of backwards justification. We have a feeling about something, and then rationalize why our feeling would be justified. We think our intuitive and irrational impressions are things we came to by deep, reasonable thought.
When we feel like we’ve got a strong, rational argument for our own thinking, any different opinion appears to be obviously ill-reasoned, or even sinister.(1) For example, you hear a fact about a field outside of your study that sounds ludicrous to you. At this point, you don’t realize that you’re building a strawman from your own misunderstanding about the fact. Out of context, it appears to have no support and just be crazy. It doesn’t seem like a hasty generalization to assume that the whole field, believing something this crazy, must therefore be fundamentally flawed. Clearly, they have not thought this through. We, on the other hand, have.
Here, the introspection illusion took hold. It started with a gut reaction to something, and as the brain built its own support for the conclusion it had already reached, the feeling snowballed into a rationalization. Our own thoughts seem clear and justified, so everyone else must just be crazy. Perhaps we should pity them, because they are victims of this terribly devious indoctrination into this totally bogus field of Gend– *ahem* I mean, this totally bogus field in this purely hypothetical example.
Recent atheist missteps aside, the introspection illusion seems to be a key culprit behind a number of problems that have plagued the skeptic movement for years, as well as a reason why some students reject any hint of critical thinking in the classroom out of hand. Consider the poorly constructed arguments against evolution that are repeated even to this day, like dogs turning into cats or the continued existence of monkeys. These topics have an implication that some people don’t like. That negative feeling towards the implication immediately transforms into a backwards justification, arguing against a massive strawman, and the introspection illusion amplifies the feeling that we’re right and others are wrong, to the point that we can’t even begin to pick apart the tangle we’ve gotten our minds into. The worst part is that, despite our belief otherwise, we can’t actually find the starting point in all of this. Whatever started the snowball is lost so deep in the middle of it that we have no way to see it.
(1) Assuming that others know they are wrong and are promoting false information because they have evil intentions sounds a lot like what religious extremists, presuppositionalists, antivax advocates, and conspiracy theorists say, doesn’t it?
Robin Ince just asked if I know any epidemiologist lightbulb jokes. I wrote this for him. How many epidemiologists does it take to change a lightbulb? We’ve found 12,000 switches hidden around the house. Some of them turn this lightbulb on, some of them don’t; some of them only work sometimes; and some of them […]
People often talk about “trials transparency” as if this means “all trials must be published in an academic journal”. In reality, true transparency goes much further than this. We need Clinical Study Reports, and individual patient data, of course. But we also need the consent forms, so we can see what patients were told. We need […]
Today marks the end of an era. The International Journal of Epidemiology used to be a typical hotchpotch of isolated papers on worthy subjects. Occasionally, some were interesting, or related to your field. Under Shah Ebrahim and George Davey-Smith it became like nothing else: an epidemiology journal you’d happily subscribe to with your own money, and read in […]