Julia Galef from http://measureofdoubt.com talks about why rationalists are more likely to abandon social norms like marriage, monogamy, standard gender roles, having children, and so on. Is that a rational attitude to take?
Julia argues that you shouldn't dismiss an argument just because the person making it has some other, unrelated belief you think is wrong or weird.
(NOTE: The examples I'm giving in the video of "weird" beliefs aren't meant as my personal opinion. I'm not taking a position on whether Christianity, cryonics, or libertarianism are in fact "weird" or wrong. They're just examples of excuses I've heard other people use to dismiss someone's other arguments.)
Learn more about me at http://juliagalef.com
Julia Galef from http://measureofdoubt.com talks about the dangers of identifying yourself with a particular group, and how it can distort your thinking.
What do people mean by rationality? Julia Galef from http://measureofdoubt.com discusses the various ways people understand the word "rationality" and how the different meanings relate to each other.
(ETA: Changed the title, since it was misleading!)
How is rationality like artificial intelligence? One connection is that both fields are interested in how to handle interdependent beliefs. In this video I explain why your brain is *not* like an AI, and why that means you end up believing contradictory things.
I describe two mistakes people often make when trying to solve paradoxes like Newcomb's Problem, Sleeping Beauty, and more.
POST on the "Least Convenient Possible World" principle: http://lesswrong.com/lw/2k/the_least_convenient_possible_world/
You might think you're helping someone when you offer them unsolicited criticism -- but are you really? Julia outlines 5 rules for giving criticism that's less likely to offend, and more likely to actually help.
There are two common policies for when people disagree over moral behavior: confrontation and tolerance. I propose a third alternative: engagement.
Here's the Ideological Turing Test I mention in the video: https://en.wikipedia.org/wiki/Ideological_Turing_Test