My guest today is Kevin Erdmann, who blogs about economics and finance at Idiosyncratic Whisk.
Kevin has written a ton about housing, as evidenced by the titles of his blog posts; a recent one is titled Housing: Part 239. This series is part of a larger book project that Kevin is publicly drafting on his blog.
We discuss the housing bubble of the 2000s and the post-2008 housing market. I took my first undergraduate economics class in 2008, just as the financial crisis was beginning, so there’s never been a time in my economics career when people weren’t talking about this. And yet, I still have so much to learn!
Kevin makes an interesting distinction between “open-access cities” and “closed-access cities.” Closed-access cities are places like San Francisco, New York, and San Jose that have restricted their housing supplies. Open-access cities are places like Houston and Phoenix with more elastic housing supplies. We talk about these factors and how they relate to the housing boom and bust, liquidity, and central bank policy.
Kevin points out that supply-side restrictions on housing construction are necessary for demand-side factors to cause housing bubbles. That’s because in a market with an elastic housing supply, more demand doesn’t result in higher prices; it just causes more homes to be built.
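That logic can be sketched with a toy linear supply-and-demand model. This is purely illustrative (the numbers and the linear functional forms are my assumptions, not anything from the episode): the same demand boom is applied to an "open-access" market with a flat supply curve and a "closed-access" market with a steep one.

```python
# Illustrative sketch, not from the episode: a demand shift moves mostly
# quantity when supply is elastic, and mostly price when supply is inelastic.

def equilibrium(demand_intercept, demand_slope, supply_intercept, supply_slope):
    """Solve demand P = a + b*Q (b < 0) against supply P = c + d*Q (d > 0)."""
    q = (demand_intercept - supply_intercept) / (supply_slope - demand_slope)
    p = supply_intercept + supply_slope * q
    return p, q

# Demand: P = 500 - 1*Q; a demand boom raises the intercept to 600.
for label, supply_slope in [("open-access (elastic supply)", 0.1),
                            ("closed-access (inelastic supply)", 5.0)]:
    p0, q0 = equilibrium(500, -1, 100, supply_slope)
    p1, q1 = equilibrium(600, -1, 100, supply_slope)
    print(f"{label}: price {p0:.0f} -> {p1:.0f}, quantity {q0:.0f} -> {q1:.0f}")
```

With these made-up parameters, the identical demand boom raises the closed-access price roughly ninefold more than the open-access price, while open-access quantity expands far more.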
...
https://www.youtube.com/watch?v=NKxUj4m4WA0
This podcast episode was originally posted on September 5th, 2014.
In this episode, Diana Thomas discusses the relationship between the Virginia School of Political Economy and the Austrian School of Economics. Diana is an Associate Professor of Economics at the Heider College of Business at Creighton University.
The Virginia School is a branch of public choice, the application of the tools and techniques of economics to the study of political actors. The Virginia School’s founders, James Buchanan and Gordon Tullock, were the first to systematically apply a rational choice framework to the study of politics in The Calculus of Consent.
Two assumptions commonly made by neoclassical economists are the “benevolence assumption” and the “omniscience assumption.” The benevolence assumption is implicit in normative analysis of what governments “ought” to do, as this assumes that political actors are motivated to maximize the common good rather than pursuing their self-interest. This assumption is challenged by public choice economists. The omniscience assumption is at play in economic models that depict the economy as being in equilibrium, whereby nobody is misinformed of or surprised by economic reality. This assumption is challenged by Austrian economists.
The omniscience assumption implies that the economy should be possible to rationally plan, an idea that Mises and Hayek debunked in the socialist calculation debate of the 1920s and 30s.
As Diana states in her paper, Entrepreneurship: Catallactic and Constitutional Perspectives, “both Buchanan and Tullock reference Mises’ Human Action as the central reference for their understanding of methodological individualism.” The Virginia and Austrian schools also share common understandings of rationality and of self-interest.
Diana draws a parallel between Israel Kirzner’s distinction between calculative and entrepreneurial action and Buchanan’s distinction between reactive and creative action. While calculative or reactive action consists in simply responding to known incentives and constraints, entrepreneurial or creative action consists in envisioning a future that is different from the present and in acting on that expectation. Kirzner applies the concept of entrepreneurship to businessmen seizing anticipated arbitrage opportunities in the market. Buchanan applies the concept of creative action to political actors attempting to reform constitutional rules.
Buchanan conceives of constitutional rules as being made behind a “veil of uncertainty” since it is beyond political actors’ ability to predict in precisely what situations the rule will be applied, and whether their own self-interest will be served or hurt in those situations.
Diana believes that political action is m
...
https://www.youtube.com/watch?v=c7aOPRCI-OM
This podcast episode was originally posted on December 23rd, 2016.
Petersen: You’re listening to Economics Detective Radio. My guest today is Judy Stephenson of Oxford University’s Wadham College. Judy, welcome to Economics Detective Radio.
Stephenson: Thank you very much. It’s nice to be here.
Petersen: So, our topic for today is economic history. Specifically, we’ll be looking at some interesting research Judy has done on wage rates in the early modern period in London. This period is particularly interesting because it’s the start of the Industrial Revolution, which leads to a dramatic increase in the growth of living standards and of technology, and that trend, of course, is what has shaped our modern world and made it different from the world of the past. So, it’s very important to understand this period if we want to understand the world as it is now. So Judy, start by giving us some historical background. What was the world like in the period you study?
Stephenson: Well, I work mostly on researching London, so urban environments. And London is very developed in this period between about 1600 and 1800. And London becomes the biggest city in the world during this period and as the biggest city in the world it’s hugely vibrant, some of the largest merchant houses in the world are there, banking is advanced and developing. Most of the occupations of London are tertiary or service sector, even at this early date.
The river is a huge source of both transportation and work, the port is where much of the capital, both physical and financial, from around the world comes through the city, and the professions and bureaucracy are well established in London in this period. It’s growing exponentially at all levels of society, from the very poorest to the very richest. So, if you look at population growth overall in the U.K. in the late 17th century, from 1600 to 1700, it’s actually pretty much stable or slightly declining. But the population of London grows by a third or something in that period.
London is this hugely vibrant commercial, social, and cultural center, and it’s pretty much overtaken Amsterdam, which has come to the end of its golden age in the mid-17th century, right at this period. So, although the world more generally and in a wider sense can be typified by pre-industrial or agrarian values, London is very commercial in this period.
Petersen: Okay, so, if I were to get in a time machine and go back in time, maybe London would be more familiar to me, would seem, feel more modern than almost any other place.
Stephenson: I think it would be very familiar to you. The way of getting around would be a sedan chair or a carriage. You can hire them on the street; in fact, you send your boy out to get one. It looks very li
...
https://www.youtube.com/watch?v=Pp23ZWwPDHI
Here's what you can expect to encounter in your undergraduate economics degree from a North American university. Post a comment if you have specific questions!
...
https://www.youtube.com/watch?v=YwlTnRsj72s
This podcast episode was originally posted on June 30th, 2017.
http://economicsdetective.com/2017/06/replicating-anomalies-financial-markets-hou-xue-zhang/
In this episode, I have three guests on the show with me: Kewei Hou of Ohio State University, Chen Xue of the University of Cincinnati, and Lu Zhang of Ohio State University.
Kewei, Chen, and Lu have coauthored a paper titled “Replicating Anomalies,” a large-scale replication study that re-tests hundreds of so-called “anomalies” in financial markets. An anomaly is a predictable pattern in stock returns, or stated differently, it is a deviation from the efficient markets hypothesis. Their abstract reads as follows:
The anomalies literature is infested with widespread p-hacking. We replicate the entire anomalies literature in finance and accounting by compiling a largest-to-date data library that contains 447 anomaly variables. With microcaps alleviated via New York Stock Exchange breakpoints and value-weighted returns, 286 anomalies (64%) including 95 out of 102 liquidity variables (93%) are insignificant at the conventional 5% level. Imposing the cutoff t-value of three raises the number of insignificance to 380 (85%). Even for the 161 significant anomalies, their magnitudes are often much lower than originally reported. Out of the 161, the q-factor model leaves 115 alphas insignificant (150 with t less than 3). In all, capital markets are more efficient than previously recognized.
We discuss the process of replicating these anomalies, issues involving the use of equal-weighted vs value-weighted returns, and the problems of p-hacking in finance research.
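A toy simulation can show why the higher t-cutoff matters. This is not the authors’ code; it simply tests hundreds of "anomalies" that are pure noise (my assumed sample sizes) and counts how many clear |t| > 1.96 versus |t| > 3, illustrating how multiple testing manufactures spurious significance.

```python
# Toy illustration of multiple testing / p-hacking: when you run hundreds of
# tests on pure noise, some clear the conventional 5% bar by chance alone,
# but far fewer clear a t-cutoff of three.

import random
import statistics

random.seed(42)

def t_stat(sample):
    """t-statistic for the sample mean against a null of zero."""
    n = len(sample)
    mean = statistics.fmean(sample)
    standard_error = statistics.stdev(sample) / n ** 0.5
    return mean / standard_error

n_anomalies = 447   # same count as the paper's data library
n_months = 600      # assumed: ~50 years of monthly return observations

t_values = [abs(t_stat([random.gauss(0, 1) for _ in range(n_months)]))
            for _ in range(n_anomalies)]

sig_conventional = sum(t > 1.96 for t in t_values)
sig_strict = sum(t > 3.00 for t in t_values)
print(f"|t| > 1.96: {sig_conventional} of {n_anomalies} noise 'anomalies' look significant")
print(f"|t| > 3.00: {sig_strict}")
```

By construction none of these "anomalies" is real, yet roughly 5% of them should pass the conventional threshold, which is the false-discovery problem the stricter cutoff is meant to contain.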
Works Cited
Hamermesh, Daniel S. 2007. “Replication in Economics.” Canadian Journal of Economics 40(3): 715–733.
Hou, Kewei, Chen Xue, and Lu Zhang. 2015. “Digesting Anomalies: An Investment Approach.” Review of Financial Studies 28(3): 650–705.
Hou, Kewei, Chen Xue, and Lu Zhang. 2017. “Replicating Anomalies” (June 12, 2017). Charles A. Dice Center Working Paper No. 2017-10; Fisher College of Business Working Paper No. 2017-03-010.
...
https://www.youtube.com/watch?v=QnQKTJqp_Lw
To download the episode, read the full transcript, and access all the links we mentioned, go here: http://economicsdetective.com/2016/11/space-debris-governance-economics-space-alex-salter/
To subscribe to the podcast on iTunes, go here: https://itunes.apple.com/ca/podcast/economics-detective-radio/id914356499?mt=2&uo=4&at=11lSv3
To subscribe on Android, go here: http://subscribeonandroid.com/economicsdetective.libsyn.com/rss
To subscribe on Stitcher, go here: http://www.stitcher.com/s?fid=53265&refid=stpr
So Alex, let’s start by talking about space debris. What is it and why does it matter?
Salter: So space debris is basically junk in space that no longer serves any useful purpose. So as you can imagine, since the first piece of space debris launched up in 1957—which was the rocket body from Sputnik I—a lot of orbits around the Earth, especially low Earth orbit, have become kind of cluttered with space junk. And the reason it gets cluttered is because no one has an incentive to clean it up.
It’s a problem because a lot of this stuff is big enough and moving fast enough that if it strikes something like a communications satellite, it can take it out. So the probability of a collision right now that will cause serious damage is currently low, but there are a lot of worries among scientists who study the problem that as debris occasionally collides with more debris, you get a sort of snowballing effect of the clutter. So if we’re going to get a handle on it, it needs to be sooner rather than later.
Petersen: I think intuitively it seems like the sky is so big and satellites are so small that we’d never have to worry about collisions. So why is that not the case?
Salter: So there’s obviously quite a bit of room up there, but the problem is that some orbits are more valuable than others. In particular, geosynchronous orbit, which is I think 36 thousand kilometers above the Earth, is a really valuable place for specific satellites. And also low Earth orbit is a valuable place for specific satellites. Now, there’s still a lot of room there, but it’s significantly restricted. If my communications satellite is taking up a particular orbit, your satellite can’t be in the same place. So there’s only so much of it to go around, and again, what we’re really worried about is debris colliding with something, which creates more debris, which can collide with more stuff. We’re really worried about that snowball effect, which is sometimes called the Kessler syndrome after the scientist who first wrote about it.
Petersen: So the odds of a single collision might be low, but given one collision, it becomes much more likely that we’ll have two and three and four—a chain reaction of collisions.
Salter: Exactly. So right now the probabili
...
https://www.youtube.com/watch?v=ituWU1Sy1ao