Friday, March 6, 2015

I usually check out Jonathan Chait's blog once or twice a week, and I usually ignore the "Most Viewed Stories" column to the right of the page. Recently, though, one of the items caught my eye.
"Who Was That at the End of the New Avengers Trailer, and Why Should You Be So Excited?"
The link led to a Vulture.com post about the comic book character the Vision, followed by some speculation about what part he might play in the upcoming Avengers movie. Looking over the article, it struck me that this is an amazing time to be a publicist. We have gone from publications hyping movies to hyping trailers to hyping two-second shots in trailers.
Thursday, March 5, 2015
Cartoon Metalogic
Bud Grace, creator of the comic strip the Piranha Club, is a former nuclear physicist -- No, really, look it up -- and every now and then a bit of STEM humor pops up.
For investors, all money isn't created equal
In our most recent discussion of driverless cars, I made the following assertion about Google:
Google has a lot of reasons to want to be seen as a diversified, broadly innovative technology company, rather than as a very good one-trick pony cashing in on a monopoly (possibly two monopolies depending on how you want to count YouTube). A shiny reputation helps to keep stock prices high and regulators at bay.

I didn't really think of it at the time, but this concern is a point we have touched on tangentially in the past and one that probably deserves more direct scrutiny. Investors often care a great deal about where a company's money comes from. That concern is frequently neither rational nor consistent, and it often leads companies to mislead the public about the makeup of their revenues.
Here are a couple of examples. I am going to be rather vague about some details because, you know, lawyers, but the broad outlines are accurate and the circumstances are common enough that I could always find other cases if pressed. The first involved a financial services company that had products for customers at both ends of the economic spectrum. If you were to look at the company as an outsider, or even as a new employee, you might very well assume that the two divisions were roughly equal. You might even suspect that the upscale division was the more profitable one.
In reality, a large majority of the company's profits came from the low end. It turned out that, in this case, the profit margin on services for poor people was much higher. Stockholders, however, did not particularly like products that served this demographic. Also, having a heavily promoted line of products for upper-class people did wonders for the stock price.
Here's another example:
The bank in question was in the middle of a very good run, making a flood of money from its credit card line, but investors kept complaining that the bank was making all that money the wrong way. This was the height of the Internet boom but the bank was booking all of these profitable accounts through old-fashioned direct mail. If it wanted to maximize its stock price, the bank needed to start booking accounts online.
The trouble was that (at least at the time) issuing credit cards over the Internet was a horrible idea. The problem was fraud. With direct mail, the marketer decides who to contact and has various ways to check that a customer's card is in fact going to that customer. With a website, it was the potential customers who initiated contact, and a stunning number of those potential customers were identity thieves.
The Internet was an excellent tool for account management, but the big institutional investors were adamant; they wanted to see the bank booking accounts online. Faced with the choice between unhappy investors and a disastrous business move, the company came up with a truly ingenious solution: they added a feature that let people who had received a pre-approved credit card offer fill out the application online.
Just to be absolutely clear, this service was limited to people who had been solicited by the bank, and based on the response rates, the people who went online were basically the same people who would have applied anyway. From a net acquisitions standpoint, it had little or no impact.
From an investor relations standpoint, however, it accomplished a great deal. Everyone who filled out one of those applications and was approved* was counted as an online acquisition. Suddenly the bank was using this metric to bill itself as one of the leading Internet providers. This satisfied the investors (who had no idea how cosmetic the change was) and allowed the bank to continue to follow its highly profitable business plan (which was actually a great deal more sophisticated than the marketing techniques of many highly-touted Internet companies).
*'pre-approved' actually means 'almost pre-approved.'

Put bluntly, companies will often pursue strategies or introduce products that are profit neutral or worse because these strategies and products make the companies look diversified or forward thinking or poised to take advantage of some major opportunity. Investors reward these perceptions. With this fact in mind, you can make sense of all sorts of strange business decisions.
For example, Amazon is an innovative, well-run, forward-thinking company, but its P/E ratio (when it turns a profit) is often in the hundreds,* meaning the company has to be seen as being on the cusp of explosive growth. When you read about the company's online grocery service or its proposed drone-to-door deliveries** and you ask yourself how they can ever make a profit doing this on a large scale, the answer may be that they don't expect to. (A back-of-the-envelope sketch of what multiples like that imply follows the footnotes.)
* There may be some controversy over how P/E ratio is calculated for Amazon but that's a topic for another post and probably another blogger.
** I added "drone-to-door" to emphasize the distinction between that proposed technology and large cargo drones. The latter actually does make business sense but would face huge regulatory hurdles.
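To put rough numbers on the P/E point, here is a minimal sketch. The figures (a P/E of 300, a mature-company multiple of 15, a ten-year horizon) are assumptions chosen purely for illustration, not Amazon's actual numbers.

```python
# Back-of-envelope sketch with assumed figures (not Amazon's actuals):
# how fast must earnings grow for a stock at a P/E of 300 to "grow into"
# a mature-company multiple of 15 at an unchanged price?
pe_now, pe_mature, years = 300, 15, 10

required_growth = (pe_now / pe_mature) ** (1 / years) - 1
print(f"implied earnings growth: {required_growth:.0%} per year for {years} years")
# ~35% a year, compounded for a decade -- which is why being *seen* as
# on the cusp of explosive growth is worth so much to the stock price.
```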
Wednesday, March 4, 2015
Elegant theories and headless clowns -- more bad tech reporting from the New York Times
The previously mentioned Paul Krugman piece on opera singer Jenny Lind included a link to this NYT article "adapted from The Price of Everything: Solving the Mystery of Why We Pay What We Do, by Eduardo Porter, an editorial writer for The New York Times." Krugman was criticizing the reliance on simple economic stories that don't fit the facts. Porter was telling one.
Baseball aficionados might conclude that all of this points to some pernicious new trend in the market for top players. But this is not specific to baseball, or even to sport. Consider the market for pop music. In 1982, the top 1 percent of pop stars, in terms of pay, raked in 26 percent of concert ticket revenue. In 2003, that top percentage of stars — names like Justin Timberlake, Christina Aguilera or 50 Cent — was taking 56 percent of the concert pie.

Putting aside the fact that, as Krugman pointed out, we have examples of superstar musicians that predate both recording and broadcasting, this paragraph is still stunningly incomplete and comically ill-informed.
...
But broader forces are also at play. Nearly 30 years ago, Sherwin Rosen, an economist from the University of Chicago, proposed an elegant theory to explain the general pattern. In an article entitled “The Economics of Superstars,” he argued that technological changes would allow the best performers in a given field to serve a bigger market and thus reap a greater share of its revenue. But this would also reduce the spoils available to the less gifted in the business.
The reasoning fits smoothly into the income dynamics of the music industry, which has been shaken by many technological disruptions since the 1980s. First, MTV put music on television. Then Napster took it to the Internet. Apple allowed fans to buy single songs and take them with them. Each of these breakthroughs allowed the very top acts to reach a larger fan base, and thus command a larger audience and a bigger share of concert revenue.
The 1980s cutoff is arbitrary and misleading. The 1880s would make more sense, though it wasn't until the 1890s that things really took off, and it has been a fairly steady stream of technological innovations since then.
Here's a brief, roughly chronological view of some of the highlights:
Disc records
Amplification
Radio
Optical sound tracks on film
Stereo
FM
LPs
HiFi
Television (which brought with it everything from Hit Parade to American Bandstand, Sullivan, Midnight Special and countless shows like this)
Cassettes
CDs
Recordable CDs
Affordable digital audio editing
And then, of course, a whole family of internet-based innovations.
The past 125 years have been one long stream of "technological disruptions" for the music industry, but most of the innovation over the past couple of decades has mainly broadened the market by increasing selection and lowering production costs. In terms of "allow[ing] the very top acts to reach a larger fan base, and thus command a larger audience and a bigger share of concert revenue," at least for the North American and European audience, the top acts have been near saturation since the Sixties. (Check out the ratings for Elvis or the Beatles on Sullivan.)
By looking at the past thirty years of advances and ignoring the previous ninety, Porter gives us a blatant example of headless-clown causal reasoning: arguing that x explains the difference between A and B because x is present in A, while ignoring the fact that x is also present in B. Data journalism has fully embraced the idea that two numbers briefly moving in sync constitutes a causal argument.
The phrase "elegant theory" should have set off the red flags and warning lights. Elegance in these books pretty much always means "simplistic and unrealistic." The theories are aesthetically and emotionally appealing but they just barely fit the data in their examples and they usually fall apart completely when taken out on the road.
As previously mentioned, this goes back to what George Pólya called (in a quote I really need to dig up) thinking like a mathematician. Pólya suggested that the default setting of most people, when presented with a rule, is to look for examples, while the default setting of mathematicians and scientists is to look for exceptions. Mathematical ideas get a tremendous amount of press these days, but very few of the people covering them think like mathematicians.
Tuesday, March 3, 2015
Epidemiology Research
I am a big fan of Aaron Carroll, who often blogs at The Incidental Economist. However, in his latest New York Times column he says:
Most of the evidence for that recommendation has come from epidemiologic studies, which can be flawed.
Use of these types of studies happens far more often than we would like, leading to dietary guidelines that may not be based on the best available evidence. But last week, the government started to address that problem, proposing new guidelines that in some cases are more in line with evidence from randomized controlled trials, a more rigorous form of scientific research.

So when did randomized controlled trials stop being a part of epidemiology? It comes as news to me, someone who has done this type of work as an epidemiologist. In particular, there are threats to validity in trials as well, and a lot of smart causal inference research has looked at exactly that. Trials also have concerns about cross-over, attrition, and even valid design. These elements are all part of a typical epidemiological education and are an important part of public health practice. Even things like meta-analyses, in which trials (and now sometimes observational studies) are pooled, are typical parts of epidemiology.
It seems like he wants to equate observational research with epidemiology.
There is also a difference of estimands. The trials can only assess interventions in diet and how they perform. The real (true) intake of the participants is always approximated, except perhaps in a metabolic ward. Even doubly labeled water studies need to make assumptions.
The real bugbear of nutritional research in humans is measurement error. It is present in all studies (even trials, which are much less susceptible to bias than cohort or case-control studies). That is a lot of what we struggle with in this research area.
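To see why measurement error looms so large, consider a toy simulation (my own sketch, not anything from Carroll's column): classical error in self-reported intake attenuates the estimated diet-outcome association toward zero, even with an enormous sample.

```python
# Toy illustration of regression dilution: classical measurement error in
# a self-reported exposure biases the estimated slope toward zero.
# All numbers here are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_intake = rng.normal(0, 1, n)                  # true (unobservable) exposure
outcome = 0.5 * true_intake + rng.normal(0, 1, n)  # true slope = 0.5
reported = true_intake + rng.normal(0, 1, n)       # food-diary error, equal variance

slope_true = np.cov(true_intake, outcome)[0, 1] / np.var(true_intake)
slope_obs = np.cov(reported, outcome)[0, 1] / np.var(reported)
print(f"slope using true intake:     {slope_true:.2f}")  # ~0.50
print(f"slope using reported intake: {slope_obs:.2f}")   # ~0.25, halved
```

With error variance equal to the exposure variance, the estimated association is cut in half, and no amount of additional data fixes it.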
Now, it is true (and I agree with Aaron Carroll completely) that the trials tell us a lot of what we want to know. In a real sense we want to know how dietary interventions, as they will actually work out in reality, will change outcomes. So I share his concern that the trials seemed to be overlooked by people writing guidelines. Or, in other words, I think his main conclusion is quite sensible.
But let's not forget that observational research is critical to understanding the patterns of diet that generate the hypotheses and interventions that can actually be tested in trials. It also gives a lot of insight into how people consume food in the state of nature. I am never going to stop focusing on high quality data and the most rigorous possible study designs. But I think it would be wiser to represent the ecosystem more completely.
On the other hand, I am not an expert in health care communications, and it might be that such broad strokes are necessary when focusing on the general public. After all, improving public health is everyone's goal, and I am happy to take a few "hits" if that is the ultimate outcome. But I think it's also a challenge to think about how to make this type of research, and the nuances in it, better understood in general.
I have a lot of thinking to do.
Monday, March 2, 2015
Defining away concerns about charter school attrition
[New information has come in and we may be making some changes to this post.]
After what seems like a long time, we are back on the bad education statistics beat. Joseph kicked things off with this post discussing some recent charter school research, particularly this paper by Angrist et al. I followed by reposting a couple of earlier pieces on attrition.
If you didn't see them when they came out, I strongly recommend you take a minute and read those two reposts (Selection effects on steroids and Selection on Spinach*). This is a big, incredibly complex story and it makes much more sense if you come in with some context.
I also want to say explicitly that I am not singling out the Angrist paper for criticism. It is, if anything, above average for the field; that's the scary part. I have a number of concerns about this study, but they are all problems that you find in much, if not most, of the research on charter schools.
Let's start with attrition and this passage from the paper. The first half of the paragraph mostly seems to be pretty good, except for one red flag [emphasis added].
A second potential threat to the validity of lottery-based estimates is the differential loss to follow-up between winners and losers (also called differential attrition). Students in our study are lost to follow-up if they are missing the MCAS score data we use to measure charter achievement effects. This usually happens when a student moves out of state or to a private school. Attrition can bias lottery-based estimates if different types of students are more likely to leave the sample depending on lottery results.

There are a couple of fairly subtle points here (since I'm not an expert on this research, you might want to dig up a copy of the original paper -- I believe mine is behind a paywall -- and check my work). The first centers around the various reasons why a student might miss one or more standardized tests. The researchers do deserve some credit for mentioning the private school option, but they don't seem to quantify it, nor do they mention reasons like changing schools, which are much more likely than interstate moves to interact in a problematic way.
Easier to miss but far more important is the defining of attrition as leaving the data set rather than leaving the program. This isn't necessarily wrong but it's incomplete and worrisome in at least two ways: first because it differs from what we might call the standard definition. If you Google "charter school student attrition," you will generally find stories about students leaving charter schools and moving to other schools; second because the more common definition of attrition is far more likely to cause problems that can invalidate this study.
The rest of the paragraph is more troubling.
For instance, losers might be more likely to leave than winners, and highly motivated students might be more likely to opt for private school if they lose. We therefore compare the likelihood that winners and losers have an outcome test score in our data. There are no statistically significant differences in follow-up rates in the lottery sample schools, a result documented in Appendix Table A3. It therefore seems unlikely that differential attrition has an impact on the lottery-based results.

That "seems unlikely" is very hard to justify. Putting aside for a moment the issue of definitions, you can't control for differential attrition this way. It is entirely possible for two groups to have roughly the same level of attrition and yet have the selection effects going in opposite directions. Furthermore, the kind of highly selective attrition we're talking about here is very powerful (particularly if you throw in peer effects). Even if the selective attrition is limited to one group, it is entirely possible for a statistically insignificant difference in attrition rates to lead to a substantial difference in outcomes.
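A minimal simulation (mine, not anything from the paper) makes the point concrete: two groups with identical attrition rates can still produce a large spurious "effect" when one group's attrition is selective and the other's is random.

```python
# Equal attrition *rates*, selective attrition in one arm only.
# The true effect of winning the lottery is zero by construction.
import numpy as np

rng = np.random.default_rng(42)
n = 5_000
winners = rng.normal(0, 1, n)  # latent test scores, lottery winners
losers = rng.normal(0, 1, n)   # latent test scores, lottery losers

k = n // 10  # both arms lose 10% of students, so the rates "balance"
# Winners' attrition is selective: the weakest students leave the data.
winners_kept = np.sort(winners)[k:]
# Losers' attrition is random: ordinary moves out of state and the like.
losers_kept = rng.choice(losers, size=n - k, replace=False)

print(f"attrition: winners {k / n:.0%}, losers {k / n:.0%}")  # identical
print(f"estimated 'effect': {winners_kept.mean() - losers_kept.mean():.2f} SD")
# ~0.2 standard deviations of apparent gain, produced entirely by who
# left the sample, with zero real treatment effect.
```

A comparison of follow-up rates, like the one in the paper's appendix, would flag nothing here.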
(Perhaps it is just a coincidence, but as economists have played more and more the role of statisticians-at-large, we seem to be seeing more of these "don't worry, everything will balance out" assumptions.)
I want to be careful with the next part because as mentioned before, I'm not an expert in this field nor have I gone through the paper in great detail, but think about the following line from the paper:
"The effects of charter attendance are modeled as a function of years spent attending a charter school."
Keep in mind that we appear to have a lot of cases of charters (particularly those with the 'no-excuses' model) singling out students who are likely to do poorly and either forcing them out of the program or encouraging them to leave voluntarily. This probably means that a lot of students who would have been low-score/high-charter-years had they stayed where they were assigned by the lottery have been shifted to the low-score/low-charter-years category.
This isn't my biggest concern with this research -- it isn't even my second biggest -- but it is enough to potentially undermine the validity of the research.
Sunday, March 1, 2015
Nimoy tribute on MeTV
One of these days, I would love to spend some time discussing the many clever ideas of Weigel Broadcasting. (Keep in mind, Carl Reiner has called MeTV's promos "brilliant.") The company provides a fascinating case study of a well-run business.
Unfortunately, this post is time-sensitive, so I'll limit myself to a quick DVR alert.
I particularly recommend the Man From UNCLE episode, which also features William Shatner and Werner Klemperer and is simply a lot of fun.
MeTV also ran the Star Trek episode "Amok Time" last night and dusted off this irreverent but affectionate spot.
It's the kick that sells it.
Friday, February 27, 2015
I was about to slam Krugman for ignoring meaningful counter-examples...
I generally like Paul Krugman a great deal, partially because I have a high tolerance for quality snark and partially because... well, let's save that for later. Sometimes, though, for lack of a better description, he writes like an economist. By this I (somewhat unfairly) mean that he is occasionally too quick to embrace the sweeping and aesthetically pleasing theory that collapses under scrutiny. I have mainly noticed this tendency when he ventures out of econ or when he is summarizing the work of colleagues.
Recent case in point (or so I thought).
[Sherwin] Rosen’s argument, more than 30 years ago, was that technology was leading to sharp increases in inequality among performers of any kind, because the reach of talented individuals was vastly increased by mass media. Once upon a time, he said, all comedians had to entertain live audiences in the Borscht Belt; some drew bigger, better-paying crowds than others, but there were limits to the number of people one comic could reach, and hence limits on the disparity in comedian incomes. In modern times, however, an especially funny guy can reach millions on TV; an especially talented band can sell records around the world; hence the emergence of a skewed income distribution with huge rewards for a few.

There is undoubtedly some truth to this, but there are huge counter-examples as well, and substantial parts of the entertainment industry where the hypothesized relationships don't hold at all. I was all set to skewer Krugman over these problems when he had to go and say this:
But the more I look into this, the less I think this story works, at least for music.

He then goes on to show how the theory breaks down, particularly when placed in the context of the general economy.
Here's my favorite example.
But are the big incomes of music superstars something new, or at least a late 20th-century development? Well, let’s take an example where there are pretty good numbers: Jenny Lind, the famous soprano, who toured America from 1850 to 1852.
Tickets at Lind’s first concert sold for an average of about 6 dollars, which seems to have been more or less typical during the tour. Adjusting for inflation, that’s the equivalent of around $180 today, which isn’t too shabby (a lot of the indie concerts I go to are $15-20, although they also make money on beer). But you also want to bear in mind that real incomes and wages were much lower, so that these were actually huge ticket prices relative to typical incomes.
Overall, Lind was paid about $350,000 for 93 concerts, or a bit less than $4,000 a concert. If we adjust for the rise in GDP per capita since then, this was the equivalent of around $2 million a concert today. In other words, to a first approximation Jenny Lind = Taylor Swift. And this was in an era not only without recordings, but without amplification, so that the size of audiences was limited by the acoustics of the halls and the performer’s voice projection.
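Krugman's arithmetic is easy to sanity-check. In the rough script below, the per-concert figures come from his post, but the 1851 and present-day GDP-per-capita numbers (roughly $100 then, roughly $56,000 now) are my own round-number assumptions.

```python
# Rough check of the Jenny Lind = Taylor Swift arithmetic.
# GDP-per-capita figures are assumed round numbers, not Krugman's.
total_pay, concerts = 350_000, 93
per_concert_1851 = total_pay / concerts  # ~$3,800 in 1851 dollars

gdp_pc_ratio = 56_000 / 100              # assumed ~560x growth since 1851
print(f"per concert, 1851 dollars: ${per_concert_1851:,.0f}")
print(f"scaled by GDP per capita:  ${per_concert_1851 * gdp_pc_ratio:,.0f}")
# ~$2.1 million per concert, in line with his "around $2 million".
```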
Which brings me around to that other reason I like Krugman.

I believe it was in one of the plausible reasoning books that George Pólya observed that, as a general principle, if you gave most people a rule they would usually start trying to think of examples; if you gave a mathematician a rule, he or she would generally start trying to think of exceptions.
At the risk of making a sweeping statement as part of an attack on sweeping statements, one of my biggest problems with the economist-as-statistician-at-large trend (see Levitt et al.) is that so few of them think like one of Pólya's mathematicians. Krugman, for all his other flaws, is the kind of writer who tends to notice exceptions.
Thursday, February 26, 2015
Bondholders as Stakeholders
I agree with Dean Dad that this is a really major development -- the idea that bondholders will directly be able to act as stakeholders in higher education is a very big deal. Consider:
Which is where a financial issue becomes a governance issue. Suddenly, “shared” governance isn’t just shared with people on campus, or in the legislature. Now it’s shared with bondholders, and those bondholders have different priorities and varying degrees of patience. Unlike the other participants in shared governance, they may not have any particular obligation to the other parties at hand. It might not be worth their while to go for the quick kill, but that’s prudence, rather than deference. They aren’t big on deference, as a group.

It also means that institutions will become subject, even more so, to all of the economic pressures of the corporate world. One of the few things that made higher education uniquely valuable was the ability to resist institutional change. It seems paradoxical that this would be the case in an organization devoted to innovation, but higher education has always been focused on the long game and not the short game.
Now this one case could well be an outlier and this could all blow over. But it is worth thinking very carefully about how this will play out in an environment where schools are strapped for cash.
Wednesday, February 25, 2015
Quick Post: Financial Advice
Just a quick hit today, from Matt Yglesias, discussing whether investment advisors should be regulated to give helpful advice:
This of course raises the question of what it is that brokers who serve the middle class — people at mass market brokerages who pick up the phone when you dial the number on your company's 401(k) site — are doing to make money. The answer is that they are earning a living marketing financial products that are profitable to their employer and disguising the marketing as advice.

I think that this is entirely correct. The idea that this sort of regulation could eliminate or reduce the number of financial advisors is not surprising. It'd replace them with salespeople, which would limit the amount of potential confusion.
I also think that the quote Matt includes at the top of his post is telling in a very different dimension:
"While concerns about improper actions by investment advisors should certainly be addressed, an overly broad proposal could price professional financial advice beyond the reach of many modest income families."The theory behind things like the 401(k) is that people will be able to make better investment decisions than, for example, the state. Thus private savings would work better than, for example, social security. However, if the advice needed to be successful at saving using financial instruments is outside of the reach of the middle class (when regulated so that advisors need to act in the best interests of their clients) then it rather undermines this entire thesis.
It's been a quiet story, but the implications for policy are enormous.
CBS joins the terrestrial superstation ranks
[There's a Car 54 marathon coming up on Decades. Every episode of William Faulkner's favorite show starting March 3rd. Just wanted to get that out of the way.]
We haven't hit this one for a while so perhaps a bit of review is in order.
Back in 2009, the US finally caught up with the rest of the world and completed the switch to digital broadcast television. One of the many largely unreported results was that, since over-the-air broadcasters could carry multiple channels on the same signal, the satellite superstation model could be extended to terrestrial television.
At first, the field was limited to one well-respected but minor regional player called Weigel Broadcasting, which in rapid succession launched the TBS-style movie channel ThisTV and the TVLand style retro-channel MeTV. Weigel had what appeared to be no external marketing budget, instead relying on walk-ins and word-of-mouth (their internal marketing was a different story with no less an authority than Carl Reiner calling their station promos 'brilliant').
Terrestrial superstations received almost no coverage outside of trade publications and a few industry-heavy towns like Chicago. The lack of coverage was perhaps not surprising given the absence of promotion and the downscale demographics of the market, but it raised a potentially troubling issue. The broadcast television industry occupies a valuable piece of virtual real estate. The telecom industry was lobbying hard for a chance to grab that portion of the spectrum. The national press (particularly in the Northeast) was discussing the possibility of shutting down terrestrial TV while being completely unaware of what was going on in the medium.
The debate over what to do with the spectrum quickly came down to two narratives. The first was that the over-the-air market was tiny and rapidly shrinking and that its resources could be better used elsewhere. This argument, supported by Nielsen data, had lots of powerful friends and was widely promoted. The counterargument, supported by the market research firm GfK, was that the market had grown sharply since the conversion to digital. Under this scenario, selling off the television spectrum would kill a fledgling industry, reduce media diversification and remove a service that greatly improves the quality of life for the bottom quartile in order to slightly improve things for the top. Rajiv Sethi may have been the first major blogger to take the OTA side. Our blog also jumped in early in the debate.
(You can find a summary of the argument here. Make sure to check the comment section.)
Given the huge discrepancy between the Nielsen and GfK numbers, I suggested that we should watch what companies with high-quality proprietary data (particularly ad revenue) were doing. Two early indicators were NBC's terrestrial superstation COZI and the Fox/Weigel joint venture Movies!.
The comically inept COZI was of interest primarily because it is part of the same corporate family as the cable company Comcast. Movies! was far more notable, both for quality and innovation and for the business arrangement that spawned it.
The Fox/Weigel deal was really something unusual, perhaps even at the time unique (more on that in a minute). At first glance, Weigel seemed to bring nothing to the table. Fox had the money, the stations, the library and at least as much experience putting together channels. If Fox were treating this as just another cable station, the deal would make no sense, but Movies! launched as a terrestrial superstation, and in that area Weigel had an unmatched track record.
Since then the number of terrestrial superstations has continued to grow. In addition to numerous smaller players, major studios like Sony and MGM entered the market, and now one of the biggest, smartest and most cautious of the majors has decided to give the model a try.
NEW YORK and CHICAGO – The CBS Television Stations group, a division of CBS Corporation (NYSE: CBS.A and CBS), and Weigel Broadcasting today announced plans to launch DECADES, a new national entertainment programming service for distribution across local television stations’ digital subchannels – broadcast channels that utilize a local station’s available spectrum to provide a companion to that station’s primary channel. For example, in the New York market, WCBS-TV will continue to be available digitally as Channel 2.1 and DECADES will be available as Channel 2.2. In addition to being available as an over-the-air broadcast channel, DECADES will appear on numerous local cable systems and other multichannel video programming distribution services along with the stations’ primary channels.
Utilizing a library of more than 100 classic television series, including select titles from the CBS library such as I LOVE LUCY from the 1950s, STAR TREK from the 1960s, HAPPY DAYS from the 1970s and CHEERS from the 1980s, as well as a wide selection of theatrical and made-for-television movies and footage of historical news events from the archives of CBS News and ENTERTAINMENT TONIGHT, DECADES will provide viewers with a new way to experience our shared historical and cultural past.
As the ultimate TV time machine, DECADES will differentiate itself from other subchannel programming services by varying the classic series and movies that appear on the network every day.
“DECADES is the most ambitious and creative subchannel programming service that has ever been created,” said Peter Dunn, President, CBS Television Stations. “We are thrilled to partner with Weigel Broadcasting, the leaders in this space, to make smart use of our stations’ spectrums and our companies’ considerable programming assets. This service will be a tremendous new business for CBS and all of the other stations across the country that participate, regardless of their primary network affiliation.”

Even more than the Fox Movies! deal, Decades shows how much Weigel has come to be recognized as the dominant player in the terrestrial television market. As with the earlier collaboration, CBS would seem to be the one bringing everything to the table: the name, the money, the stations, the library, even expertise (keep in mind that in an earlier incarnation, CBS/Viacom* virtually invented the retro genre in the Eighties with Nick-at-Nite, followed by TVLand).
...
DECADES will take viewers into a daily time capsule presentation of entertainment, popular culture and news. The service will feature DECADES RETROSPECTICAL (SM), a daily one-hour program that will be produced around the news events and cultural touchstones of a specific day, week or other time frame or theme. The TV series and movies presented each day will reflect that day’s theme or commemorative event.
For example, DECADES will look back at classic series such as HAPPY DAYS and its “jump the shark” episode, explain its historical significance and then broadcast that episode. Viewers will also be taken back in time to rediscover events that shaped our world, such as the assassination of President John F. Kennedy, Neil Armstrong walking on the moon, the Beatles’ U.S. debut on THE ED SULLIVAN SHOW and the birth of software and technology companies like Microsoft and Apple. DECADES will connect these events to what people were watching on television, seeing at the movies and experiencing as a nation.
The decision not only to start a MeTV style station but actually to bring in a competitor to run it is enormously telling. First, as an indication of Weigel's standing and second, as an illustration of how much the terrestrial subchannel market is seen as both distinct and important.
We can probably never say whether Nielsen or GfK got it right, but we can say that the companies with the best proprietary data seem to see a future in rabbit ears.
* CBS and Viacom are not exactly the same company these days, but they are basically owned by the same people.
Tuesday, February 24, 2015
Skimming the cream -- a history lesson from Charles Pierce
This could be the starting point for all sorts of interesting discussions, from the role of government-sponsored research to the profound and ubiquitous technological advances that clustered around the end of the Nineteenth and the beginning of the Twentieth Century.

For liberal political blogger Charles Pierce (the source of the following passage), it's another reason to object to Scott Walker's approach to higher education.
Up until the 1890's, dairy farming was a sucker's game. Milk was sold to the factories by volume; farmers could cheat by skimming the cream, or by watering down the product. Honest dairy farmers producing good milk got cheated pretty badly in this system. In 1890, however, a man named Stephen Babcock developed a simple test by which, through the use of sulfuric acid and a centrifuge, any farmer could measure the butterfat content of his milk. This caused such a boom in the dairy industry that Wisconsin did indeed become America's Dairyland. In collaboration with another scientist, Babcock also developed a method for cold-curing cheese that helped the state become so prolific at producing cheesy comestibles that people now wear mock-ups on their heads at football games. He also did some revolutionary work with cattle feed that became the basis for the development of the concept of vitamins.
Babcock did all of this because he worked for the Wisconsin Agricultural Experiment Station, which had been founded in 1883 as part of the University of Wisconsin's land-grant mission under the Morrill Act. This was a precursor to the agricultural extension services that were developed at other land-grant institutions after the passage of the Smith-Lever Act in 1914. The land-grant mission, which was to provide an education that would be useful to the public at large, dovetailed perfectly with what became known as The Wisconsin Idea -- that the boundaries of the university are the boundaries of the state, an idea that Scott Walker has dedicated himself to tossing into the wood chipper. And thus it is that butterfat undermines the very raison d'etre of Scott Walker's entire political career and the very basis of his political philosophy. QED.
Also, moo.
Monday, February 23, 2015
Driverless cars may actually be getting closer
This announcement has me intrigued.
Today, Volvo announced a real, on-the-streets test of 100 of its self-driving cars — a first in the world, and one that will put regular owners in the seats of what it says are production-ready autonomous vehicles, by 2017.
Doing so requires far more than the 28 cameras, sensors and lasers Volvo says its system uses, along with a complex set of software rules, to tackle nearly 100 percent of all driving situations. It also required the approval of lawmakers in Sweden and Gothenburg, the city which will allow owners of these Volvos to legally cruise the streets while reading or chatting away on their phones from behind the wheel.
Making it possible for computers to understand everyday driving situations requires multiple types of radars, several cameras, a multiple-beam laser scanner in the front bumper and 12 ultrasonic sensors — the kind normally used to tell you if you're about to back into a pole. All of these are permanently linked to a special high-definition 3D map, refined GPS sensors and the local traffic control office — which can not only warn of jams, but command inattentive drivers to shut off their autopilots and drive themselves if necessary. And all of the systems have fail-safe modes and backups in case something goes wrong.

It is always risky to say "this is the right way to do this." With that in mind, the right way to talk about technology pretty much always revolves around the following:
Functionality;
Costs;
Implementation and infrastructure;
And the new technology's place in the existing technology landscape.
Most technology reporters (and I mean the vast majority) don't get these fundamental principles, which leads them, more often than not, to get their stories backwards. In this case, the reporter, Justin Hyde, takes the attitude "wow, it has a special high-definition 3-D map" when the appropriate response would've been "damn, it still needs a special high-definition 3-D map." Requiring special infrastructure, even really cool special infrastructure, is a bug, not a feature.
That said, this announcement does make me a bit more optimistic about the technology, at least in part because it didn't come from Google.
Google has a lot of reasons to want to be seen as a diversified, broadly innovative technology company, rather than as a very good one-trick pony cashing in on a monopoly (possibly two monopolies depending on how you want to count YouTube). A shiny reputation helps to keep stock prices high and regulators at bay.
Google has always been good at branding and they do have an extraordinary track record of innovation, but their really impressive advances (natural language processing, mapping, data mining) are closely related to their core business. The further away you move from search engines, the bigger the hype-to-substance ratio gets. This is nowhere more true than with driverless cars. The last round of publicity showed that the company could get as much buzz out of a cosmetic change (removing the steering wheel years after having demonstrated hands-free driving) as it did with the genuine breakthroughs of its earlier model.
Volvo's core competency is making, not only cars, but very safe cars. They have tons of relevant experience and engineering talent and a much larger stake in getting a viable product on the road. What's more, they seem more serious about getting the legal barriers out of the way. I still think that having a fully autonomous car generally available by the end of the decade is a long shot, but those odds might be getting a little better.
Friday, February 20, 2015
Checking in with MovieBob
I've been working on some video projects lately and putting quite a bit of thought into what makes a video podcast good. This has given me another excuse to spend too much time going through Bob Chipman videos. Chipman, a.k.a. MovieBob, has the obsessive love for and knowledge of pop culture that marks the ultimate nerd, but unlike, say, virtually all of the writers for the Onion's A.V. Club, he somehow has managed to maintain a sense of perspective on the subject accompanied by a refreshing amount of common sense.
In addition to keeping his sense of perspective about the fan-boy fodder, Chipman also does the same with the business of entertainment. He understands how things like intellectual property and antitrust laws...
marketing seasons...
bad accounting...
and technical limitations can affect our culture in subtle and interesting ways.
Chipman also displays that same sense of perspective and common sense when discussing more controversial issues.
Thursday, February 19, 2015
I don't think you want to go with the "handful" defense
Before we go on, a quick caveat. There is tremendous variation in charter school models and philosophies. That's a big part of the story below and the reporter does a poor job addressing it. I can't say for certain, but I suspect that most of the worst offenders in the story follow the popular "no excuses" model.
From the New York Times:
The Advocates for Children report cites complaints from parents who said their children had been suspended from charter schools over minor offenses such as wearing the wrong shoes or laughing while serving detention. Ultimately, though, the group said the main issue was legal.

I know I've made this point before but it bears repeating: excessively harsh disciplinary policies can make incompetent administrators look good while taking a horrible toll on kids. By locking out or chasing away the kids they can't handle (who also tend to be the kids who most need our help), administrators can pump up virtually all of a school's metrics.
Half of the policies examined by Advocates for Children let charter schools suspend or expel students for being late or cutting class — punishments the group said violated state law. At three dozen schools, there were no special rules covering the suspension or expulsion of children with disabilities, which the group said violated federal law. And in 25 instances, charter schools could suspend students for long periods without a hearing, which the group said violated the United States and New York State Constitutions, as well as state law.
James D. Merriman, chief executive of the New York City Charter School Center, an advocacy group for charter schools, questioned how frequently the incidents cited by Advocates for Children occur.
“No one can disagree that those policies that do not fully meet applicable law should be amended,” he said in an email. “But it is tremendously unfair to suggest, as A.F.C. does, that a handful of one-sided anecdotes compiled over a long time are any evidence that charter schools are wholesale violating civil rights laws.”
Fortunately, in my experience, most administrators are too ethical to rely on these methods. Unfortunately, we have started setting up a system of incentives that encourage unethical behavior and if we continue, that balance will shift.