Wednesday, November 26, 2014

It's not a question of being too positive or negative but of being wrong in both directions

Given some recent discussions, I perhaps ought to go back and clarify my position on Google's driverless cars. The Google part is important. Lots of companies, particularly big automakers like GM and Nissan, are seriously pursuing this research. However, when you read a news account about autonomous vehicles, most of the time it's a story about Google, which is troublesome for at least two reasons: first, because there are some big concerns that are particularly applicable to Google's approach; and second, because Google has a way of playing to the worst tendencies in tech reporters. The result is a standard narrative that manages to get both the pros and the cons wrong.

The official account goes something like this: from a technological standpoint, the Google driverless car is virtually good-to-go. There is every reason to expect you will be able to buy one in a couple of years; the only clouds on this horizon are concerns with safety and, more importantly, regulation. This version is extremely popular. It is regularly reported in the New York Times and the Wall Street Journal. It also gets both the pros and cons wrong.

Though it's too early to say for certain, safety appears to be the one non-issue for this technology. All transportation carries an element of risk, but based on a pretty good sample of road tests, that risk appears to be considerably smaller for Google's self-driving cars than it is for other ways of getting around.

Regulation is only a slightly greater concern. We have had well over a century to work out the problems associated with insuring and regulating a wide variety of high-speed and potentially very dangerous vehicles. The idea that a viable, highly anticipated, and, relatively speaking, very safe technology will be kept off the market due to legal issues -- not just in the United States but in Europe and Asia -- is simply not believable.

So, if the concerns are not safety and regulation, what are they?

As previously discussed many times, much of the more gee-whiz reporting has been based on the idea that autonomous cars will quickly reach 100% adoption. As unrealistic as that assumption is, it pales in comparison to another jump tech reporters seem to have made.

Based on this Slate article, it appears that Google's approach to fully autonomous cars requires a specialized and highly expensive data infrastructure, specifically a collection of incredibly detailed maps. In order to compile this level of data, dedicated vehicles with human drivers have to travel the roads in question multiple times. What's more, the process needs to be constantly repeated to keep the data up-to-date. Road construction, new houses, all sorts of things need to be taken into account by the system.

The primary advantage of automobiles over other forms of transportation is their flexibility. A car can go pretty much anywhere you want. You can even decide on a new destination while traveling. In order to be viable, new automotive technology needs to keep that flexibility. Apparently Google's current approach means it would take a prohibitive amount of time and money to map out more than a tiny fraction of the nation's roads. Put bluntly, if this is true, given these infrastructure costs and the wide array of transportation alternatives, Google's autonomous car will never be viable in its present form.

Caveats are important here. It is entirely possible that Google is on the verge of a breakthrough that will allow its cars to operate off of existing Google maps. That would come close to making the technology viable. For all I know, the company could be preparing a press release as I write this. If this is the case, I'll pen a sheepish retraction then call friends and family members who don't drive and share the good news.

For now, though, this appears to be a very difficult technical nut to crack and, rather than showing signs of progress, Google seems to be trying to divert attention from the problem. You'll notice that their latest highly touted 'advance' was actually a step down in this respect, going from actual road tests to far less demanding closed tracks.


Tuesday, November 25, 2014

New side project up at Amazon.

A few years ago, I developed this checkers variant based on non-transitive relationships to help my students get some experience working with the concept. I'll talk more about the game and its development later, but in the meantime, here's a link to the webstore at Amazon.




For another example of non-transitive play, check this out. If nothing else, it will give you something to talk about if you ever meet Warren Buffett.
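For readers who haven't run into the idea, here is a minimal sketch of non-transitivity using a standard textbook set of dice (the dice below are illustrative only; they have nothing to do with the checkers game above or with any particular telling of the Buffett story):

```python
# Three dice where A tends to beat B, B tends to beat C, and yet C tends
# to beat A. The faces are a standard textbook example of non-transitive dice.
from itertools import product

dice = {
    "A": [2, 2, 4, 4, 9, 9],
    "B": [1, 1, 6, 6, 8, 8],
    "C": [3, 3, 5, 5, 7, 7],
}

def win_prob(x, y):
    """Exact probability that die x rolls a higher number than die y."""
    wins = sum(a > b for a, b in product(dice[x], dice[y]))
    return wins / (len(dice[x]) * len(dice[y]))

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"P({x} beats {y}) = {win_prob(x, y):.3f}")
# Each matchup favors the first die with probability 5/9, so there is no
# "best" die -- whoever chooses second can always pick a favorable matchup.
```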

This isn't about Ferguson, but it may be the most relevant thing you'll read on the subject.

This is an excellent time to go back and reread "Against Law, For Order" by Mike Konczal. Since this essay appeared in April of 2012, we've seen the acquittal of George Zimmerman, the grand jury ruling on Darren Wilson and any number of additional incidents that support Konczal's troubling but convincing argument.

More bad behavior from your friendly neighborhood cable company

There's another fight going on between Viacom and a cable provider. Suddenlink, a major provider for much of the middle of the country, has recently dropped all of Viacom's basic cable channels. This includes big names such as MTV, Nickelodeon, TV Land, and Comedy Central. In their place, Suddenlink has scheduled some decidedly second-tier alternatives. Fans of Jon Stewart now have to make do with Jon Lovitz.

The story hasn't gotten a lot of attention (as often happens when you're on Central Time), but it's worth digging into if you're trying to keep up with the media landscape. It also ties into our ongoing rabbit ears thread.

Having lost most of the cable channels he regularly watched, a friend of mine recently called up Suddenlink and tried to downgrade his service. If you've been following the news you probably know what's coming next. He was immediately referred to a "specialist" who spent the next half hour badgering him ("why don't you want to get the best deal?").

At one point my friend (who has suffered through many of my tirades on the subject) said he was thinking about going over to an antenna. That was largely a bluff -- between terrain and distance to a broadcast tower, he probably wouldn't get very good reception unless he put up quite an antenna -- but the response was interesting. The specialist told him that going to over-the-air television would mean giving up HD.

Like I said, my friend had heard more on this topic than a reasonable person would care to, so he knew this simply was not true -- not only can you get HD over the air; the quality is often better than what you get from cable. He challenged the company rep on this point and got him to back down to an "I'm pretty sure you can't get HD." My friend still didn't get them to accept the downgrade, but he did get a rate reduction, which counts as a victory when dealing with a cable company.

This ties into perhaps the most important point in these terrestrial TV stories. Competition is good, but it's not good enough by itself. When American television joined the rest of the world and went digital, the market should have become more competitive, but years after the conversion, cable and satellite companies are still able to act like near-monopolies, in large part because of asymmetry of information.

I've argued that digital over-the-air television is a great technology that more people ought to be using, but it may turn out to have the most impact on those who stick with cable. Dealing with Comcast et al. will be much easier when the companies start facing more market pressure.

Monday, November 24, 2014

Megan and Mark are in synch

This is Joseph.

As a follow-up to Mark's post, Megan McArdle has this great point:
If the left-wing MSM is indeed biased against you, then your strategy needs to take that into account. Do you have a plan for compelling the left-wing MSM to treat you fairly? If not, then you should not settle upon a course of action that would work, if only this fact were not true. You don't launch your cavalry regiment against a Panzer battalion on the grounds that you could beat the Germans if only they didn't have all those darned tanks.
In other words, at some point an optimal strategy involves accepting the world as it is and not complaining that it isn't the best of all possible worlds.   The ability to develop realistic strategies in the face of the "facts on the ground" is a key skill in many contexts: political, military, and even business. 

Friday, November 21, 2014

Thoughts on the coming storm

From a text exchange I had on election night
The press has gone from
"The Republicans are the responsible party"
To
"Both parties are irresponsible"
To
"The Republicans will start being responsible after they win"
To
Whatever they are going to say after the impeachment.
[voice recognition errors corrected.]

This must be an interesting time to be a political scientist or anyone studying the way institutions form, function and fail.

The Republican party seems locked into a course that defies conventional political explanation. I don't see any way that this fight is a winning move for the GOP. I am inclined to agree with Josh Marshall's analysis:
It all adds up to an intense and likely toxic campaign fracas in which a lot of people will have a unique and intense motivation to vote. That will apply to people on both sides of course. But the anti-immigration voters vote consistently almost every cycle. And as intense as your animus is toward undocumented immigrants, it's hard for it to compare to the motivation of voters who directly know someone who will be affected. And that latter group has far more 'drop-off' or occasional voters.

This isn't getting mentioned a lot right now. But behind the headlines I suspect it's one of the key reasons Republican elites are upset that this might happen: because it's an electoral grenade dropped right into the heart of the 2016 campaign.
Of course, the standard line at this point is to say something about the leaders of the party losing control of the base, but I don't buy that -- at least not in the way it is generally framed. For one thing, the underlying political philosophy of the base and the leaders doesn't seem that different, and where there are differences, they seem to mostly come from the base actually believing the message crafted by the party elites.

Keeping in mind that they decisively won the last election, the Republicans still have big problems with information and coordination. That makes it more difficult for the party to make decisive, rational moves that promote its self-interest and instead leaves it inclined to seek catharsis. Shutdowns and impeachment are about emotional release. The challenge for the party leadership is convincing their followers that there's something more important than that.

Thursday, November 20, 2014

Other than stem cells...

What are the most notable examples of regulation holding back new technology? There has been a lot of talk recently about encouraging innovation through deregulation zones, the idea being that, for example, a city with no regulation on drones will spur a great deal of research into the technology. On one level, this does make a certain amount of sense. The easier it is to do research, the more research we expect to see.

That said, other than studies with human subjects (where the rules really can have a dampening effect), I can't think of an area where regulations are clearly having a big negative impact on research. When a technology is promising and well-funded (as with drones), companies don't seem to have that much trouble working with the rules.

I assume I'm missing some obvious example. Any ideas?

Wednesday, November 19, 2014

"Duct tape and string"

Or, as we used to say back in the hills, spit and baling wire.

From James Kwak's recent piece on United Airlines:
There are two lessons to be drawn from these entirely unexceptional examples of air travel gone wrong. One is that United’s computer systems don’t work — for the same reasons that many large companies’ core systems don’t work. The overnight unbooking and rebooking was probably a computer error, and in any case United had no way of rolling back all the automated changes to its reservation system. The automated cancellation of my return flight was either an incompetent customer service representative who didn’t preserve my return reservation when I asked her to, or a computer system that didn’t give her any way of preventing the cancellation. I was downgraded from first class because some marketing genius at United decided to add a new upsell feature to the website — but no one bothered to extend the legacy system they use behind the scenes to capture the new data from the ticket sales process. (This is a common problem with enterprise software these days: companies build new features in their websites but can’t integrate those features properly with their core processing systems.) All of this just reinforces a point I’ve made several times before: the computer systems holding together the world’s largest companies are held together by duct tape and string.
I've got at least a couple of posts I'd like to write on how bad this side of the business often is. Having seen some of these systems up close, I'm surprised things don't crash and burn more often.

Tuesday, November 18, 2014

A subtle issue with standardized tests

This is Joseph.

Dean Dad has a nice piece on assessment.  A part of it that jumped out was:

Johnson’s argument is subtle enough that most commenters seemed to miss it.  In a nutshell, he argues that subjecting existing instruction to the assessment cycle will, by design, change the instruction itself.  Much of the faculty resistance to assessment comes from a sense of threatened autonomy.  Johnson addresses political science specifically, noting that it’s particularly difficult to come up with content-neutral measures in a field without much internal consensus, and with factions that barely speak to each other. 

He’s right, though it may be easier to grasp the point when applied to, say, history.  There’s no single “Intro to History” that most would agree on; each class is the history of something.  The ‘something’ could be a country, a region, a technology, an idea, an art form, or any number of other things, but it has to be something specific.  Judging a historian of China on her knowledge of colonial America would be easy enough, but wouldn’t tell you much of value.  If a history department finds itself judged on “scores” based on a test of the history of colonial America, then it can either resign itself to lousy scores or teach to the test.
This means that the design of standardized tests is crucially important if students and/or teachers are going to be evaluated on them.  For some subjects, e.g. basic math, this may be less controversial, but it still involves making choices about what the emphasis will be.  A perfect test is like a perfect teacher -- neither beast really exists in nature.

But this is critically important for high stakes tests, because what is taught cannot help but be influenced by the test.  If history questions on the high stakes tests are all focused on colonial America, guess what the history section of classes will look like.  In some sense that is okay, insofar as we have a broad consensus as to what should be taught.  But it does make the content of the tests a matter of public policy and concern as much as any other aspect of school instruction.

Monday, November 17, 2014

James Boyle's devastating takedown of Robert Bork

What makes this piece so effective is Boyle's refusal to dismiss Bork as a crank or a charlatan. Boyle instead insists on treating Bork as an important figure in conservative thought. It would have been easy to lapse into mockery, but by starting from the explicit assumption that Bork's ideas are worth taking seriously, Boyle is left with an obligation to examine them in painful detail.

From A Process of Denial: Bork and Post-Modern Conservatism

by James Boyle

With this range of defects it is hardly surprising that Mr. Bork chose to shift his ground somewhat. In The Tempting of America he argues that the understanding of the public at the time the Constitution was ratified, rather than the intent of its original authors, should determine its meaning. There is obviously a price to pay for making this change. The best thing about the intent of the framers was that it appealed to the unreflective idea that a document must always mean exactly what its authors meant it to -- no more and no less. The practitioners of original intent can claim with superficial plausibility that their method is the one "natural" way to read the text. They can even claim that we often (though not always) read other legal documents this way -- trying to determine what Congress, or the judge, or the administrator meant by this word or that phrase. Original understanding has less unreflective appeal. Precisely because it is a more sophisticated notion of interpretation, it sacrifices the idea that this is the only credible way to read a text (what about what the words mean out of context, or what the author meant?) the appeal to everyday practice and perhaps even the claim that this is the way we read other legal documents.

This problem is a particularly acute one for Mr. Bork. Throughout The Tempting Of America he explicitly connects his struggles to those going on within other disciplines. As well he might. Most disciplines seem to have rejected the idea that the text can only be read to mean what the author intended. Literary critics and historians have added other methods of reading. How would the text have been understood by its audience at the moment that it was written? How would an audience today understand it? Can the text be illuminated by evidence of the author's subconscious desires or conflicts? How does the text read if we take it as an a-contextual attempt at philosophical argument?

These other methods are referred to collectively (and a little pretentiously) as "the reader's revolution against the author." They represent everything that Mr. Bork finds most reprehensible in today's scholarship. He quotes approvingly a letter from intellectual historian, Gertrude Himmelfarb attacking this impermissible openness to other methods of interpretation. "Any methodology becomes permissible (except of course, the traditional one), and any reading of the texts becomes legitimate (except, of course, that of the author)." (p. 137) If Mr. Bork was still claiming that constitution meant what its authors intended, this would be all well and good. But the trouble with Mr. Bork's revamped and sophisticated version of originalism is that it can no longer appeal to the romantic idea that the imperial will of the author must govern the text. "The search is not for a subjective intention." (p. 144) Instead, he has handed over interpretive competence to the historically located readers of the constitution. For reasons we can only speculate about, he has shifted ultimate interpretive authority from the Framers of the Constitution to the "public of that time." Mr. Bork has joined the reader's revolution.

As I pointed out before, this switch is a costly one for Mr. Bork. To the initial cost of having been seen to adopt the very same methodology so often criticised by conservatives in other academic disciplines, one also has to add the cost of having been seen to change from one dogmatically asserted position to another. Mr. Bork obviously feels this one particularly strongly because he denies having done it. Though he described himself during the hearings as "a judge with an original intent philosophy"(61) and argued in print that "original intent is the only legitimate basis for constitutional decision-making",(62) he says in The Tempting of America that "[n]o even moderately sophisticated originalist" believes the Constitution should be governed by "the subjective intent of the Framers." (p.218) He suggests that no-one could ever have held such a belief, because it would necessarily mean that the secretly held beliefs of the Framers could change the meaning of the document. Thus all (moderately sophisticated) originalists must have believed in original understanding all along. This seems like a red herring. There are many varieties of intentionalism and many varieties of "reader-controlled" interpretation. But allowing the intention of the author to control interpretation is fairly obviously not the same thing as allowing the understanding of the reader to control. Expanding the definition of intentionalism does not turn it into the philosophy of original understanding. The `intention of the Framers and ratifiers' is not the same as `the understanding of the American people at the time.' Mr. Bork seems to find it hard to admit the change.

The most interesting example of Mr. Bork's scholarly method is the point in The Tempting of America he takes sections from his 1986 article The Constitution, Original Intent, and Economic Rights(63) which, as one might suspect from the title, defends original intent, and uses those sections to defend original understanding. At first glance, it appears that he does this by finding the words "original intent" wherever they appear in the article, and simply replacing them by "original understanding." Chunks of text which had reproved Paul Brest with failing to understand that the original intent determines the meaning of the 14th Amendment, are edited, expanded upon, a new philosophy of interpretation inserted. With a quick change of key words they can become reproofs to Paul Brest for failing to understand that original understanding determines the meaning of the 14th Amendment.(64) Even the same counterarguments can be pressed into service. In 1986 for example, "[t]here is one objection to intentionalism that is particularly tiresome. Whenever I speak on the subject someone invariably asks: "But why should we be ruled by men long dead?"(65) In 1990, Mr. Bork finds that "[q]uite often, when I speak at a law school on the necessity of adhering to the original understanding, a student will ask, "But why should we be ruled by men who are long dead." (170) In the era of the word processor, this kind of "search and replace" jurisprudence has its attractions. Still, both the interpretive criteria and the identity of the `dead men' has changed, and Mr. Bork seems uneasy with that fact.(66)


Saturday, November 15, 2014

One of these days I'm going to do a post on genres as fitness landscapes

In the meantime, here's a completely unexpected but surprisingly effective reworking.





Friday, November 14, 2014

What do stock buybacks actually do?

Barry Ritholtz passes along an interesting thought from Aswath Damodaran, a professor at New York University.
Before a company calls for a stock buyback, it has risky assets (its operating business) and riskless assets (cash). After the buyback, the company has less of its riskless asset (cash) but also has fewer outstanding shares.

Hence, we end up with a somewhat riskier stock. Damodaran argues, rationally, that a buyback by an all-equity funded company should be a value-neutral transaction. In other cases, the shift should be reflected by assigning the company a somewhat lower price-earnings ratio.
I don't know enough to comment intelligently on this claim, but it does seem to indicate that, as with so many other stories, the impact of buybacks is considerably more complicated than the experts on CNBC would have you believe.
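For what it's worth, here is a toy version of the arithmetic behind the claim as I understand it. All of the numbers below are made up purely for illustration:

```python
# A sketch of the value-neutrality argument for an all-equity buyback.
# Toy numbers only -- not a model of any real company.
operating_value = 900.0   # risky asset: the operating business
cash = 100.0              # riskless asset
earnings = 90.0           # operating earnings (interest on cash ignored)
shares = 100.0

price_before = (operating_value + cash) / shares      # 10.00 per share
pe_before = price_before / (earnings / shares)        # about 11.1

# Spend all the cash repurchasing shares at the fair price.
shares_after = shares - cash / price_before           # 90 shares remain
price_after = operating_value / shares_after          # still 10.00 per share
pe_after = price_after / (earnings / shares_after)    # 10.0

print(price_before, price_after)   # per-share value unchanged...
print(pe_before, pe_after)         # ...but each share is now a pure claim on
                                   # the risky business, so the P/E drifts down
```

The point of the sketch is simply that, at a fair price, the cash leaving the firm exactly offsets the shares retired; what changes is the risk mix behind each remaining share.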



Thursday, November 13, 2014

Fixing Common Core (or at least a small part thereof) over at the teaching blog

With a nod to David Coleman, last week I did a post called "Deconstructing Common Core" focusing on the homework problems going out under the Common Core banner.


[The generally unproductive question of what is and isn't Common Core comes up frequently. Hopefully, having an actual copyright notice will keep us from wasting any more time on the subject.]

I've become increasingly concerned about the direction of mathematics education. Here's a big part of the reason:
I volunteer a couple of times a week to help a group that tutors kids from urban schools. My role is designated math guy. I go from table to table helping kids with the more challenging homework problems.

Recently, I have noticed a pattern in helping with Common Core problems. First I explained them to the students, then I explained them to the tutors.

That may be the most noticeable difference between the mathematics of Common Core and the new math of the 60s. In the summer of love, an advanced degree in mathematics or engineering was sufficient to understand an elementary school student's homework. These days, the tutors with math backgrounds often find themselves more confused than their less analytic counterparts since what they know about solving the problem seems to have nothing to do with what the assignment asks for.

To follow a Common Core worksheet, you really need to have a little knowledge of the underlying pedagogical theories. Unfortunately, if you have more than a little knowledge, you'll find these worksheets extraordinarily annoying because, to put it bluntly, much of what you see was produced by people who had a very weak grasp of the underlying concepts.
I thought it might be of interest to walk through the process of 'fixing' these problems, showing how, with a few changes, these confusing and ineffective problems could be greatly improved.

I used an example of a Common Core problem that went viral a while back.


Here is my proposed fix (which was anticipated by at least one of our regulars).

James Kwak does a valuable service...

...and states the obvious.
The value of a company is supposed to be the discounted present value of its expected future cash flows. Actually, the value of a company is the discounted present value of its expected future cash flows. So it follows that a breakup should only create value for shareholders if it increases future cash flows or lowers the discount rate. Most breakups don’t obviously do either.
This may seem to border on tautology -- "of course, that's the value of a company" -- but if you follow the business page regularly you'll routinely run into strategies and initiatives that make no sense given this definition. Sometimes these decisions are justified in terms of stock price. Other times, flavor-of-the-day notions like disruption are invoked. Occasionally, there is no excuse at all.

Unless you've logged some time with a few major corporations, you can't imagine how much time and money is wasted on unadulterated bullshit largely because C-level executives lose sight of the obvious.
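For readers who want the identity Kwak is invoking spelled out, here is a minimal sketch (the cash flows and discount rate are invented for illustration): splitting a stream of cash flows into two companies leaves the combined discounted value exactly where it was, unless the split changes the flows themselves or the discount rate.

```python
# Discounted present value of a stream of future cash flows.
# All numbers are made up for illustration.
def present_value(cash_flows, r):
    """Sum of cf_t / (1 + r)^t over future periods t = 1, 2, ..."""
    return sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows, start=1))

r = 0.08                                   # assumed discount rate
combined = [100, 110, 120, 130, 140]       # the conglomerate's cash flows
unit_a   = [60, 65, 70, 75, 80]            # the same flows, carved into
unit_b   = [40, 45, 50, 55, 60]            # two stand-alone pieces

print(present_value(combined, r))
print(present_value(unit_a, r) + present_value(unit_b, r))
# The two totals are identical. A breakup only creates value if it raises
# the cash flows or lowers the discount rate -- which is Kwak's point.
```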



Wednesday, November 12, 2014

Annals of heroic inference

This is Joseph.

Via Andrew Gelman, we get this gem:
One consequence of this is that the number of respondents who report that they are not citizens yet vote or are registered to vote is quite small in absolute terms: in 2010, for example, only 13 respondents — not 13 percent, but 13 out of 55,400 respondents — reported that they were not citizens, yet had voted. Given the ever-present possibility of respondent or coder error, it takes a bit of hubris to draw strong conclusions about the behavior of non-citizens from such small numbers.
Yes, it is very hard to determine the characteristics of very rare groups.  For one thing, it's unlikely that you know much about the underlying source population.  So I think the authors are right that it is going to be hard to say much about this group, given this instrument.
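To put a rough number on the hubris point (the turnout share and error rate below are hypothetical, not estimates from the survey itself): a tiny rate of respondents mis-answering the citizenship item is enough, on its own, to produce a count like the one reported.

```python
# Back-of-the-envelope: how easily measurement error can manufacture a
# "rare group." Both assumed rates are hypothetical.
n_respondents = 55_400
turnout = 0.50            # assumed share of respondents who voted
error_rate = 0.0005       # assume 0.05% of citizens mis-answer the citizenship item

false_noncitizens = n_respondents * error_rate         # about 28 people
false_noncitizen_voters = false_noncitizens * turnout  # about 14 people

print(round(false_noncitizen_voters))
# Even if every respondent were actually a citizen, a one-in-two-thousand
# response error would account for all 13 reported "non-citizen voters."
```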