Friday, November 28, 2014

A recent exchange I had with my iPhone

You know those ads where people have those amazing Turing-certified conversations with Siri?








Here is what one of my recent conversations with Siri sounded like.


Me: "Call _____ home."

Siri: "There's no home number for _____. Would you like me to use mobile instead?"

Me: "Call _____ home."

Siri: "Which phone number for _____?"

Me: "Call _____ home."

Siri: "Calling _____ home."



Thursday, November 27, 2014

"As God as my witness..." is my second favorite Thanksgiving episode line [Repost]



If you watch this and you could swear you remember Johnny and Mr. Carlson discussing Pink Floyd, you're not imagining things. Hulu uses the DVD edit, which cuts out almost all of the copyrighted music.

As for my favorite line, it comes from the Buffy episode "Pangs" and it requires a bit of a set up (which is a pain because it makes it next to impossible to work into a conversation).

Buffy's luckless friend Xander had accidentally violated a Native American graveyard and, in addition to freeing a vengeful spirit, had been cursed with all of the diseases Europeans brought to the Americas.

Spike: I just can't take all this mamby-pamby boo-hooing about the bloody Indians.
Willow: Uh, the preferred term is...
Spike: You won. All right? You came in and you killed them and you took their land. That's what conquering nations do. It's what Caesar did, and he's not goin' around saying, "I came, I conquered, I felt really bad about it." The history of the world is not people making friends. You had better weapons, and you massacred them. End of story.
Buffy: Well, I think the Spaniards actually did a lot of - Not that I don't like Spaniards.
Spike: Listen to you. How you gonna fight anyone with that attitude?
Willow: We don't wanna fight anyone.
Buffy: I just wanna have Thanksgiving.
Spike: Heh heh. Yeah... Good luck.
Willow: Well, if we could talk to him...
Spike: You exterminated his race. What could you possibly say that would make him feel better? It's kill or be killed here. Take your bloody pick.
Xander: Maybe it's the syphilis talking, but, some of that made sense.

Wednesday, November 26, 2014

It's not a question of being too positive or negative but of being wrong in both directions

Given some recent discussions, I perhaps ought to go back and clarify my position on Google's driverless cars. The Google part is important. Lots of companies, particularly big auto makers like GM and Nissan, are seriously pursuing this research. However, when you read a news account about autonomous vehicles, most of the time it's a story about Google which is troublesome for at least two reasons: first because there are some big concerns that are particularly applicable to Google's approach; and second because Google has a way of playing to the worst tendencies in tech reporters. The result is a standard narrative that manages to get both the pros and the cons wrong.

The official account goes something like this: from a technological standpoint, the Google driverless car is virtually good-to-go. There is every reason to expect you will be able to buy one in a couple of years; the only clouds on this horizon are concerns with safety and, more importantly, regulation. This version is extremely popular. It is regularly reported in the New York Times and the Wall Street Journal. It also gets both the pros and cons wrong.

Though it's too early to say for certain, safety appears to be the one non-issue for this technology. All transportation carries an element of risk, but based on a pretty good sample of road tests, that risk appears to be considerably smaller for Google's self-driving cars than it is for other ways of getting around.

Regulation is only a slightly greater concern. We have had well over a century to work out the problems associated with insuring and regulating a wide variety of high-speed and potentially very dangerous vehicles. The idea that a viable, highly anticipated, and, relatively speaking, very safe technology will be kept off the market due to legal issues not just in the United States but in Europe and Asia is simply not believable.

So, if the concerns are not safety and regulation, what are they?

As previously discussed many times, much of the more gee-whiz reporting has been based on the idea that autonomous cars will quickly reach 100% adoption. As unrealistic as that assumption is, it pales in comparison to another jump tech reporters seem to have made.

Based on this Slate article, it appears that Google's approach to fully autonomous cars requires a specialized and highly expensive data infrastructure, specifically a collection of incredibly detailed maps. In order to compile this level of data, dedicated vehicles with human drivers have to travel the roads in question multiple times. What's more, the process needs to be constantly repeated to keep the data up-to-date. Road construction, new houses, all sorts of things need to be taken into account by the system.

The primary advantage of automobiles over other forms of transportation is their flexibility. A car can go pretty much anywhere you want. You can even decide on a new destination while traveling. In order to be viable, new automotive technology needs to keep that flexibility. Apparently Google's current approach means it would take a prohibitive amount of time and money to map out more than a tiny fraction of the nation's roads. Put bluntly, if this is true, given these infrastructure costs and the wide array of transportation alternatives, Google's autonomous car will never be viable in its present form.
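A back-of-envelope sketch gives a sense of the scale. Every figure below is an assumption for illustration; neither Google nor the reporting gives actual numbers.

```python
# Rough scale of the mapping problem. All figures are assumptions for
# illustration, not actual numbers from Google or the Slate article.

TOTAL_ROAD_MILES = 4_000_000    # approximate U.S. public road mileage
PASSES_PER_MILE = 3             # "multiple passes" per the description
MILES_PER_VEHICLE_DAY = 200     # assumed daily coverage for a mapping car

vehicle_days = TOTAL_ROAD_MILES * PASSES_PER_MILE / MILES_PER_VEHICLE_DAY
print(f"{vehicle_days:,.0f} vehicle-days for one-time coverage")

# And that is before the human review of every square foot of scan data,
# or the constant re-mapping needed to keep up with road construction.
```

Even with generous assumptions, a single pass over the road network runs to tens of thousands of vehicle-days, and the one-time pass is the cheap part: the data then has to be kept current indefinitely.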

Caveats are important here. It is entirely possible that Google is on the verge of a breakthrough that will allow its cars to operate off of existing Google maps. That would come close to making the technology viable. For all I know, the company could be preparing a press release as I write this. If this is the case, I'll pen a sheepish retraction then call friends and family members who don't drive and share the good news.

For now, though, this appears to be a very difficult technical nut to crack and, rather than showing signs of progress, Google seems to be trying to divert attention from the problem. You'll notice that their latest highly touted 'advance' was actually a step down in this respect, going from actual road tests to far less demanding closed tracks.


Tuesday, November 25, 2014

New side project up at Amazon.

A few years ago, I developed this checkers variant based on non-transitive relationships to help my students get some experience working with the concept. I'll talk more about the game and its development later, but in the meantime, here's a link to the webstore at Amazon.




For another example of non-transitive play, check this out. If nothing else, it will give you something to talk about if you ever meet Warren Buffett.
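The classic illustration of non-transitive play is a set of non-transitive dice (the game Buffett is famously fond of). Here's a minimal sketch using Efron's dice, a standard textbook set (the face values are not taken from the linked post):

```python
from itertools import product
from fractions import Fraction

# Efron's non-transitive dice: each die beats the next one in the cycle
# A > B > C > D > A, each with probability 2/3.
DICE = {
    "A": [4, 4, 4, 4, 0, 0],
    "B": [3, 3, 3, 3, 3, 3],
    "C": [6, 6, 2, 2, 2, 2],
    "D": [5, 5, 5, 1, 1, 1],
}

def p_beats(x, y):
    """Exact probability that die x rolls higher than die y."""
    wins = sum(1 for a, b in product(x, y) if a > b)
    return Fraction(wins, len(x) * len(y))

for first, second in [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")]:
    p = p_beats(DICE[first], DICE[second])
    print(f"{first} beats {second} with probability {p}")  # 2/3 every time
```

Whichever die your opponent picks, you can pick the one that beats it two times out of three, which is exactly the kind of relationship the checkers variant is meant to teach.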

This isn't about Ferguson, but it may be the most relevant thing you'll read on the subject.

This is an excellent time to go back and reread "Against Law, For Order" by Mike Konczal. Since this essay appeared in April of 2012, we've seen the acquittal of George Zimmerman, the grand jury ruling on Darren Wilson and any number of additional incidents that support Konczal's troubling but convincing argument.

More bad behavior from your friendly neighborhood cable company

There's another fight going on between Viacom and a cable provider. Suddenlink, a major provider for much of the middle of the country, has recently dropped all of Viacom's basic cable channels. This includes big names such as MTV, Nickelodeon, TV Land, and Comedy Central. In their place Suddenlink has scheduled some decidedly second-tier alternatives. Fans of Jon Stewart now have to make do with Jon Lovitz.

The story hasn't gotten a lot of attention (as often happens when you're on Central Time), but it's worth digging into if you're trying to keep up with the media landscape. It also ties into our ongoing rabbit ears thread.

Having lost most of the cable channels he regularly watched, a friend of mine recently called up Suddenlink and tried to downgrade his service. If you've been following the news you probably know what's coming next. He was immediately referred to a "specialist" who spent the next half hour badgering him ("why don't you want to get the best deal?").

At one point my friend (who has suffered through many of my tirades on the subject) said he was thinking about going over to an antenna. That was largely a bluff -- between terrain and distance to a broadcast tower, he probably wouldn't get very good reception unless he put up quite an antenna -- but the response was interesting. The specialist told him that going to over-the-air television would mean giving up HD.

Like I said, my friend had heard more on this topic than a reasonable person would care to, so he knew this simply was not true -- not only can you get HD over the air; the quality is often better than what you get from cable -- so he challenged the company rep on this point and got him to back down to an "I'm pretty sure you can't get HD." My friend still didn't get them to accept the downgrade, but he did get a rate reduction, which counts as a victory when dealing with a cable company.

This ties into perhaps the most important point in these terrestrial TV stories. Competition is good, but it's not good enough by itself. When American television joined the rest of the world and went digital, the market should have become more competitive, but years after the conversion, cable and satellite companies are still able to act like near-monopolies, in large part because of asymmetry of information.

I've argued that digital over-the-air television is a great technology that more people ought to be using, but it may turn out to have the most impact on those who stick with cable. Dealing with Comcast et al. will be much easier when the companies start facing more market pressure.

Monday, November 24, 2014

Megan and Mark are in synch

This is Joseph.

As a follow-up to Mark's post, Megan McArdle has this great point:
If the left-wing MSM is indeed biased against you, then your strategy needs to take that into account. Do you have a plan for compelling the left-wing MSM to treat you fairly? If not, then you should not settle upon a course of action that would work, if only this fact were not true. You don't launch your cavalry regiment against a Panzer battalion on the grounds that you could beat the Germans if only they didn't have all those darned tanks.
In other words, at some point an optimal strategy involves accepting the world as it is and not complaining that it isn't the best of all possible worlds.   The ability to develop realistic strategies in the face of the "facts on the ground" is a key skill in many contexts: political, military, and even business. 

Friday, November 21, 2014

Thoughts on the coming storm

From a text exchange I had on election night
The press has gone from
"The Republicans are the responsible party"
To
"Both parties are irresponsible"
To
"The Republicans will start being responsible after they win"
To
Whatever they are going to say after the impeachment.
[voice recognition errors corrected.]

This must be an interesting time to be a political scientist or anyone studying the way institutions form, function and fail.

The Republican party seems locked into a course that defies conventional political explanation. I don't see any way that this fight is a winning move for the GOP. I am inclined to agree with Josh Marshall's analysis:
It all adds up to an intense and likely toxic campaign fracas in which a lot of people will have a unique and intense motivation to vote. That will apply to people on both sides of course. But the anti-immigration voters vote consistently almost every cycle. And as intense as your animus is toward undocumented immigrants, it's hard for it to compare to the motivation of voters who directly know someone who will be affected. And that latter group has far more 'drop-off' or occasional voters.

This isn't getting mentioned a lot right now. But behind the headlines I suspect it's one of the key reasons Republican elites are upset that this might happen: because it's an electoral grenade dropped right into the heart of the 2016 campaign.
Of course, the standard line at this point is to say something about the leaders of the party losing control of the base, but I don't buy that -- at least not in the way it is generally framed. For one thing, the underlying political philosophy of the base and the leaders doesn't seem that different, and where there are differences, they seem to mostly come from the base actually believing the message crafted by the party elites.

Keeping in mind that they decisively won the last election, the Republicans still have big problems with information and coordination. That makes it more difficult for the party to make decisive rational moves that promote its self-interest and instead leaves it inclined to seek catharsis. Shutdown and impeachment are about emotional release. The challenge for the party leadership is convincing their followers that there's something more important than that.

Thursday, November 20, 2014

Other than stem cells...

What are the most notable examples of regulation holding back new technology? There has been a lot of talk recently about encouraging innovation through deregulation zones. The idea is that, for example, having a city with no regulation on drones will spur a great deal of research into the technology. On one level, this does make a certain amount of sense. The easier it is to do research, the more research we expect to see.

That said, other than studies with human subjects (where the rules really can have a dampening effect), I can't think of an area where regulations are clearly having a big negative impact on research. When a technology is promising and well-funded (as with drones), companies don't seem to have that much trouble working with the rules.

I assume I'm missing some obvious example. Any ideas?

Wednesday, November 19, 2014

"Duct tape and string"

Or as we used to say back in the hills, spit and baling wire.

From James Kwak's recent piece on United Airlines:
There are two lessons to be drawn from these entirely unexceptional examples of air travel gone wrong. One is that United’s computer systems don’t work — for the same reasons that many large companies’ core systems don’t work. The overnight unbooking and rebooking was probably a computer error, and in any case United had no way of rolling back all the automated changes to its reservation system. The automated cancellation of my return flight was either an incompetent customer service representative who didn’t preserve my return reservation when I asked her to, or a computer system that didn’t give her any way of preventing the cancellation. I was downgraded from first class because some marketing genius at United decided to add a new upsell feature to the website — but no one bothered to extend the legacy system they use behind the scenes to capture the new data from the ticket sales process. (This is a common problem with enterprise software these days: companies build new features in their websites but can’t integrate those features properly with their core processing systems.) All of this just reinforces a point I’ve made several times before: the computer systems holding together the world’s largest companies are held together by duct tape and string.
I've got at least a couple of posts I'd like to write on how bad this side of the business often is. Having seen some of these systems up close, I'm surprised things don't crash and burn more often.

Tuesday, November 18, 2014

A subtle issue with standardized tests

This is Joseph.

Dean Dad has a nice piece on assessment.  A part of it that jumped out was:

Johnson’s argument is subtle enough that most commenters seemed to miss it.  In a nutshell, he argues that subjecting existing instruction to the assessment cycle will, by design, change the instruction itself.  Much of the faculty resistance to assessment comes from a sense of threatened autonomy.  Johnson addresses political science specifically, noting that it’s particularly difficult to come up with content-neutral measures in a field without much internal consensus, and with factions that barely speak to each other. 

He’s right, though it may be easier to grasp the point when applied to, say, history.  There’s no single “Intro to History” that most would agree on; each class is the history of something.  The ‘something’ could be a country, a region, a technology, an idea, an art form, or any number of other things, but it has to be something specific.  Judging a historian of China on her knowledge of colonial America would be easy enough, but wouldn’t tell you much of value.  If a history department finds itself judged on “scores” based on a test of the history of colonial America, then it can either resign itself to lousy scores or teach to the test.
This means that the design of standardized tests is crucially important if students and/or teachers are going to be evaluated on them.  For some subjects, e.g. basic math, this may be less controversial, but it still involves making choices about what the emphasis will be.  A perfect test is like a perfect teacher -- neither beast really exists in nature.

But this is critically important for high stakes tests, because what is taught cannot help but be influenced by the test.  If history questions on the high stakes tests are all focused on colonial America, guess what the history section of classes will look like.  In some sense that is okay, insofar as we have a broad consensus as to what should be taught.  But it does make the content of the tests a matter of public policy and concern as much as any other aspect of school instruction.

Monday, November 17, 2014

James Boyle's devastating take down of Robert Bork

What makes this piece so effective is Boyle's refusal to dismiss Bork as a crank or a charlatan. Boyle instead insists on treating Bork as an important figure in conservative thought. It would have been easy to lapse into mockery, but by starting from the explicit assumption that Bork's ideas are worth taking seriously, Boyle is left with an obligation to examine them in painful detail.

From A Process of Denial: Bork and Post-Modern Conservatism

by James Boyle

With this range of defects it is hardly surprising that Mr. Bork chose to shift his ground somewhat. In The Tempting of America he argues that the understanding of the public at the time the Constitution was ratified, rather than the intent of its original authors, should determine its meaning. There is obviously a price to pay for making this change. The best thing about the intent of the framers was that it appealed to the unreflective idea that a document must always mean exactly what its authors meant it to -- no more and no less. The practitioners of original intent can claim with superficial plausibility that their method is the one "natural" way to read the text. They can even claim that we often (though not always) read other legal documents this way -- trying to determine what Congress, or the judge, or the administrator meant by this word or that phrase. Original understanding has less unreflective appeal. Precisely because it is a more sophisticated notion of interpretation, it sacrifices the idea that this is the only credible way to read a text (what about what the words mean out of context, or what the author meant?) the appeal to everyday practice and perhaps even the claim that this is the way we read other legal documents.

This problem is a particularly acute one for Mr. Bork. Throughout The Tempting Of America he explicitly connects his struggles to those going on within other disciplines. As well he might. Most disciplines seem to have rejected the idea that the text can only be read to mean what the author intended. Literary critics and historians have added other methods of reading. How would the text have been understood by its audience at the moment that it was written? How would an audience today understand it? Can the text be illuminated by evidence of the author's subconscious desires or conflicts? How does the text read if we take it as an a-contextual attempt at philosophical argument?

These other methods are referred to collectively (and a little pretentiously) as "the reader's revolution against the author." They represent everything that Mr. Bork finds most reprehensible in today's scholarship. He quotes approvingly a letter from intellectual historian, Gertrude Himmelfarb attacking this impermissible openness to other methods of interpretation. "Any methodology becomes permissible (except of course, the traditional one), and any reading of the texts becomes legitimate (except, of course, that of the author)." (p. 137) If Mr. Bork was still claiming that constitution meant what its authors intended, this would be all well and good. But the trouble with Mr. Bork's revamped and sophisticated version of originalism is that it can no longer appeal to the romantic idea that the imperial will of the author must govern the text. "The search is not for a subjective intention." (p. 144) Instead, he has handed over interpretive competence to the historically located readers of the constitution. For reasons we can only speculate about, he has shifted ultimate interpretive authority from the Framers of the Constitution to the "public of that time." Mr. Bork has joined the reader's revolution.

As I pointed out before, this switch is a costly one for Mr. Bork. To the initial cost of having been seen to adopt the very same methodology so often criticised by conservatives in other academic disciplines, one also has to add the cost of having been seen to change from one dogmatically asserted position to another. Mr. Bork obviously feels this one particularly strongly because he denies having done it. Though he described himself during the hearings as "a judge with an original intent philosophy"(61) and argued in print that "original intent is the only legitimate basis for constitutional decision-making",(62) he says in The Tempting of America that "[n]o even moderately sophisticated originalist" believes the Constitution should be governed by "the subjective intent of the Framers." (p.218) He suggests that no-one could ever have held such a belief, because it would necessarily mean that the secretly held beliefs of the Framers could change the meaning of the document. Thus all (moderately sophisticated) originalists must have believed in original understanding all along. This seems like a red herring. There are many varieties of intentionalism and many varieties of "reader-controlled" interpretation. But allowing the intention of the author to control interpretation is fairly obviously not the same thing as allowing the understanding of the reader to control. Expanding the definition of intentionalism does not turn it into the philosophy of original understanding. The `intention of the Framers and ratifiers' is not the same as `the understanding of the American people at the time.' Mr. Bork seems to find it hard to admit the change.

The most interesting example of Mr. Bork's scholarly method is the point in The Tempting of America he takes sections from his 1986 article The Constitution, Original Intent, and Economic Rights(63) which, as one might suspect from the title, defends original intent, and uses those sections to defend original understanding. At first glance, it appears that he does this by finding the words "original intent" wherever they appear in the article, and simply replacing them by "original understanding." Chunks of text which had reproved Paul Brest with failing to understand that the original intent determines the meaning of the 14th Amendment, are edited, expanded upon, a new philosophy of interpretation inserted. With a quick change of key words they can become reproofs to Paul Brest for failing to understand that original understanding determines the meaning of the 14th Amendment.(64) Even the same counterarguments can be pressed into service. In 1986 for example, "[t]here is one objection to intentionalism that is particularly tiresome. Whenever I speak on the subject someone invariably asks: "But why should we be ruled by men long dead?"(65) In 1990, Mr. Bork finds that "[q]uite often, when I speak at a law school on the necessity of adhering to the original understanding, a student will ask, "But why should we be ruled by men who are long dead." (170) In the era of the word processor, this kind of "search and replace" jurisprudence has its attractions. Still, both the interpretive criteria and the identity of the `dead men' has changed, and Mr. Bork seems uneasy with that fact.(66)


Saturday, November 15, 2014

One of these days I'm going to do a post on genres as fitness landscapes

In the meantime, here's a completely unexpected but surprisingly effective reworking.





Friday, November 14, 2014

What do stock buybacks actually do?

Barry Ritholtz passes along an interesting thought from Aswath Damodaran, a professor at New York University.
Before a company calls for a stock buyback, it has risky assets (its operating business) and riskless assets (cash). After the buyback, the company has less of its riskless asset (cash) but also has fewer outstanding shares.

Hence, we end up with a somewhat riskier stock. Damodaran argues, rationally, that a buyback by an all-equity funded company should be a value-neutral transaction. In other cases, the shift should be reflected by assigning the company a somewhat lower price-earnings ratio.
I don't know enough to comment intelligently on this claim, but it does seem to indicate that, as with so many other stories, the impact of buybacks is considerably more complicated than the experts on CNBC would have you believe.
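A stylized example makes Damodaran's point concrete. All the numbers below are made up for illustration: an all-equity company buying back stock at fair value leaves per-share value unchanged, but the remaining shares are backed by a riskier mix of assets.

```python
# Hypothetical all-equity company (all figures invented for illustration).
operating_assets = 100.0   # risky asset: the operating business
cash = 20.0                # riskless asset
shares = 12.0

price = (operating_assets + cash) / shares           # 10.0 per share

# Spend all the cash repurchasing shares at the fair price:
shares_retired = cash / price                        # 2 shares retired
shares_after = shares - shares_retired               # 10 shares remain
price_after = operating_assets / shares_after        # still 10.0 per share

# Value-neutral -- but the asset mix behind each share is now all risky:
risky_fraction_before = operating_assets / (operating_assets + cash)
risky_fraction_after = 1.0
print(price, price_after)            # per-share value unchanged
print(risky_fraction_before, risky_fraction_after)
```

The per-share value is identical before and after, which is why the risk shift, not the buyback itself, is where any valuation effect (like a lower P/E) would have to come from.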



Thursday, November 13, 2014

Fixing Common Core (or at least a small part thereof) over at the teaching blog

With a nod to David Coleman, last week I did a post called "Deconstructing Common Core" focusing on the homework problems going out under the Common Core banner.


[The generally unproductive question of what is and isn't Common Core comes up frequently. Hopefully, having an actual copyright notice will keep us from wasting any more time on the subject.]

I've become increasingly concerned about the direction of mathematics education. Here's a big part of the reason:
I volunteer a couple of times a week to help a group that tutors kids from urban schools. My role is designated math guy. I go from table to table helping kids with the more challenging homework problems.

Recently, I have noticed a pattern in helping with Common Core problems. First I explained them to the students, then I explained them to the tutors.

That may be the most noticeable difference between the mathematics of Common Core and the new math of the 60s. In the summer of love, an advanced degree in mathematics or engineering was sufficient to understand an elementary school student's homework. These days, the tutors with math backgrounds often find themselves more confused than their less analytic counterparts since what they know about solving the problem seems to have nothing to do with what the assignment asks for.

To follow a Common Core worksheet, you really need to have a little knowledge of the underlying pedagogical theories. Unfortunately, if you have more than a little knowledge, you'll find these worksheets extraordinarily annoying because, to put it bluntly, much of what you see was produced by people who had a very weak grasp of the underlying concepts.
I thought it might be of interest to walk through the process of 'fixing' these problems, showing how, with a few changes, these confusing and ineffective problems could be greatly improved.

I used an example of a Common Core problem that went viral a while back.


Here is my proposed fix (which was anticipated by at least one of our regulars).

James Kwak does a valuable service...

...and states the obvious.
The value of a company is supposed to be the discounted present value of its expected future cash flows. Actually, the value of a company is the discounted present value of its expected future cash flows. So it follows that a breakup should only create value for shareholders if it increases future cash flows or lowers the discount rate. Most breakups don’t obviously do either.
This may seem to border on tautology -- "of course, that's the value of a company" -- but if you follow the business page regularly you'll routinely run into strategies and initiatives that make no sense given this definition. Sometimes these decisions are justified in terms of stock price. Other times, flavor-of-the-day notions like disruption are invoked. Occasionally, there is no excuse at all.
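The definition Kwak restates can be written down directly. A minimal sketch, with made-up cash flows and discount rate:

```python
# Discounted present value of expected future cash flows -- the definition
# Kwak restates. The cash flows and rate below are invented for illustration.

def present_value(cash_flows, discount_rate):
    """PV of a series of year-end cash flows at a constant discount rate."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

flows = [100, 110, 121]          # expected cash flows for years 1-3
value = present_value(flows, 0.10)
print(round(value, 2))           # each flow discounts to ~90.91
```

In these terms Kwak's point is just that a breakup changes neither input: unless it raises the expected cash flows or lowers the discount rate, the sum, and therefore the value, is unchanged.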

Unless you've logged some time with a few major corporations, you can't imagine how much time and money is wasted on unadulterated bullshit largely because C-level executives lose sight of the obvious.



Wednesday, November 12, 2014

Annals of heroic inference

This is Joseph.

Via Andrew Gelman, we get this gem:
One consequence of this is that the number of respondents who report that they are not citizens yet vote or are registered to vote is quite small in absolute terms: in 2010, for example, only 13 respondents — not 13 percent, but 13 out of 55,400 respondents — reported that they were not citizens, yet had voted. Given the ever-present possibility of respondent or coder error, it takes a bit of hubris to draw strong conclusions about the behavior of non-citizens from such small numbers.
Yes, it is very hard to determine characteristics of very rare groups.  For one thing, it's unlikely that you know much about the underlying source population.  So I think the authors are right that it is going to be hard to say much about this group, given this instrument.
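The arithmetic behind the hubris point is worth spelling out: with 55,400 respondents, even a tiny misclassification rate produces spurious counts as large as the observed one. The error rate below is hypothetical, chosen only to show the scale.

```python
# Why 13 out of 55,400 is fragile: if even a small fraction of responses
# are miscoded (respondent or coder error), the observed count can be
# explained entirely by error. The error rate here is hypothetical.

n_respondents = 55_400
observed = 13                   # reported "non-citizen voters" in 2010

error_rate = 0.001              # assume 0.1% of responses are miscoded
expected_false = n_respondents * error_rate
print(expected_false)           # several times the observed count
```

Under that (quite modest) assumed error rate, you would expect more bogus "non-citizen voter" records from coding error alone than were actually observed, which is exactly why strong conclusions from such small cells are unwarranted.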

Tuesday, November 11, 2014

"The unbookable lesson"

I've got a new post up at the teaching blog about the differences in live presentation and other educational media. Check it out if that sort of thing sounds interesting, but if you do, you might want to watch Flight of the Phoenix first.

Monday, November 10, 2014

"I'm sorry, the card says 'MOOPS'.”





Given that we are on the verge of deciding the fate of major policy initiatives based on typos, this seems sadly appropriate.

Another side to the driverless car discussion

For years now, there have been two basic narratives when it comes to autonomous cars. The first is what I've called the ddulite version: driverless cars are just around the corner and they are about to change our lives in strange and wonderful ways if we can just keep the regulators out of the way. The second version is more skeptical: while lower levels of autonomy are coming online every day, the truly driverless car still faces daunting technological challenges and, even if those are met, these cars may not have the often-promised impact. (You can probably guess which side I took.)

If you follow this story through the New York Times or the Economist, you are overwhelmingly likely to get the first version. You may not even know that this bright future is contested. If, on the other hand, you talk to the engineers in the field (and I've talked to or exchanged emails with quite a few recently), you are far more likely to get the second.

This recent Slate article by Lee Gomes is one of the very few to take the second approach.
For starters, the Google car was able to do so much more than its predecessors in large part because the company had the resources to do something no other robotic car research project ever could: develop an ingenious but extremely expensive mapping system. These maps contain the exact three-dimensional location of streetlights, stop signs, crosswalks, lane markings, and every other crucial aspect of a roadway.

That might not seem like such a tough job for the company that gave us Google Earth and Google Maps. But the maps necessary for the Google car are an order of magnitude more complicated. In fact, when I first wrote about the car for MIT Technology Review, Google admitted to me that the process it currently uses to make the maps is too inefficient to work in the country as a whole.

To create them, a dedicated vehicle outfitted with a bank of sensors first makes repeated passes scanning the roadway to be mapped. The data is then downloaded, with every square foot of the landscape pored over by both humans and computers to make sure that all-important real-world objects have been captured. This complete map gets loaded into the car's memory before a journey, and because it knows from the map about the location of many stationary objects, its computer—essentially a generic PC running Ubuntu Linux—can devote more of its energies to tracking moving objects, like other cars.

But the maps have problems, starting with the fact that the car can’t travel a single inch without one. Since maps are one of the engineering foundations of the Google car, before the company's vision for ubiquitous self-driving cars can be realized, all 4 million miles of U.S. public roads will need to be mapped, plus driveways, off-road trails, and everywhere else you'd ever want to take the car. So far, only a few thousand miles of road have gotten the treatment, most of them around the company's headquarters in Mountain View, California.  The company frequently says that its car has driven more than 700,000 miles safely, but those are the same few thousand mapped miles, driven over and over again.

...

Noting that the Google car might not be able to handle an unmapped traffic light might sound like a cynical game of "gotcha." But MIT roboticist John Leonard says it goes to the heart of why the Google car project is so daunting. "While the probability of a single driver encountering a newly installed traffic light is very low, the probability of at least one driver encountering one on a given day is very high," Leonard says. The list of these "rare" events is practically endless, said Leonard, who does not expect a full self-driving car in his lifetime (he’s 49).

The mapping system isn’t the only problem. The Google car doesn’t know much about parking: It can’t currently find a space in a supermarket lot or multilevel garage. It can't consistently handle coned-off road construction sites, and its video cameras can sometimes be blinded by the sun when trying to detect the color of a traffic signal. Because it can't tell the difference between a big rock and a crumbled-up piece of newspaper, it will try to drive around both if it encounters either sitting in the middle of the road. (Google specifically confirmed these present shortcomings to me for the MIT Technology Review article.) Can the car currently "see" another vehicle's turn signals or brake lights? Can it tell the difference between the flashing lights on top of a tow truck and those on top of an ambulance? If it's driving past a school playground, and a ball rolls out into the street, will it know to be on special alert? (Google declined to respond to these additional questions when I posed them.)
...

Computer scientists have various names for the ability to synthesize and respond to this barrage of unpredictable information: "generalized intelligence,” "situational awareness,” "everyday common sense." It's been the dream of artificial intelligence researchers since the advent of computers. And it remains just that. "None of this reasoning will be inside computers anytime soon," says Raj Rajkumar, director of autonomous driving research at Carnegie-Mellon University, former home of both the current and prior directors of Google's car project. Rajkumar adds that the Detroit carmakers with whom he collaborates on autonomous vehicles believe that the prospect of a fully self-driving car arriving anytime soon is "pure science fiction."




Saturday, November 8, 2014

The Good Wife makes some really interesting musical choices

... And, frankly, I just look for an excuse to link to any piece of music that gets stuck in my head.










Friday, November 7, 2014

Speed boating

Back when I was in banking, there was a term that got batted around quite a bit called speed-boating. The expression was derived from the way a fast-traveling boat can, for a while, outrun its own wake. As long as a certain speed is maintained, the boat will travel smoothly. However, if the boat suddenly slows down it can be swamped when its wake catches up with it.

Here's how the analogy worked in banking. When you are in the business of lending money, both regulators and investors like to keep track of how well you are doing at getting people to pay their loans back. To do this, they would look at the charge-off rate. At the risk of oversimplifying, this rate was basically the number of loans that went bad divided by the total number of accounts that were open during the period in question.

Obviously, if you booked an account and it went bad, this would add one to both the numerator and the denominator, which would push your rate closer to 100%. So you would think that it would always be in the banker's best interest to avoid loans that are going to go bad.

The flaw, or at least the loophole, in this assumption is the fact that the ones are not added at the same time. Even in the extreme cases where the customers never make a payment, the loan is not considered bad for a certain interval, generally 90 days or more. If the customers make a few small payments, this could stretch out for six months or a year.

Let's say I book an account that goes delinquent after one year. That was a bad deal for the bank -- it lost money due to that decision -- but for one year, having that account actually lowered the bank's charge-off rate. Eventually, of course, this will catch up with the bank, but the reckoning can be delayed if the bank continues to book these bad accounts at an increasing rate.
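The lag effect can be sketched with a toy simulation (every parameter below is invented for illustration): bad loans don't charge off until twelve months after booking, so as long as bookings keep growing, the reported rate stays artificially low, then jumps once growth stalls.

```python
# Toy "speed-boating" simulation. Assumptions (all invented):
# 10% of each month's new accounts eventually go bad, and a bad
# account isn't charged off until 12 months after booking.

LAG = 12          # months from booking to charge-off
BAD_RATE = 0.10   # share of each monthly cohort that goes bad

def chargeoff_rates(bookings):
    """Reported monthly rate: accounts charged off that month
    divided by accounts open that month."""
    open_accounts = 0.0
    rates = []
    for month, booked in enumerate(bookings):
        open_accounts += booked
        # this month's charge-offs come from the cohort booked LAG months ago
        charged = BAD_RATE * bookings[month - LAG] if month >= LAG else 0.0
        open_accounts -= charged
        rates.append(charged / open_accounts)
    return rates

# Bookings grow 20% a month for two years, then go flat for a year.
bookings = [1000 * 1.2**m for m in range(24)] + [1000 * 1.2**23] * 12
rates = chargeoff_rates(bookings)

print(f"last month of growth:  {rates[23]:.2%}")
print(f"a year after stalling: {rates[35]:.2%}")
```

The bank's underwriting never changes in this sketch, but the reported charge-off rate roughly triples once bookings stop growing -- the wake catching up with the boat.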

As with many of our posts, the moral of the story is that numbers don't always mean what you think they mean. You will often see someone pull out a statistic to settle an argument -- "How can you say the business model is unstable? See how low their charge-off rate is?" -- but without understanding the number and knowing its context, you can't really say anything meaningful with it.

"Deconstructing Common Core."

Starting a new thread on Common Core over at the teaching blog. I'll be cross-posting the highlights but if you want to follow the whole thing, click on the link above.




Thursday, November 6, 2014

This may have led to some bias in our sample


Adrian: Once again we've got our friend from military intelligence. Can you tell us what you've found out about the enemy since you've been here?
Adrian as Gomer: We found out that we can't find them. They're out there, and we're having a major difficulty in finding the enemy.
Adrian: Well, what do you use to look for them?
Adrian as Gomer: Well, we ask people, 'Are you the enemy?' And whoever says yes, we shoot them.

From Good Morning, Vietnam

Wednesday, November 5, 2014

Obviously the Onion is releasing news under different names


Because, otherwise, this makes no sense at all:
There is no profession anywhere in the country that has such astonishing rules. Good lord-- even if your manager at McDonalds decides you're not up to snuff, he doesn't blackball you from ever working in any fast food joint ever again! Yes, every profession has means of defrocking people who commit egregious and unpardonable offenses. But-- and I'm going to repeat this because I'm afraid your This Can't Be Real filter is keeping you from seeing the words that I'm typing-- Massachusetts proposes to take your license to teach away if you have a couple of low evaluations.
and

One version of the plan even allows for factoring in student evaluations of teachers; yes, teachers, your entire career can be hanging by a thread that dangles in front of an eight-year-old with scissors.
Wow.  Isn't it a good thing that we are sure that this power will not be abused?   I wish I could get Mark to give a perspective on this one, as I think it would have major implications. 

Obviously this sort of scheme would only be workable for major offenses or where the barriers to entry to the profession are exceptionally low.  Adam Smith pointed out that the less security that you give workers, the more expensive they get (for the same quality of worker).  Now removing teacher tenure might or might not be a huge blow depending on the employment regulations in a state.  There are professions (e.g. hedge fund trader) where people can move around a lot during a career.  Teaching isn't well set up for this, but one can at least imagine making it work.

But losing one's license for bad test scores is a massive penalty.  Heck, is it even the case that you'd lose a teaching license (as opposed to merely being fired) for tampering with the tests?  Think carefully because employees will be able to work this one out . . . 

H/T: Mike



Education reform and rent seeking -- textbook edition

A few months ago, Meredith Broussard ran a great exposé of the role of textbook companies in the education reform movement (it's about other things as well, but that will get us started). I plan on discussing it more at length but the queue is getting long and I'd like to get this into the conversation as soon as possible.
The end result is that Philadelphia’s numbers simply don’t add up. Consider the eighth grade at Tilden Middle School in Southwest Philadelphia. According to district records, Tilden uses a reading curriculum called Elements of Literature, published by Houghton Mifflin. In 2012–2013, Tilden had 117 students in its eighth grade, but it only had 42 of these eighth-grade reading textbooks, according to the (admittedly flawed) district inventory system. Tilden’s eighth grade students largely failed the state standardized test: Their average reading score was 29.4 percent, compared with 57.9 percent districtwide.

One problem is that no one is keeping track of what these students need and what they actually have. Another problem is that there’s simply too little money in the education budget. The Elements of Literature textbook costs $114.75. However, in 2012–2013, Tilden (like every other middle school in Philadelphia) was only allocated $30.30 per student to buy books—and that amount, which was barely a quarter the price of one textbook, was supposed to cover every subject, not just one. My own calculations show that the average Philadelphia school had only 27 percent of the books required to teach its curriculum in 2012-2013, and it would have cost $68 million to pay for all the books schools need. Because the school district doesn’t collect comprehensive data on its textbook use, this calculation could be an overestimate—but more likely, it’s a significant underestimate.
If you have a moment, check out the webpage for the book in question and see if you can figure out why it's worth $115 (perhaps the Ernest Lawrence Thayer estate has upped its rates).

Tuesday, November 4, 2014

Con(firmation) artists and consistency -- inevitable David Brooks edition

David Brooks is deeply concerned with our divided nation:
This mentality also ruins human interaction. There is a tremendous variety of human beings within each political party. To judge human beings on political labels is to deny and ignore what is most important about them. It is to profoundly devalue them. That is the core sin of prejudice, whether it is racism or partyism.
There are some important points (and some questionable ones) made in this column. I'm not sure the 'ism' adds much to the discussion, but it's a topic that deserves more attention (and more pieces like this 2012 episode of This American Life).  It's also a topic for another post.

For now let's stick with the con(firmation) artists thread. That's my somewhat cumbersome term for a school of journalists who get away with cliched, factually-challenged reporting because they appeal to the prejudices of their target audiences and, more importantly, of their peers and superiors. It is a group strongly associated with the New York Times and best exemplified by Brooks.

Much of Brooks' success comes from his ability to craft readable, scholarly sounding pieces that use statistics and anecdotes to reinforce class stereotypes, particularly those held by people in the top half toward people in the bottom half. Frequently, when he could not find suitable support for his arguments, Brooks has used facts that aren't actually true. This might simply be the product of sloppiness, but it should be noted that the sloppiness always seems to occur in one direction.

At the risk of oversharpening, if you take the paragraph above and substitute the concept of class in for party, every criticism Brooks makes applies to much if not most of his own work.  He has made a remarkably successful career out of understating the "tremendous variety of human beings" and judging them based on class and other crude demographic labels. What's worse, he continues to resort to statements that simply aren't true (like this claim about vaccines) in order to reinforce various stereotypes.

Does this behavior qualify as "the core sin of prejudice"? I'm not sure, but Brooks' partyism column certainly demonstrates a stunning lack of self-awareness.

Monday, November 3, 2014

A huge conflict of interest -- more from Meredith Broussard's Atlantic piece on textbook companies and standardized tests

[The scheduler sometimes does weird things. This was meant to be the second of two excerpts. Sorry about the confusion.]

You need to read this:

Put simply, any teacher who wants his or her students to pass the tests has to give out books from the Big Three publishers. If you look at a textbook from one of these companies and look at the standardized tests written by the same company, even a third grader can see that many of the questions on the test are similar to the questions in the book. In fact, Pearson came under fire last year for using a passage on a standardized test that was taken verbatim from a Pearson textbook.

The issue often has as much to do with wording as it does with facts or figures. Consider this question from the 2009 PSSA, which asked third-grade students to write down an even number with three digits and then explain how they arrived at their answers. Here’s an example of a correct answer, taken from a testing supplement put out by the Pennsylvania Department of Education:


Here’s an example of a partially correct answer that earned the student just one point instead of two:


This second answer is correct, but the third-grade student lacked the specific conceptual underpinnings to explain why it was correct. The Everyday Math curriculum happens to cover this rationale in detail, and the third-grade study guide instructs teachers to drill students on it: “What is one of the rules for odd and even factors and their products? How do you know that this rule is true?” A third-grader without a textbook can learn the difference between even and odd numbers, but she will find it hard to guess how the test-maker wants to see that difference explained.





Failures in targeted marketing

One of these days, I need to post a nice long thread on targeted marketing, benefits vs. limitations, promise vs. hype. In the meantime, I'll collect as many examples as I can of targeting done well, targeting done badly and, in this case, of targeting not done at all.

From a comment on Ken Levine's previously mentioned traffic report prank:
In Canada, Closed Captioning is sponsored and announced during the broadcast. Frequently, these sponsors are new album releases - Katy Perry, Rihanna, Coldplay.
Assuming that the language of the captioning is the same as the language of the broadcast and is not used for translation (not an absolute given, I suppose, in Canada), think about the target demographic of captioning.