Tuesday, February 15, 2011

"Human see; human do."

There was a fascinating interview on NPR's Fresh Air earlier today. I particularly enjoyed this section:
DAVIES: If you're just joining us, we're speaking with V.S. Ramachandran. He is a behavioral neurologist and author of the new book "The Tell-Tale Brain: A Neuroscientist's Quest for What Makes Us Human."

You write a lot about mirror neurons and the role that they played in our evolution. You want to just tell us a little bit about that?

Dr. RAMACHANDRAN: Well, mirror neurons were not discovered by us, obviously. They were discovered by Giacomo Rizzolatti in Parma, Italy, and his colleagues. And what they refer to is in the front of the brain, the motor and pre-motor cortex, there are neurons that issue commands to your hands and other parts of your body to perform specific actions, semi-skilled actions, skilled actions or even non-skilled actions. So these are motor-command neurons which orchestrate a specific sequence of muscle twitches for you to reach out and grab a peanut, for example, or put it in your mouth.

What Rizzolatti and his colleagues found was some of these neurons, as many as 20 percent or 30 percent, will fire not only when - let's say I'm measuring mirror neuron activity in your brain. So when you reach for a peanut, these neurons fire. But the astonishing thing is these neurons will also fire when you watch me reaching for a peanut so these are promptly dubbed mirror neurons for obvious reasons. So it's as though your brain is performing a virtual reality simulation of what's going on in my brain, saying, hey, the same neuron is firing now when he's doing that as would fire when I reach out and grab a peanut, therefore, that's what that guy's up to.

He's about to reach out and grab a peanut. So it's a mind-reading neuron. It's essential for you seeing other people as intentional beings who are about to perform certain specific intended actions.

DAVIES: And that might have helped us learn from one another and thereby advance culturally far beyond our...

Dr. RAMACHANDRAN: That's correct. That's the stuff - that's kind of obvious in hindsight, but that's the claim I made, oh, about 10 years ago on a website run by Brockman called "Edge." And what I pointed out was - and others have pointed this out, too, is that mirror neurons obviously are required for imitation and emulation. So if I want to do something complicated that you're doing and I want to imitate it, I have to put myself in your shoes and view the world from your standpoint. And this is extremely important.

It seems like something trivial, you know, mimicry, but it's not. It's extremely important because imitation is vital for certain types of learning, rudimentary types of learning. These days you learn from books and other things, but in the early, early days when hominids were evolving, we learned largely from imitation. And there's a tremendous acceleration of the evolutionary process. What I'm saying is maybe there are some outliers in the population who are especially smart simply because of genetic variation, who have stumbled, say, accidentally on an invention, like fire or skinning a bear.

Without the mirror neuron system being sophisticated, it would have died out, fizzled out immediately. But with a sophisticated mirror neuron system, your offsprings can learn that technique by imitation so it spreads like wildfire horizontally across a population and vertically across generations. And that's the dawn of what we call culture and therefore, of civilization.

Monday, February 14, 2011

Google can make you disappear

The SEOs may have it coming, but this is still creepy:

Interviewing a purveyor of black-hat services face-to-face was a considerable undertaking. They are a low-profile bunch. But a link-selling specialist named Mark Stevens — who says he had nothing to do with the Penney link effort — agreed to chat. He did so on the condition that his company not be named, a precaution he justified by recounting what happened when the company apparently angered Google a few months ago.

“It was my fault,” Mr. Stevens said. “I posted a job opening on a Stanford Engineering alumni mailing list, and mentioned the name of our company and a brief description of what we do. I think some Google employees saw it.”

In a matter of days, the company could not be found in a Google search.

“Literally, you typed the name of the company into the search box and we did not turn up. Anywhere. You’d find us if you knew our Web address. But in terms of search, we just disappeared.”

The company now operates under a new name and with a profile that is low even in the building where it claims to have an office. The landlord at the building, a gleaming, glassy midrise next to Route 101 in Redwood City, Calif., said she had never heard of the company.

USA Today has some bad graphs but at least it's not the New York Times

The following quote was included in one of Andrew Gelman's recent posts:
Is this the worst infographic ever to appear in NYT? USA Today is not something to aspire to.
This strikes me as deeply unfair to USA Today. That paper has certainly run its share of bad graphs, but these NYT graphics take things to a new level. It is as if the NYT used illustrations from "How to Lie with Statistics" as a starting point and then tried to top them.

Here's the "View of the U.S." where the lower the icon is, the higher its approval.



And here's the "U.S. Pakistan Policy" where the scrolls are arranged so you can't really compare their sizes (I initially thought they were going for some depth effect).

And here's the "Greatest Threat" which takes Huff's height/volume examples to the next level by using images of different shapes and densities.

Finally there's this amazing piece of work:

Just glancing at this you would probably conclude that the amount of blue in the circles corresponds to the percentage in agreement. For example, looking at the middle circle, you'd assume that almost all of those surveyed were in disagreement. You'd be wrong. More agreed than disagreed. (This was also noted by one of the commenters on Gelman's site.)

While they don't quite match this, these graphs may be the worst we've seen from a major paper in recent memory.




[adapted in part from a comment I left on Andrew Gelman's site]

Great moments in metawork

As a footnote to this post, I once spent an entire meeting (at a corporation that shall remain nameless) writing a team mission statement based on the intro to Star Trek. It consisted of lines like this:

"To seek out new data and new analytic techniques."

The attendees were all experienced modellers and data miners, some fairly high ranking with commensurate salaries. Everyone in that room had something else they needed to be doing and, except for the senior manager, I doubt that anyone present saw any real value in the exercise. Still, word had come down from the top that every distinct subgroup in the company needed its own mission statement, so there we were, boldly splitting that famous infinitive one more time.

On the bright side, at least this was one time we didn't have to have a pre-meeting.

"The Economics of Blogging and The Huffington Post"

After the election season, my regular visits to FiveThirtyEight tapered off, then simply came to a stop.

That might have been a mistake on my part.

(thanks again to Felix Salmon)

Concerns with data-driven reform

Dean Dad has a post on Achieving the Dream, a program intended to improve outcomes at community colleges. Two of his commenters had really interesting insights. Consider mathguy:

Consider the effect of No Child Left Behind. I've seen a noticeable decline in basic math skills of students of all levels in the last 5 years. Every year, I discover a new deficiency that was not seen in previous years (we are talking about Calculus students not able to add fractions). Yet NCLB was assumed to be "working" since the scores were going up. It seems that K-12 was devoting too much time preparing the students for tests, at the cost of killing students' interest in math, trading quality instruction for test-taking skills. Is NCLB a factor in the study? Are socio-economic factors examined in the study?


or CC Physicist, who stated:

I look at what Asst Prof wrote as an indication that a Dean, chair, and mentor didn't do a good job of getting across the history of assessment. Do you know what "Quality Improvement" program was developed a decade earlier, and what the results were of the outcomes assessment required from that round of reaffirmation of accreditation? Probably not, since we have pretty good communication at our CC but all the negative results from our plan were swept under the rug. The only indication we had that they weren't working was the silent phase-out of parts of that plan. Similarly, data that drove what we did a decade ago were not updated to see what has changed.


I think these two statements capture, very nicely, the main issue I have with the current round of educational reform. One, if you make meeting a specific metric (as a measure of an underlying goal) a high enough priority, then people will focus on the metric and not the actual goal. After all, if you don’t, your name could be posted in the LA Times along with your underperformance on the stated metric. So we had better be sure that the metric we are using is very robust in its relation to the underlying goal. In other words, it should be a very good representation of the curriculum we want to see taught and should measure the skills we want students to acquire.

Two, trust in evidence-based reform requires people to be able to believe the data. This is one area where medical research is leaps and bounds ahead of educational research. In medicine, a series of small experiments (often randomized controlled trials) is run while the standard of care continues to be used in routine patient care. Only when an intervention shows evidence of effectiveness in the trial environment is it translated into routine care.

In education, such trials are rare indeed. Let us exclude natural experiments for the moment; if we care enough to change the education policy of a country and to violate employment contracts, then it’s fair to hold ourselves to a high standard of evidence. After all, school admission lotteries (for example) are not a true experiment, and it’s hard to be sure that the lottery itself is completely randomized.

The problem is that educational reforms look like “doing something.” But what happens if the reforms are counterproductive or merely ineffective? (Even implementing an expensive reform that does nothing carries a high opportunity cost.) The people implementing the reforms are often gone in five to ten years, but the teachers (at least now, while they have job security) remain to clean up the wreckage afterwards.

I think that this links well to Mark's point about meta-work: it's hard to evaluate the contributions of meta-work so it may look like an administrator is doing a lot when actually they are just draining resources away from the core functions of teaching.

So when Dean Dad notes, “Apparently, a national study has found that colleges that have signed on to ATD have not seen statistically significant gains in any of the measures used to gauge success,” why can’t we use this evidence to decide that the current set of educational reform ideas isn’t working especially well? Why do we take weak evidence of the decline of American education at face value and ignore strong evidence of repeated failure in the current reform fads?

Or is evidence only useful when it confirms our preconceptions?

Metawork

A business analyst I used to work with had a theory about metawork. His definition of the term was work about work. He cited HR departments as the classic example.

As he liked to explain it, metawork is not, in and of itself, a bad thing. A certain amount is necessary for a well-functioning organization. It's not unusual for new companies to fail because of an overly rich work-to-metawork mixture.

But, my friend went on, metawork is like a gas -- it expands to fill all available space, both because it's easy to create metawork projects and because those projects can often be stretched to whatever time is available to them (you can always schedule an extra meeting). Furthermore, once it has established a foothold, it has a way of becoming part of the corporate culture.

There are also other reasons why companies tend to grow more metawork heavy as they mature and expand:

Major metawork initiatives tend to be top-down (no customer ever said, "I like this company's products but I have a feeling they aren't having enough team-building seminars"). From a career standpoint, it is always a good idea to give a high priority to projects that people above you consider important;

Metawork projects almost always sound good. They have impressive-sounding goals like improving efficiency, raising morale, making the company more nimble and responsive, or moving to data-driven strategies (more on that one in future posts). They suggest big-picture, forward-thinking approaches that make fixing problems like billing glitches seem prosaic, perhaps even trivial;

Metawork tends to be safer than the other kind. Let's say a company launches two big and badly conceived initiatives, a new product launch and a 'data-driven' reworking of the project management process. The product sells badly and the new process eats up man-hours without making projects run faster or smoother. Both end up costing the company about the same amount of money, but the product's failure is public and difficult to ignore, while the process's failure is internal and can be denied with some goalpost moving and willing suspension of disbelief (something that's easy to generate for a VP's pet project);

As mentioned before, metawork isn't all pre-meetings and mission statements. Some kinds of metawork are essential (payroll comes to mind). Other kinds can help a company improve its profitability and stability (like employee morale studies in a labor-intensive industry with high turnover). Employees can be resistant to some of these good, important initiatives, but it's worth keeping a couple of facts in mind:

There is a lot of bad metawork out there;

The employees who most resent doing metawork are often the employees who are doing the most of the other kind of work.

Sunday, February 13, 2011

Tiger moms are nothing...



Click for the full strip.

One more simple game for your weekend

You have been invited to play a dice game with Pierre and Blaise. The game is played with three dice marked as follows:

Die A: {2, 2, 4, 4, 9, 9}
Die B: {1, 1, 6, 6, 8, 8}
Die C: {3, 3, 5, 5, 7, 7}

The game has three rounds. First you roll against Pierre, then you roll against Blaise, then Pierre and Blaise roll against each other. The winner of each round is the one who rolls the higher number. The overall winner is the player who wins the most rounds.

Which die should you choose?

[Here's the relevant link (try not to look at the status bar -- the address gives away too much). It's a fun, trivial oddity but it raises some interesting questions about how numbers we trust can do unexpected things. More on that later.]
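If you want to check your pick afterward, here is a short enumeration sketch. It uses nothing beyond the die faces listed above, but fair warning: running it gives the game away.

```python
# Enumerate all 36 roll pairs for each matchup and count wins.
from fractions import Fraction
from itertools import product

dice = {
    "A": [2, 2, 4, 4, 9, 9],
    "B": [1, 1, 6, 6, 8, 8],
    "C": [3, 3, 5, 5, 7, 7],
}

def p_beats(x, y):
    """Probability that a roll of die x beats a roll of die y (no ties are possible here)."""
    wins = sum(a > b for a, b in product(dice[x], dice[y]))
    return Fraction(wins, 36)

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"P({x} beats {y}) = {p_beats(x, y)}")
```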

Saturday, February 12, 2011

Weekend Gaming -- Hexagonal Chess

I don't have any data to back this up, but I've always thought that the bulk of the benefits from learning a game -- improved problem solving, pattern recognition, strategic thinking -- come near the end of the beginner stage, just after the rules are internalized. If that's true (and maybe even if it isn't), you might be able to extend that period of intense learning by modifying a game so that old rules are seen in new ways.

Case in point, Gliński's hexagonal chess.

Gliński's chess variant is hugely popular in Europe (more than 100,000 sets have been sold). You can get the rules at my Kruzno site, but you can probably figure most of them out for yourself. The only pieces that might give you trouble are the bishops and, to a lesser extent, the knights.


Bishops come in three colors, which points to an interesting topological feature of hexagonal grids that I'm betting you can spot for yourself.
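For anyone who wants to verify the coloring once they've spotted it, here is a tiny sketch in axial hex coordinates (my own labeling for illustration, not the board's official notation). It checks that a bishop's diagonal step never changes a cell's color.

```python
# Axial hex coordinates (q, r). A bishop's "diagonal" step moves two cells over,
# passing between a pair of edge-adjacent neighbors; these are the six offsets.
DIAGONALS = [(2, -1), (1, -2), (-1, -1), (-2, 1), (-1, 2), (1, 1)]

def color(q, r):
    """Three-color the grid; cells that share an edge always get different colors."""
    return (q - r) % 3

cells = [(q, r) for q in range(-4, 5) for r in range(-4, 5)]
assert all(color(q, r) == color(q + dq, r + dr)
           for q, r in cells
           for dq, dr in DIAGONALS)
print("A diagonal step never changes a cell's color: three bishops, each stuck on its own color.")
```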

It's a strange and intriguing game and yet another reason why every house should have a hexboard.

Friday, February 11, 2011

"Why the Efficient Market Hypothesis (Weak Version) Says Nothing about the Ability to Identify Bubbles"

I found this post by Peter Dorman interesting for a couple of reasons. First because it was, well, interesting -- it had something insightful to say about an important subject -- and second because it took a question that is normally framed in terms of arguing over assumptions (are markets efficient?) and showed that the answer doesn't actually depend on those assumptions.
Let’s put aside the possibility that even the weak EMH can be wrong from time to time. We don’t need to go there; the error is more basic than this.

Let’s put ourselves back in 2005. It is two years before the unraveling of the financial markets, but I don’t know this; all I know is what I can see in front of me, publicly available 2005 data. I can look at this and see that there is a housing bubble, that prices are rising far beyond historical experience or relative to rents. The “soft” warning signs are all around me, like the explosion of cheap credit, the popularity of credit terms predicated on ever-rising prices, and the talk of a new era in real estate. Based on my perceptions, I anticipate a collapse in this market. What can I do?

If I am an investor, I can short housing in some fashion. My problem is that I have no idea how long the bubble will go on, and if I take this position too soon I could lose a bundle. In fact, anyone who went short in 2005 and passed on the following two years of price frothery grossly underperformed relative to the market as a whole. Indeed, you might not have the liquidity to hold your position for two long years and could end up losing everything. Of course, it is also possible that the bubble could have burst a year or two early and your bets could have paid off. What the EMH tells us is that, as an investor, not even your prescient analysis of the fundamentals of the housing market would enable you to outperform more myopic investors or even a trading algorithm based on a random number generator.

The logical error lies in confusing the purposes of an investor with those of a policy analyst. Suppose I work for the Fed, and my goal is not to amass a personal stash but to formulate economic policies that will promote prosperity for the country as a whole. In that case, it doesn’t much matter whether the bubble bursts in 2006, 2007 or 2010. In fact, the longer the bubble goes on, the more damage will result from its deflation. At the policy level, the relevant question is whether trained analysts, assembling data and drawing on centuries of experience in financial manias, can outperform, say, tarot cards in identifying bubbles. The EMH does not defend tarot.

To profit from one’s knowledge of a market condition one needs to be able to outperform the mass of investors in predicting market turns, which the EMH says you can’t do. Good policy may have almost nothing to do with the timing of market turns, however.
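A toy illustration of the liquidity point in the quoted passage, with entirely made-up numbers: a short seller who is right about the bubble but two years early can be forced out by margin calls long before the crash arrives, while the same trade placed at the (unknowable) peak pays off.

```python
def short_outcome(prices, entry_index, capital):
    """Short one unit at prices[entry_index] with `capital` posted as margin.

    Returns (profit_or_loss, month_forced_out_or_None)."""
    entry = prices[entry_index]
    for month, p in enumerate(prices[entry_index + 1:], start=entry_index + 1):
        equity = capital - (p - entry)        # mark-to-market equity on the short
        if equity <= 0:                       # margin exhausted: forced to cover
            return -capital, month
    return entry - prices[-1], None           # held all the way to the end

# Hypothetical housing index: roughly 27% of additional froth over two years,
# followed by a 30% collapse over the next year.
prices = [100 * 1.01 ** m for m in range(25)]
peak = prices[-1]
prices += [peak * (1 - 0.30 * k / 12) for k in range(1, 13)]

print(short_outcome(prices, entry_index=0, capital=15))    # short in "2005" with limited margin
print(short_outcome(prices, entry_index=24, capital=15))   # same margin, shorted at the peak
```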

The Good Principal Principle

This looks promising. While it is difficult to build a model for good management, it still strikes me as a simpler problem than modelling effective teaching. For one thing, K-through-12 teaching includes a large management component. More importantly, the data on teachers runs into some nasty nesting issues (students sit inside classrooms, and classrooms inside schools). Things are much cleaner when you go up a level and treat the school itself as the unit of analysis. (A small sketch of the nesting point appears at the end of this post.)

Yet each school year thousands of principals beat the odds and do excel, women and men who love their leadership positions, relish the challenges and take pride in running schools that perform well year after year. Who are these people? And what are they doing that so many others aren’t?

“We know that principals matter for a school’s success, but we don’t know much about why and how they matter,” says Jason Grissom, an assistant professor of public affairs in MU’s Truman School of Public Affairs. Grissom and Susanna Loeb, a professor of education at Stanford University, are working to provide answers, thanks in part to a $1 million grant from the Institute of Education Sciences, the self-described “research arm” of the U.S. Department of Education.

“Our goal at the end of this study is to be able to offer some tangible recommendations for making principals more effective in terms of improving student outcomes,” Grissom says. “We’re excited about the proposal because it’s pretty ambitious. The kind of study we’ve proposed is essentially the first of its kind.”
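To make the nesting comment above concrete, here is a minimal sketch with entirely simulated data; the group sizes, effect scales, and scores are all made up for illustration. Estimating a teacher's contribution means separating it from school-level variation (and from classroom composition), which is where multilevel models come in; an analysis of principals starts one level up, with the school as the unit.

```python
# Entirely hypothetical data: students nested in teachers, teachers in schools.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_schools, teachers_per_school, students_per_teacher = 20, 5, 25

rows = []
for s in range(n_schools):
    school_effect = rng.normal(0, 5)        # unobserved school-level variation
    for t in range(teachers_per_school):
        teacher_effect = rng.normal(0, 3)   # the quantity a teacher model must isolate
        for _ in range(students_per_teacher):
            rows.append({
                "school": s,
                "teacher": f"{s}-{t}",
                "score": 70 + school_effect + teacher_effect + rng.normal(0, 10),
            })
df = pd.DataFrame(rows)

# Random intercept for school: disentangling teacher effects from this
# school-level variance is exactly the nesting headache described above.
school_model = smf.mixedlm("score ~ 1", df, groups=df["school"]).fit()
print(school_model.summary())
```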

At least we've gotten more efficient at some things

From Wikipedia:

I Am Number Four is a young adult science fiction novel by Pittacus Lore, the pen name of authors James Frey and Jobie Hughes. The book was published by HarperCollins on August 3, 2010,[1] and has currently spent 6 weeks on the children's chapter books section of The New York Times Best Seller list.[2]

DreamWorks Pictures bought the rights to the film in June 2009; it will be released on February 18, 2011. The novel is the first of a proposed six-book series.[3]

Back in the old days, you used to have to publish at least one before they started the film series. By way of comparison, it took James Bond nine years, ten best sellers, and the implied endorsement of JFK before he made it to the screen.

E-verify

This article linked by Thoreau is pretty frightening. The main issue seems to be the combination of a high error rate and a lack of transparency. Two points of great interest:

And the results have been devastating. U.S. workers excited to start a new job are instead thrust into bureaucratic limbo as they try to sort out government mistakes. Their new employers hire, then fire them and never tell them why; or worse, they might never be hired in the first place and not know why . . . According to government reports, the program (even after years of work) has a stubbornly high error rate and well-documented problems in attempts to resolve those errors. According to the most conservative numbers, at least 80,000 American workers lost a new job last year because of a mistake in the system. If E-Verify were mandatory, that number would rise to 770,000


and

Ultimately the most brutal irony is that E-Verify doesn’t work. According to government-required audits, 54 percent of those not allowed to work in the U.S. were actually approved by the system.


Even worse:

The government mechanism to fix errors is a Kafkaesque tragedy. There is currently no court remedy to force Immigration and Customs Enforcement to fix an error. Many times those errors are as simple as an incorrect data entry or a name change, but in order to uncover the error, workers have to file letters with different parts of the agency seeking copies of their records


The most painful part of this process is that being denied employment is a real cost (especially in the current employment environment). What happens if we incorrectly refuse a person employment and their unemployment insurance is about to expire? Worse, if they do not know why they were flagged, it might be years before they find out what went wrong.

This doesn't seem like a good idea at all.

Administration

I was reading Mark's post today on principals and I thought it was interesting to see that Dean Dad has a post from the other side of the fence today as well. Some of the points in the Dean Dad post are extremely insightful. Consider the following question:

I’ll answer the question with another question. Good, strong, solid, peer-reviewed scientific data has made it abundantly clear that poor eating habits lead to obesity and all manner of negative health outcomes. There’s no serious dispute that obesity is a major public health issue in the US. And yet people still overeat. Despite reams of publicity and even Presidential support for good eating and exercise habits, obesity continues to increase. Why?


In other words, reform is hard to do even when you know where you want to go. In cases where the evidence is weak or where budgets are falling, the problem gets even worse. And, of course, right now we are experiencing a fall in most education budgets in the United States.

However, it was interesting how Dean Dad was unable to resist worrying about tenure as a barrier to reform:

There’s also a fundamental issue of control. Faculties as a group are intensely protective of their absolute control of the classroom. Many hold on to the premodern notion of teaching as a craft, to be practiced and judged solely by members of the guild. As with the sabermetric revolution in baseball, old habits die hard, even when the evidence against them is clear and compelling. There’s a real fear among many faculty that moving from “because I say so” to “what the numbers say” will reduce their authority, and in a certain sense, that’s true. In my estimation, this is at the root of much of the resentment against outcomes assessment.

Even where there’s a will, sometimes there just isn’t the time. It’s one thing to reinvent your teaching when you have one class or even two; it’s quite another with five. And when so many of your professors divide their time among different employers, even getting folks into the same room for workshops is a logistical challenge.

Of course, accountability matters. Longtime readers know my position on the tenure system, so I won’t beat that horse again, but it’s an uphill battle to sell disruptive change when people have the option of saying ‘no’ without consequence. The enemy isn’t really direct opposition; it’s foot-dragging.


I think that this line of thinking may also be part of why there is such a huge push for reform of teacher job security. Administrators are under enormous pressure to reform the education system and teachers may be very resistant.

Of course, one element that may be left out is that the teachers may be resistant to change for good reasons. When you have been in an organization long enough, you realize that a lot of reform amounts to trying "old ideas all over again." These reforms can be both time-consuming and ineffective. They may even lower outcomes due to the friction of implementation.

Let us consider a business analogy. One way that corporations try to handle bad outcomes is with a series of "re-orgs." These changes in structure have two appealing properties. One, the people in charge seem to be doing something to address issues by making changes. Two, a series of re-organizations can make it very hard to detect a long-term pattern of bad management, as units break apart too often for performance to be easily tracked.

The ability of tenured people to resist cosmetic reforms is, obviously, very frustrating to administrators who have little ability to influence the organization but seemingly unlimited accountability. However, endless re-organizations did not, in the end, help corporations like General Motors. Instead, they may well have accelerated the decline by focusing attention on changes that were more cosmetic than effective. So do we really need to import the worst practices of modern corporations into the educational system?

[This post is also relevant -- Mark]