Thursday, June 4, 2015

Yes, I am taking the pro side here on autonomous vehicles

In a comment on my recent relatively positive post on autonomous vehicles, Joseph points us to a Megan McArdle article that takes a different view.

While it is good to see that the conventional wisdom is starting to acknowledge some of the limitations of driverless cars, I still have quite a few problems with the piece. McArdle makes some good points about the labor implications, but she does not seem to have a strong grasp of the technological or the implementation issues involved with using autonomous vehicles for long-haul trucking. We can get back to implementation later; for now let's talk about tech.

Here's McArdle:
You hear a lot about how Google cars have driven an amazing number of miles without accidents. You hear less, however, about how they have achieved this feat: by 3-D mapping every inch of those roads so that the car has a database of every stationary object, from traffic lights to guardrails. That allows the car to devote its processing power to analyzing the movement of objects that aren't in its database.

Such mapping is incredibly labor intensive, which is why, according to Lee Gomes, those amazing mile counts that Google's driverless cars are racking up "are the same few thousand mapped miles, driven over and over again." Most of them are near Google's headquarters in Mountain View, a place that gets only 15 inches of rain a year and never has snow or ice -- three common weather hazards that long-haul truckers must frequently contend with.

Just getting Google's technology to a point where we could have self-driving trucks would require mapping every inch of the nation's more than 164,000 miles worth of highways. But then what do you do with the truck? You're probably going to have to map some of the roads that connect to those highways too. And then constantly remap them, because things change all the time. You'll also have to teach the computer system what to do in a blinding snowstorm on Wolf Creek Pass. As we wrote in January, "The technology giant doesn’t intend to offer a self-driving car to areas where it snows in the near term."
McArdle makes a couple of common mistakes: assuming that, because Google dominates the coverage of driverless cars, it also dominates the research (which we'll get to later); and assuming that what is difficult for humans is difficult for robots and vice versa.

Rain and snow are problematic for us humans both because they can limit visibility and because they tend to create very complex physics problems that have to be solved in a fraction of a second. Bad weather visibility is much less of an issue for autonomous vehicles* than it is for human drivers, while classical physics problems are the sort of thing that roboticists are very good at.

Along similar lines, McArdle observes [emphasis added]: "But it seems like getting from there to fully automated trucks--necessarily huge, heavy, and capable of horrific damage, with handling capabilities that change depending on the load, and a stopping distance almost twice that of a car at high speeds, will probably take a while." Yes, this will take a while, but not for the reasons McArdle imagines. Load effects and long stopping distances do make truck driving much more difficult for humans, but for computers they simply represent another set of parameters. Furthermore, the biggest factor in real-life stopping distance is often reaction time, an area where computers have a distinct advantage.
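To put rough numbers on the reaction-time point, here is a minimal sketch in Python. The speed, deceleration, and reaction times are illustrative assumptions chosen for the example, not measured values.

```python
# Toy stopping-distance model: distance covered during the reaction
# delay plus the braking distance at constant deceleration.
def stopping_distance(speed, reaction_time, decel):
    """speed in m/s, reaction_time in s, decel in m/s^2."""
    return speed * reaction_time + speed ** 2 / (2 * decel)

speed = 30.0  # ~65 mph
decel = 4.0   # assumed deceleration for a heavily loaded truck

human = stopping_distance(speed, reaction_time=1.5, decel=decel)  # ~1.5 s is a common ballpark for drivers
robot = stopping_distance(speed, reaction_time=0.2, decel=decel)  # assumed sensing-plus-computation latency

print(human, robot)  # 157.5 m vs. 118.5 m: the braking term (112.5 m) is identical; only the delay differs
```

Cutting the reaction delay from 1.5 s to 0.2 s removes almost 40 meters of stopping distance at highway speed; the truck's mass and load change the braking term for human and computer alike, but only the computer gets the delay term nearly to zero.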

Nor does the fair-weather testing complaint hold up. It is true that Google has largely limited its testing to clement conditions, but you certainly can't say the same for the upcoming Volvo test in, you know, Sweden.


Google's PR department has done a masterful job identifying the company with autonomous vehicles. This is not simply a matter of corporate ego. As I said earlier:
Google has a lot of reasons to want to be seen as a diversified, broadly innovative technology company, rather than as a very good one-trick pony cashing in on a monopoly (possibly two monopolies depending on how you want to count YouTube). A shiny reputation helps to keep stock prices high and regulators at bay.
It is enormously telling that McArdle cites Google ten times in her article while she doesn't mention Daimler by name and she never refers to Volvo at all.

* As far as I can tell, Daimler's prototype is doing its mapping independently in real time. While impressive, I'm sure the production models will share data and will also rely on existing maps.

Wednesday, June 3, 2015

CLEPs and MOOCs

Dean Dad makes a tremendously important point about MOOCs:

As a commenter correctly noted, there’s nothing stopping someone now from taking a MOOC in a “gen ed” area and then taking a CLEP exam to get credit.  CLEP fees are often lower than even community college tuition.  The ASU model is a more expensive and clunkier version of CLEP.  The MOOC-to-CLEP option has existed for a couple of years now, but students haven’t taken advantage in significant numbers.  
In case you're not familiar:
The College Level Examination Program (CLEP) is a group of standardized tests created and administered by College Board. These tests assess college-level knowledge in thirty-six subject areas and provide a mechanism for earning college credits without taking college courses. They are administered at more than 1,700 sites (colleges, universities, and military installations) across the United States. There are about 2,900 colleges which grant CLEP credit. Each institution awards credit to students who meet the college's minimum qualifying score for that exam, which is typically 50 to 60 out of a possible 80, but varies by site and exam. These tests are useful for individuals who have obtained knowledge outside the classroom, such as through independent study, homeschooling, job experience, or cultural interaction; and for students schooled outside the United States. They provide an opportunity to demonstrate proficiency in specific subject areas and bypass undergraduate coursework. Many take CLEP exams because of their convenience and low cost (typically $15) compared to a semester of coursework for comparable credit.

I plan to spend a lot of time this summer writing about better using CLEPs and improving MOOCs. For now, though, I want to make a couple of big points.

The Internet has a way of producing deceptively large numbers. This has certainly been true with MOOCs. Articles have breathlessly reported huge enrollments despite the fact that, for online classes, enrollment is an almost meaningless statistic. When we have tried to apply meaningful metrics to online classes, they have tended to do very poorly. CLEP usage would appear to be another example.

CLEP exams are a well-established, easy, and cheap way for students to get college credit for taking online courses, but very few seem to be taking advantage of them. That's a bad sign, but it does suggest a way forward.

Now, just to be clear, I am not saying that CLEP tests are a perfect solution for this problem (I am certain we could come up with a better system, particularly once we have some experience to build on), but for the time being these exams are probably our best option, and the fact that we're not seriously exploring them indicates a deeper lack of seriousness about MOOCs.

Tired blood and Nazi superman vitamin pills

One of the reasons I enjoy going through old pop culture ephemera is the perspective it can give on the way popular thinking has evolved or, in some cases, stayed the same. Check out the topics a popular comic book superhero chose when making conversation in 1945.



Having spent a lot of time recently going through food-related ads and instructional films, I've noticed a strong mid-Twentieth Century fascination with the scientific basis of nutrition. Much of this expressed itself as pseudo-science, but it was driven by a string of real breakthroughs. Remember, the very concept of a vitamin was largely a Twentieth Century discovery.

From Wikipedia:

The discovery dates of the vitamins and their sources:

Year | Vitamin                       | Food source
1910 | Vitamin B1 (Thiamine)         | Rice bran
1913 | Vitamin A (Retinol)           | Cod liver oil
1920 | Vitamin C (Ascorbic acid)     | Citrus, most fresh foods
1920 | Vitamin D (Calciferol)        | Cod liver oil
1920 | Vitamin B2 (Riboflavin)       | Meat, dairy products, eggs
1922 | Vitamin E (Tocopherol)        | Wheat germ oil, unrefined vegetable oils
1926 | Vitamin B12 (Cobalamins)      | Liver, eggs, animal products
1929 | Vitamin K1 (Phylloquinone)    | Leaf vegetables
1931 | Vitamin B5 (Pantothenic acid) | Meat, whole grains, in many foods
1931 | Vitamin B7 (Biotin)           | Meat, dairy products, eggs
1934 | Vitamin B6 (Pyridoxine)       | Meat, dairy products
1936 | Vitamin B3 (Niacin)           | Meat, grains
1941 | Vitamin B9 (Folic acid)       | Leaf vegetables
It is easy to see how all of this got people thinking about vitamins as an almost mystical cure-all. Even instructional US government films included lines like "The Nazis are supposed to have a superman vitamin pill." Of course, the mystique of the vitamin combined with the marketing power of television presented huge profit potential.




After vitamins, food advertisements loved to talk about 'energy.' They were always vague about exactly what they meant by the word but they wanted you to know their products were packed with it.



Companies even promoted dextrose as healthy.



It's easy to mock, but you have to remember how fast nutritional science was advancing. We've had decades to process this information and I'm not sure we're doing that much better.

Tuesday, June 2, 2015

Dining tip – when eating out with Maureen Dowd, don't share an appetizer

Given her attitude toward people in the service industry, I would be nervous about what goes on in the kitchen.

Brad DeLong handles today's garbage collection.

Why Don't New York Times Writers Possess Any Awareness of Their Presentation-of-Self?

Before we talk about 21st Century technology...

...we need to spend some time thinking about the 20th Century technology that got us here.
A controlled-access highway provides an unhindered flow of traffic, with no traffic signals, intersections or property access. They are free of any at-grade crossings with other roads, railways, or pedestrian paths, which are instead carried by overpasses and underpasses across the highway. Entrances and exits to the highway are provided at interchanges by slip roads (ramps), which allow for speed changes between the highway and arterial roads and collector roads. On the controlled-access highway, opposing directions of travel are generally separated by a median strip or central reservation containing a traffic barrier or grass. Elimination of the sources of potential conflicts with other directions of travelers dramatically improves safety, fuel consumption, and travel times.
It turns out that many of the innovations mid-20th Century engineers came up with to maximize throughput (controlled access, divided lanes, etc.) also tend to make the jobs of today's engineers much easier when it comes to autonomous vehicles. This relationship between technologies will be a big factor when we get back to this.

Monday, June 1, 2015

Jack Shafer, poster child [repost]

You might have heard about Bruce Bartlett's recent paper on the effect that Fox News has had on the Republican Party. I don't entirely agree but it makes some interesting points about the way information flow affects politics.

The paper raises serious questions not just about Fox but about the entire journalistic ecosystem. For a long time and for a variety of reasons, major mainstream outlets like the New York Times and Politico have been reluctant to call Fox out on its most egregious lapses (these days, the news media is pretty much one long row of glass houses).

It's not surprising that Politico media apologist Jack Shafer took umbrage. If you really want to read Shafer's comments, you can find a link in this piece by Eric Boehlert, but, before you do, I would recommend that you take a few minutes to review Shafer's rather... flexible standards for journalistic quality and ethics.



More on journalistic tribalism
from WEDNESDAY, APRIL 3, 2013

Having brought up the charge in a previous post, I should probably take a minute to spell out exactly what I'm talking about. I'm using a very broad reading of the term 'tribalism' (perhaps so broad I should say something like 'tribalism and other social psych phenomena'). The traits I'm thinking of include:

1. Us/them mentality;

2. Excessive reliance on in-group social norms;

3. Deferring to and preserving hierarchies;

and as a consequence

4. A tendency to use different standards to judge interactions based on the relative positions of the parties.

There is inevitably going to be a degree of subjectivity when deciding who goes where in the hierarchy, but I think it's fairly safe to say that Maureen Dowd and (till his death) Michael Kelly were in the innermost circle with writers like David Brooks and most prominent, established Washington and, to a lesser degree, New York journalists fairly close.

In this tribal model, it makes perfect sense that Politico would view Chris Hughes' (outsider) request for a small change in the copy of Timothy Noah (insider) as a major affront. It also explains Politico's attacks on Nate Silver (outsider) when his work started making established pundits (insiders) look bad.

The press corps's treatment of Al Gore in 2000 is another case in point. Following the lead of Dowd and Kelly and reinforced by a general dislike of the candidate, the group quickly established social norms that justified violating the most basic standards of accuracy and fairness.

The poster child for this kind of journalistic tribalism is Jack Shafer, or at least he was a few years ago when I was first experimenting with blogging. One of my main topics was the press's inability to face up to its problems and Shafer was the gift that kept on giving (I haven't read him much since). That blog is gone now but I still have my notes so here are some highlights.

Shafer was openly disdainful of readers and generally dismissive of their interests, which is an extraordinary starting point for a journalism critic. Consider this passage from the aptly named "Why I Don't Trust Readers":
I'm all for higher standards, but I draw the line when journalists start getting more complaints about less serious professional lapses. Serious: Plagiarism, willful distortion, pattern of significant errors, bribe-taking. Not serious: campaign donations in the low three-figures for reporters distant from that beat; appearance of conflict of interest; a point of view; friendships with the rich and powerful.
First, notice the first item on the list. Plagiarism is certainly a serious offense, but the other serious offenses are the sort of things that can destroy people's lives, conceal crimes and enable corruption. Even more interesting is what didn't make the list: unintentional distortion due to laziness or bias; patterns of minor errors; isolated cases of serious errors due to negligence; selective reporting (as long as it doesn't rise to the level of distortion); failure to dig into important aspects of a story; cozy relationships with subjects as long as it doesn't involve the quid pro quo of a bribe.

What's important here is the victimology. In plagiarism, the primary victim is a fellow journalist. In all of these other cases, the primary victim is either the subject or the reader. Shafer was a tribalist, and his main objective was almost always the defense of his tribe and its hierarchy.

There's a remarkable inverse correlation between the rank of Shafer's subjects and the harshness with which he treats them.  This is particularly apparent when different subjects of the same article have different positions. Shafer provided an excellent example when he wrote a post complaining about liberals writing books that actually called conservatives liars in the titles.

The books were Al Franken's Lies and the Lying Liars Who Tell Them, Joe Conason's Big Lies, and David Corn's The Lies of George W. Bush. Of these three, Conason was something of a pariah (Shafer dismissed him as a Clinton apologist) and Franken was clearly a journalistic outsider. Corn, on the other hand, was very much an insider in the Washington press corps (Shafer even described him as a friend in the post).

Under these circumstances, it's not surprising that Shafer finds a way to shield Corn from much of the blast.
This criticism applies more to Franken and Conason than it does Corn—you can't expect a book about Bush's lies to also be about Clinton's lies. And Corn acknowledges in his intro that Bush isn't the first White House liar and that Clinton lied, too. 
Of course, you could easily make a similar but more persuasive argument on Franken's behalf. Lies was largely focused on the relationship between the GOP and conservative media, and since the book was published in 2003, when there was no Air America and MSNBC was just starting to experiment with liberal programming, there was no way to provide similar examples on the left. Just to be clear, I'm not making that argument; I'm only saying that it's just as viable as the one Shafer makes for Corn.

For an even more dramatic bit of paired data, consider two obituaries Shafer wrote, separated by only a few months. The first was for Walter Annenberg, best known as a philanthropist and founder of TV Guide. The second was for Michael Kelly, journalist and former editor of the New Republic. Once again there's a clear hierarchical distance between the subjects: Annenberg, though decades earlier a power in publishing and to his death a major force in philanthropy, was not a journalistic insider; Kelly, on the other hand, was about as inside as you can get.

As you've probably guessed by now, Shafer's approach to these two obituaries differs sharply. Though they don't fully capture the difference, the epitaphs give a good indication of the respective tones:

Michael Kelly: "Husband. Father. Journalist"

Walter Annenberg: "Billionaire Son of Mobster, Enemy of Journalism, and Nixon Toady Exits for Hell—Forced To Leave Picassos and van Goghs at Metropolitan Museum."

The contrast is sharpest when Shafer addresses journalistic scandals and cozy relationships with controversial right wing politicians, areas where there are definite parallels between the two men. Shafer actually explains away the New Republic/Glass scandal as an instance of Kelly being too loyal for his own good.

Shafer often judges figures on the periphery of the journalistic establishment based on a much higher standard than "Plagiarism, willful distortion, pattern of significant errors, bribe-taking." For someone like Larry King, a few disputable errors and minor discrepancies (such as changing the date of an incident from 1972 to 1971 when retelling an anecdote) merit an entire column. (It's worth noting that this column ran in the middle of 2009, a period when the coverage of politics, the economy and the European crisis were raising all sorts of journalistic questions, questions that didn't get a lot of space in Shafer's column. This raises the issue of trivialism in media criticism -- see On the Media for a myriad of examples -- but that's a topic for another thread.)

If marginal figures committing minor offenses are treated harshly by Shafer, what happens when someone at the top of the hierarchy does something that Shafer normally considers a serious offense like plagiarism? We got an answer to that one when Maureen Dowd was caught lifting a passage from Josh Marshall.

Here's her explanation in Bloggasm:

“i was talking to a friend of mine Friday about what I was writing who suggested I make this point, expressing it in a cogent — and I assumed spontaneous — way and I wanted to weave the idea into my column. but, clearly, my friend must have read josh marshall without mentioning that to me. we’re fixing it on the web, to give josh credit, and will include a note, as well as a formal correction tomorrow.”
And here Shafer explains why it's not so bad:
1. She responded promptly to the charge of plagiarism when confronted by the Huffington Post and Politico. (Many plagiarists go into hiding or deny getting material from other sources.)

2. She and her paper quickly amended her column and published a correction (although the correction is a little soft for my taste).

3. Her explanation of how the plagiarism happened seems plausible—if a tad incomplete.

4. She's not yet used the explanation as an excuse, nor has she said it's "time to move on."

5. She's not yet protested that her lifting wasn't plagiarism.

6. She's taking her lumps and not whining about it.
And here was my response at the time:
1. 'Responded.' Not to be confused with 'confessed,' 'owned up,' 'took responsibility,' or any phrase that uses a form of the word 'plagiarism.'
2. "[A] little soft"?
3. Yeah, near verbatim quotes make it through convoluted processes all the time.
4. "[M]y friend must have read josh marshall without mentioning that to me." -- What exactly would an excuse look like?
5. No, she just implied it wasn't plagiarism. That definitely gives her the moral high ground.
6. What a trooper.
(I apologize for the tone. I was in a snarky phase, but I'm trying to play nicer these days.)

I've spent a lot of time on Shafer because he's a good example, I was familiar with his work, and, as a media critic, he has an important role in journalism's self-correction process, but he is not an isolated case, nor is he the worst of the bunch (particularly not since the rise of Politico).

The point of all this is that journalism has a problem with tribalism and other social dynamics. These things are affecting objectivity, credibility, and quality. What's worse, journalists seem to have internalized the underlying mindset to such a degree that most of them don't even realize what's going on.

When the data runs contrary to what people claim

This is Joseph.

We all know that correlation is not causation. But one theme that Mark likes to bring up is that when the data on an association show the reverse of the claimed causal mechanism, there is a high burden of explanation. One very good example was brought up by Noah Smith. There is often a causal assumption that big government impedes economic growth. The problem? Rich countries appear to have big governments:
Are we supposed to believe that rich countries are rich in spite of the fact that they all have big governments? Should we believe that government is a parasite that always, without fail, finds a host in the body politic of every single country that reaches first-world status?

Or should we conclude that big government is a necessary ingredient for countries to get rich?
The plausible mechanisms for this vary -- including the nontrivial point that losing wars is bad for a country (just ask the Huron Indians, if you can find any). But it is not a small point that the empirical relationship is the opposite of what proponents of small government propose (nobody says "let's shrink the government to reduce the level of prosperity in our country and give other countries a chance to experience being rich instead").

What also bothers me about this discussion is that the debate over the size of government is mostly orthogonal to the debate over the efficiency of government. No matter what size government happens to be, I want it to use resources efficiently. I am more worried by the political burden placed on improving infrastructure by things like environmental review of bike lanes. That is an inefficient process that wastes resources, and it doesn't matter whether we spend a lot of money on infrastructure or just a little; it is simply a bad way to do things. I have the same opinion of universal health care (say, the Canadian or the British approach). It is not that I hate markets, but that there are compelling theoretical efficiency advantages to having the state handle this form of risk pooling. Heck, there is evidence that workers still covered by their parents' insurance (thank you, ACA) are able to be more flexible employees. That is an efficiency gain for employers like Uber, and potentially an important one as we transition into a more flexible workforce.

As for what the real issues are behind the push for small government, well, that is complex and we'll need to wait for a follow-up post. 

Saturday, May 30, 2015

It is a legitimately good sign when a transportation story uses the term “maximum throughput”

Tom Vanderbilt writing for the Washington Post:
But as good as fast-moving roads might be for the individual driver, they are not the best for the most drivers. As data gleaned from in-pavement “loop detectors” on Washington state highways showed, those highways were able to achieve “maximum throughput” — pushing the most cars through one segment of road in a given time — at speeds that were roughly 80 percent of the posted speed limit of 60 mph. Why? At higher speeds, drivers need to allow more “headway” between vehicles, meaning more space is required per vehicle. And faster-moving traffic tends to break down more quickly, with more severe “shock waves”; it takes a lot longer to recover from a traffic jam than to get into one. I have been told, anecdotally, by traffic engineers that the left-hand “passing lane” can become congested first. (I’ll leave it to you to decide if karmic justice is at work there.)

In a “speed harmonization” experiment on Colorado’s I-70, the state highway patrol was able to improve traffic flow by enforcing, via phalanxes of patrol cars, 55 mph speeds — thus preventing the instability caused by people driving fast into packs of congested, slower-moving vehicles.
This goes along with an idea I had about having a fleet of driverless cars use real-time data to selectively slow traffic in some or all lanes in order to reduce the accordion effect. If nothing else, it would be an interesting project to model.
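The throughput idea is simple enough to sketch as a toy model. Assume each vehicle occupies its own length plus a following gap that grows with speed (a linear time-headway term plus a quadratic buffer for the extra space drivers leave at higher speeds). All the constants below are made-up illustrative values, not the Washington state loop-detector data.

```python
# Toy flow model: throughput (vehicles/hour/lane) = speed / spacing,
# where per-vehicle spacing grows faster than linearly with speed.
def throughput(v, car_len=5.0, t_headway=1.0, alpha=0.011):
    """v in m/s; spacing in meters per vehicle."""
    spacing = car_len + t_headway * v + alpha * v ** 2
    return 3600.0 * v / spacing

speeds = [v / 10 for v in range(10, 400)]  # 1.0 to 39.9 m/s
best = max(speeds, key=throughput)
print(best)  # flow peaks around 21 m/s, well below 27 m/s (~60 mph)
```

In this model the peak sits at sqrt(car_len / alpha), so throughput is maximized below the posted limit whenever the quadratic term matters -- the same qualitative story as the loop-detector data, even though the numbers here are invented.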

Thanks to Mike for the link.

Friday, May 29, 2015

A statistician walks into a grocery store... repeatedly




Things have gotten busy over at the food blog. The plan was always to go hot and heavy for a month or two. With almost one hundred posts either up or in the queue, I think we're getting there.

The content is a mix of new pieces (like the recent one on chickenless chicken and black bean soup), reposts, links to recent news of interest...





And lots of historical pieces. Some, like this WWII film (highly recommended for anyone interested in the history of nutrition), focus on how we think about food.



Others (Frankenberry, Count Chocula and the late Forry Ackerman) focus on how we have marketed it.



Check it out if you get a chance and please share anything you think is deserving. We've had very little progress on the SEO front.




Adventures in Ithuvania

There's a wonderful Far Side cartoon that shows two scientists addressing a man sitting behind a desk in a sumptuous office. The lead scientist says:

"Sorry, your highness, but you're really not the dictator of Ithuvania, a small European republic. In fact, there is no Ithuvania. The hordes of admirers, the military parades, this office -- we faked it all as an experiment in human psychology. In fact, your highness, your real name is Edward Belcher, you're from Long Island, New York, and it's time to go home, Eddie."

Sometimes, when I come across yet another bit of jaw-dropping flakiness from some tech-bubble billionaire, my thoughts turn to Ithuvania. What if this were an experiment? What if some well-funded research organization decided to see what would happen if it randomly selected individuals of average intelligence, handed them huge checks and told them they were super-geniuses?

I'm not saying that's what happened; I'm just saying the results would have been awfully damned similar.

From Wired:

THE SEASTEADING INSTITUTE was the toast of tech entrepreneurs when it received financial backing from venture capitalist Peter Thiel in 2008. Its mission was to build a manmade island nation where inventors could work free of heavy-handed government interference. One early rendering shows an island raised on concrete stilts in eerily calm waters. The buildings atop the platform resemble nothing so much as the swanky tech campus of an entrepreneur’s ultimate dream: No sign of land or civilization in sight. The island, despite appearing strapped for square footage, has room for a full-size swimming pool with deck lounges.

In a 2009 essay, Thiel described these island paradises as a potential “escape from politics in all its forms.” It wasn’t just desirable, he said. It seemed possible. “We may have reached the stage at which it is economically feasible, or where it will soon be feasible,” he wrote.

More than a half-decade later, the dream has yet to be realized. And optimism is starting to waver. Earlier this year, during a talk at George Mason University, Thiel said, “I’m not exactly sure that I’m going to succeed in building a libertarian utopia any time soon.” Part of the problem: A truly self-sufficient society might exceed the range even of Thiel’s fortune. “You need to have a version where you could get started with a budget of less than $50 billion,” he said.

For its part, The Seasteading Institute has also come to appreciate that the middle of the ocean is less inviting than early renderings suggest. It now hopes to find shelter in calmer, government-regulated waters. According to its most recent vision statement, “The high cost of open ocean engineering serves as a large barrier to entry and hinders entrepreneurship in international waters. This has led us to look for cost-reducing solutions within the territorial waters of a host nation.”

Thiel’s reassessment marks a clear departure from tech culture’s unflinching confidence in its ability to self-govern. In recent years a number of prominent entrepreneurs have urged Silicon Valley to create a less inhibited place for its work. Larry Page called on technologists to “set aside a small part of the world” to test new ideas. Elon Musk has aimed at colonizing Mars. And venture capitalist Tim Draper made a proposal to divide Silicon Valley into its own state. But aside from the continued growth of full-service tech campuses such as Google’s and Facebook’s, very little has been accomplished in the way of true societal independence.

Building a government, it turns out, is a more complex challenge than much of Silicon Valley would have you believe. Now, Thiel and other high-profile Silicon Valley investors are carefully taking stock of the anti-government view they helped popularize. For all Thiel’s open criticism of elected officials, he sounded remarkably like a politician recanting false promises on the stage at George Mason. Toward the end of the talk, he reflected for a moment on his early essay on seasteading. “Writing is always such a dangerous thing,” he said. “It was late at night. I quickly typed it off.”


Thursday, May 28, 2015

The actual future of the driverless vehicle

I've been fairly skeptical of the likelihood of seeing driverless cars in the near future, partially because the economic case was weak, but primarily because the data infrastructure was prohibitively expensive, at least if you wanted a car that could go anywhere (and that flexibility is one of the main reasons for having a car).

Driverless trucks are another story. There you have vehicles with significant labor costs that mostly travel a relatively small number of routes.  Though Daimler is obviously downplaying the possibility, I would not be at all surprised to hear that fully autonomous trucks will be operating in at least a few countries in the next decade.

From NPR.
Automaker Daimler unveiled a truck last week that drives itself, called the Freightliner Inspiration. But the truck is not yet entirely autonomous.

"You still have the driver in the driver's seat, ready to take over at all times in case something goes wrong or the system encounters a situation it's not really well prepared for," says Alex Davies, associate editor for Wired, who was at the demonstration and rode in the big rig.

The driver controls the rig on surface roads, but on the highway, engages the autopilot mode. Cameras detect the lane lines to keep the truck headed in the right direction, Davies tells NPR's Rachel Martin.

"Then from there on, the truck will stay in its lane, maintain its speed and a safe distance from other vehicles," he says. "But you still need to be in the seat, ready to take over."

And being ready to take over means the driver can't exactly take a nap.

When it's time for the driver to take over — at the correct exit or if bad weather hits — the truck alerts the driver with a beep. If the driver doesn't respond, the truck slows and eventually comes to a complete stop, Davies says.

Daimler says the Inspiration, the first self-driving truck licensed to roll on public roads — highways and interstates in Nevada — is the future of trucking and may hit the market before autonomous cars, according to the Associated Press. Drivers will still be human, but might be called "logistics managers."

"The human brain is still the best computer money can buy," said Daimler Trucks North America LLC CEO Martin Daum.

Davies says no automaker will ever use the term "driverless" for a vehicle, preferring the safer-sounding "autonomous" or, in the case of the Freightliner Inspiration, "piloted."

Wednesday, May 27, 2015

"Inequality and Urbanism"

When time permits, I want to get back to this.

From Paul Krugman:
First, when it comes to things that make urban life better or worse, there is absolutely no reason to have faith in the invisible hand of the market. External economies are everywhere in an urban environment. After all, external economies — the perceived payoff to being near other people engaged in activities that generate positive spillovers — is the reason cities exist in the first place. And this in turn means that market values can very easily produce destructive incentives. When, say, a bank branch takes over the space formerly occupied by a beloved neighborhood shop, everyone may be maximizing returns, yet the disappearance of that shop may lead to a decline in foot traffic, contribute to the exodus of a few families and their replacement by young bankers who are never home, and so on in a way that reduces the whole neighborhood’s attractiveness.

On the other hand, however, an influx of well-paid yuppies can help support the essential infrastructure of hipster coffee shops (you can never have too many hipster coffee shops), ethnic restaurants, and dry cleaners, and help make the neighborhood better for everyone.

What does history tell us? Politically, I’d like to say that inequality is bad for urbanism. That’s far from obvious, however. Jane Jacobs wrote The Death and Life of Great American Cities right in the middle of the great postwar expansion, an era of widely shared economic growth, relatively equal income distribution, empowered labor — and collapsing urban life, as white families fled the cities and a combination of highway building and urban renewal destroyed many neighborhoods.

And when a partial urban revival began, it was arguably associated with forces driven by or associated with rising inequality. Affluent types in search of a bit of cool — probably 5 percenters rather than 1 percenters, and more or less David Brooks’s Bobos (bourgeois bohemians) drove gentrification and revival in urban cores; in New York, at least, large number of poorly paid but striving immigrants drove the revival of outer borough neighborhoods like Jackson Heights or Brighton Beach.

"...pretty soon you're talking real money"

Every now and then, it is useful to step back and remind ourselves just how much money is changing hands due to the education reform movement.
For the first time in three decades, a new company is poised to develop and administer the state-required exams Texas students begin taking in the third grade.

The state is in negotiations with Educational Testing Service, or ETS, to take over the bulk of the four-year, $340 million student assessment contract, the Texas Education Agency announced Monday. Company Vice President John Oswald said ETS is "privileged and honored" to land the work. Final contracts are still being negotiated.

The London-based Pearson Education has held the state's largest education-related contract — most recently, a five-year, $468 million deal to provide state exams through 2015 — since Texas began requiring state student assessments in the 1980s. Under the new agreement, the company would still develop the state's assessments designed for special needs and foreign students. That portion of the contract is worth about $60 million.  
Of course, as Diane Ravitch points out, Texas doesn't seem to have cut itself the best deal.
Here is the puzzling question: Why did it cost $468 million for a five-year contract with Pearson when New York State pays Pearson “only” $32 million for a five-year contract? Does New York have smarter negotiators? Does Pearson have better lobbyists in Texas than in New York? Does New York get Texas’s used questions? True, Texas has more children than New York, but not 15 times more. Can anyone explain?

Tuesday, May 26, 2015

Three from our war on science correspondent...

... Mike the Mad Biologist.

Justin Pidot writing for Slate.
Imagine visiting Yellowstone this summer. You wake up before dawn to take a picture of the sunrise over the mists emanating from Yellowstone hot springs. A thunderhead towers above the rising sun, and the picture turns out beautifully. You submit the photo to a contest sponsored by the National Weather Service. Under a statute signed into law by the Wyoming governor this spring, you have just committed a crime and could face up to one year in prison.

Wyoming doesn’t, of course, care about pictures of geysers or photo competitions. But photos are a type of data, and the new law makes it a crime to gather data about the condition of the environment across most of the state if you plan to share that data with the state or federal government. The reason? The state wants to conceal the fact that many of its streams are contaminated by E. coli bacteria, strains of which can cause serious health problems, even death. A small organization called Western Watersheds Project (which I represent pro bono in an unrelated lawsuit) has found the bacteria in a number of streams crossing federal land in concentrations that violate water quality standards under the federal Clean Water Act. Rather than engaging in an honest public debate about the cause or extent of the problem, Wyoming prefers to pretend the problem doesn’t exist. And under the new law, the state threatens anyone who would challenge that belief by producing information to the contrary with a term in jail.


John Timmer writing for Ars Technica
Yesterday, by a party-line vote, Republicans in the House Committee on Science, Space, and Technology approved a budget authorization for NASA that would see continued spending on Orion and the Space Launch System but slash the agency's budget for Earth sciences. This vote follows the committee's decision to cut the NSF's geoscience budget and comes after a prominent attack on NASA's Earth sciences work during a Senate hearing, all of which suggests a concerted campaign against the researchers who, among other things, are telling us that climate change is a reality.


Emily DeMarco writing for Science Insider.
Representative Louie Gohmert (R–TX) is worried that scientists employed by the U.S. government have been running roughshod over the rights of Americans in pursuit of their personal political goals. So this week Gohmert, the chair of the oversight and investigations subpanel of the U.S. House of Representatives’ Natural Resources Committee, held a hearing to explore “the consequences of politically driven science.” Notably absent, however, were any scientists, including those alleged to have gone astray.

“The purpose of this hearing is to hear from real people, mammals called human beings that have been harmed by the federal government,” Gohmert said in opening the 29 April hearing, which featured testimony from three Republican-called witnesses on alleged misdeeds by researchers with the U.S. Fish and Wildlife Service (FWS) and National Park Service (NPS).
I'm trying to cut back on mean and snarky comments so ...

Monday, May 25, 2015

Really cheap chickenless chicken and black-bean soup

I've got another post at the food blog on how you can actually live on a food stamp budget.


The New York Times has a class problem -- "Poor Little Rich Women"

Actually, it's not just a NYT problem -- it extends throughout the media -- but it has gotten particularly embarrassing with what is supposed to be our best paper. The NYT has always identified with the upper classes ("It does not soil the breakfast cloth") but recently it has come to focus on and empathize with the one percent to such a degree that the identification is distorting its journalistic judgement.

We've seen irony-free articles about how hard it is getting by on 300K and op-eds on how unfair it is when wealthy students at top prep schools have to settle for safety schools because their SATs are too low.

And now the New York elite have their own personal anthropologist in Wednesday Martin:
A social researcher works where she lands and resists the notion that any group is inherently more or less worthy of study than another. I stuck to the facts. The women I met, mainly at playgrounds, play groups and the nursery schools where I took my sons, were mostly 30-somethings with advanced degrees from prestigious universities and business schools. They were married to rich, powerful men, many of whom ran hedge or private equity funds; they often had three or four children under the age of 10; they lived west of Lexington Avenue, north of 63rd Street and south of 94th Street; and they did not work outside the home.

Instead they toiled in what the sociologist Sharon Hays calls “intensive mothering,” exhaustively enriching their children’s lives by virtually every measure, then advocating for them anxiously and sometimes ruthlessly in the linked high-stakes games of social jockeying and school admissions.

Their self-care was no less zealous or competitive. No ponytails or mom jeans here: they exercised themselves to a razor’s edge, wore expensive and exquisite outfits to school drop-off and looked a decade younger than they were. Many ran their homes (plural) like C.E.O.s.

It didn’t take long for me to realize that my background in anthropology might help me figure it all out, and that this elite tribe and its practices made for a fascinating story.
Martin's observations are "fascinating" only if you start with a strong proclivity to find the lives of the New York elite interesting and important. For the rest of us, there is little here that is notable or even all that surprising.

For example, you would expect a great deal of time and money to be spent on maintaining personal appearance given the wealth and leisure, the see-and-be-seen culture, and, to be blunt, the fact that some of these women are married to rich older men who demanded highly restrictive prenups and who have shown no compunction about dumping spouses.

Nor does the explicit and rather cold-blooded talk of wife bonuses come as a shock to anyone. Given the culture of the industries that produce the elite of New York, it would be rather strange if the language of these industries did not make it into day-to-day life.

Martin clearly wants to imbue her subjects with an importance and dignity not shared by the millions of other stay-at-home spouses around the country. This, along with other embarrassing traits of New York Times journalism, is beautifully illustrated when she talks about her subjects as "CEOs" of households.

Anyone who follows financial reporting will have encountered the big swinging check syndrome: a questionable or downright bad business decision will be lauded as bold and brilliant because it involves a stunning amount of money. In this article, the fact that these spouses are managing households with large budgets is seen, in and of itself, as an accomplishment. (The real accomplishment is doing the opposite.) Adding to the embarrassment, all of this comes on the heels of a slew of stories about well-to-do New Yorkers who are comically bad at household finance.

As mentioned before, this is an ongoing concern. It's not just that the press tends to see the world through an increasingly homogeneous lens -- top quartile, Ivy League, Northeastern, largely white -- but that the press does not realize how different the world looks to the rest of us.

Friday, May 22, 2015

I haven't made up my mind whether Mike is onto something with the "gentry class" designation

But it's definitely an idea worth discussing.

To return to [David] Leonhardt, this is the key point:
There is a good debate about what to call families who don’t have enormous wealth but who also make much more than most Americans. There is also a good debate about whether any changes to tax policy — the background to Josh’s article — should involve sacrifices from that group.
In the course of the debate, though, let’s at least remain cleareyed about the fact that six figures of income is not the 21st-century American norm.
[I'm not much of a Leonhardt fan, but he makes some good points in the original piece. MP]
I suggest we call them the gentry. They’re not middle-class (whether it be the upper or lower reaches), since they can live very differently from (or, perhaps, better than) most of us. They can have most of the nice things. At the same time, they’re not wealthy or flat-out rich: if they don’t work, they can fall down the ladder, sometimes very quickly. Living comfortably or well with the interest on investments isn’t an option.
The reason I refer to this group as the gentry is, in part, it’s the group that’s responsible for gentrification in urban areas (no gentry, no gentrification), so it seems to fit. The other reason is to intentionally invoke the Victorian notion of the word. The gentry, whether it be a more religious, conservative style, or a more liberal, less traditional style, definitely has specific class interests (boldface mine):
Yet the upper middle class collectively wields far more influence. These are households with enough money to make modest political contributions, enough time to email their elected officials and to sign petitions, and enough influence to sway their neighbors. Upper-middle-class Americans vote at substantially higher rates than those less well-off, and though their turnout levels aren’t quite as high as those even richer than they are, there are far more upper-middle-class people than there are rich people….
Another thing that separates the upper middle class from the truly wealthy is that even though they’re comfortable, they’re less able to take the threat of tax increases or benefit cuts in stride. Take away the mortgage interest deduction from a Koch brother and he’ll barely notice. Take it away from a two-earner couple living in an expensive suburb and you’ll have a fight on your hands. So the upper middle class often uses its political muscle to foil the fondest wishes of egalitarian liberals.
…part of my objection is that upper-middle-income voters only oppose tax hikes on themselves. They are generally fine with raising taxes on people richer than themselves, including taxes on the investments that rich people make in new products, services, and businesses. I find that both annoyingly self-serving and destructive. The bigger reason, however, is that upper-middle-class people don’t just use their political muscle to keep their taxes low. They also use it to make life more expensive for everyone else.
…You’d almost get the impression that while working- and lower-middle-class people are expected to compete, whether with the Ubers of the world or with Chinese manufacturing workers or with immigrants with modest skills, members of the upper middle class ought to be immune.
Leaving aside Salam’s conservative take on things (very different from the Mad Biologist!), he’s on to something, though I would argue what Salam calls upper-middle class is actually a gentry, with very clear economic interests. While there can be cultural and regional disagreements, en masse, they are quite coherent.
While the U.S. pretends to be a class-free society (even as U.S.-ians spend an inordinate amount of time making very subtle class distinctions), the reality is we do have classes with specific economic interests.
It’s time we realize we have a gentry.

Thursday, May 21, 2015

Fake Data in Political Science

This is Joseph.

One of the most challenging things in population research is the need to trust the data and analysis done by other groups.  Unlike chemists, we cannot simply replicate experiments without incurring enormous expense.  Furthermore, the population is getting less and less responsive to surveys.  In a very real sense, endlessly replicating strong and clean results would partially displace other research questions.  After all, people have a limited tolerance for surveys.  This goes double for high-burden approaches, such as door-to-door interviews and interventions (which require trained and paid field agents to conduct the survey with a high degree of professionalism, often for limited wages).

This need to trust, where possible, makes stories like this one painful.  Full credit to the field for detecting the problem, and I am glad that academia was self-correcting.  But these actions have pretty strong consequences.

I also think there is a very important difference between a technical error, misunderstanding of data, and completely making data up.  The first two are the cases that give every analyst nightmares.  But the last seems to have no excuses at all -- how could somebody not know that they were faking data? 

That said, it's not like medicine is innocent (as Thomas Lumley points out) and medical research probably has a lot more direct potential to cause harm (as patient concerns about this "treatment is not working" will be dismissed in the face of "randomized controlled trial" "evidence"). 

EDIT: and how could I overlook Andrew Gelman's take on this (which is right in his area)?

Blast from the past -- The curse of large numbers and the real problem with p-values

Following up on Joseph's recent piece.

[Originally posted MARCH 22, 2010]

(Some final thoughts on statistical significance)

The real problem with p-values isn't just that people want them to do something they can't do; people want a single number to do what no single number can ever do: fully describe the quality and reliability of an experiment or study. This simply isn't one of those mathematical beasts that can be reduced to a scalar. If you try, then sooner or later you will inevitably run into a situation where you get the same metric for two tests of widely different quality.

Which leads me to the curse of large numbers. Those of you who are familiar with statistics (i.e. pretty much everybody who reads this blog) might want to skip the next paragraph, because this goes all the way back to stat 101.

Let's take the simplest case we can. You want to show that the mean of some group is positive, so you take a random sample and calculate the probability of getting the results you saw or something more extreme (the probability of getting exactly the results you saw is pretty much zero), working under the assumption that the mean of the group is actually zero. This works because the bigger the samples you take, the more the means of those samples will tend to follow a nice smooth bell curve, and the closer those means will tend to group around the mean of the group you're sampling from.

(For any teachers out there, a good way of introducing the central limit theorem is to have students simulate coin flips with Excel then make histograms based on various sample sizes.)
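The same demonstration can be sketched in a few lines of Python instead of Excel (a sketch of my own, not part of the original classroom suggestion): draw many samples of coin flips at each sample size and watch the spread of the sample means tighten around the true mean of 0.5.

```python
import random
import statistics

random.seed(1)

def mean_of_flips(n):
    """Mean of n fair coin flips (1 = heads, 0 = tails)."""
    return sum(random.randint(0, 1) for _ in range(n)) / n

# For each sample size, draw 1,000 sample means and measure how
# tightly they cluster around the true mean of 0.5.
for n in (3, 20, 200):
    means = [mean_of_flips(n) for _ in range(1000)]
    print(f"n={n:3d}  sd of sample means = {statistics.stdev(means):.3f}")
```

Plotting histograms of `means` at each sample size gives the bell curves the central limit theorem promises.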

You might think of sampling error as the average difference between the mean of the group you're interested in and the mean of the samples you take from it (that's not exactly what it means, but it's close). The bigger the sample, the smaller you expect that error to be, which makes sense. If you picked three people at random, you might get three tall people or three millionaires, but if you pick twenty people at random, the chances of getting twenty tall people or twenty millionaires are next to nothing.

The trouble is that sampling error is only one of the things a statistician has to worry about. The sampled population might not reflect the population you want to draw inferences about. Your sample might not be random. Data may not be accurately entered. There may be problems with aliasing and confounding. Independence assumptions may be violated. With respect to sample size, the biases associated with these problems are all fixed quantities. A big sample does absolutely nothing to address them.

There's an old joke about a statistician who wakes up to find his room on fire, says to himself "I need more observations" and goes back to sleep. We do spend a lot of our time pushing for more data (and, some would say, whining about not having enough), but we do that not because small sample sizes are the root of all of our problems but because they are the easiest problem to fix.

Of course "fix" as used here is an asymptotic concept and the asymptote is not zero. Even an infinite sample wouldn't result in a perfect study; you would still be left with all of the flaws and biases that are an inevitable part of all research no matter how well thought out and executed it may be.

This is a particular concern for the corporate statistician who often encounters the combination of large samples and low quality data. It's not unusual to see analyses done on tens or even hundreds of thousands of sales or customer records and more often than not, when the results are presented someone will point to the nano-scale p-value as an indication of the quality and reliability of the findings.
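To make that concrete, here is a minimal simulation sketch (the selection mechanism and all numbers are mine, purely illustrative): the true population mean is exactly zero, but a fixed bias in how records are collected -- say, the most dissatisfied customers never appear in the data -- produces a nano-scale p-value once the sample gets big enough.

```python
import random
import statistics
from statistics import NormalDist

random.seed(2)

# True population: mean exactly 0. But the collection process is biased --
# observations below -2 never make it into the data set.
def biased_sample(n):
    out = []
    while len(out) < n:
        x = random.gauss(0, 1)
        if x > -2:          # fixed distortion; no sample size fixes it
            out.append(x)
    return out

# A simple z-test for "mean = 0" at two sample sizes.
for n in (100, 100_000):
    xs = biased_sample(n)
    z = statistics.mean(xs) / (statistics.stdev(xs) / n ** 0.5)
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    print(f"n={n:6d}  mean={statistics.mean(xs):+.3f}  p={p:.2e}")
```

The bias shifts the observed mean by a small fixed amount; at n = 100 the test usually can't see it, while at n = 100,000 the p-value is microscopic -- and wrong, since the true mean is zero.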

As far as I know, no one reviewing for a serious journal would think that p < 0.001 means we are 99.9% sure that a conclusion is true, but that's what almost everyone without an analytic background thinks it means.
And that is a problem.

Wednesday, May 20, 2015

Very good statistics post (p-value edition)

This is Joseph (and I located the piece through Economist's View)

This article on p-values was a very interesting read.  The author (who teaches statistics) has a very nice discussion of p-values:
One reason for this, I think, is that we fail to teach well how, with enough data, any non-zero parameter or difference becomes statistically significant at arbitrarily small levels. The proverbial expression of this, due I believe to Andy Gelman, is that "the p-value is a measure of sample size". More exactly, a p-value generally runs together the size of the parameter, how well we can estimate the parameter, and the sample size. The p-value reflects how much information the data has about the parameter, and we can think of "information" as the product of sample size and precision (in the sense of inverse variance) of estimation, say n/σ2. In some cases, this heuristic is actually exactly right, and what I just called "information" really is the Fisher information.
But I found this way of talking about p-values to be extremely useful, and something that should be kept in mind in epidemiology -- where a significant association estimated from a big sample with a small effect can often be uninteresting.  You never reduce bias to zero in a real observational study, and interventions rarely remove an association entirely (since not everyone changes behavior and mitigation is often partial).  In the era of big data, this becomes important.
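The "p-value is a measure of sample size" heuristic is easy to see with a back-of-the-envelope calculation (the effect size below is made up for illustration): hold a substantively tiny effect fixed and let n grow.

```python
from statistics import NormalDist

# A fixed, substantively tiny effect: true mean 0.02 standard deviations.
delta, sigma = 0.02, 1.0

# The z statistic for the sample mean grows like sqrt(n), so the p-value
# for this same unimportant effect shrinks toward zero as n grows.
for n in (1_000, 100_000, 10_000_000):
    z = delta * n ** 0.5 / sigma
    p = 2 * (1 - NormalDist().cdf(z))
    print(f"n={n:>10,}  z={z:6.2f}  p={p:.3g}")
```

Nothing about the effect changed between the first row and the last; only the amount of data did, which is exactly the point about big samples and small, possibly bias-driven effects.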

Fun stuff.  

Double-talk and Mathiness

If you haven't been following the reaction to Paul Romer's Mathiness paper, you should check it out. Here's a key quote:

Academic politics, like any other type of politics, is better served by words that are evocative and ambiguous, but if an argument is transparently political, economists interested in science will simply ignore it. The style that I am calling mathiness lets academic politics masquerade as science. Like mathematical theory, mathiness uses a mixture of words and symbols, but instead of making tight links, it leaves ample room for slippage between statements in natural versus formal language and between statements with theoretical as opposed to empirical content.
Brad DeLong has had some sharp comments, but I think the best reply to Romer's critics came from Romer himself.

If you think that what McGrattan and Prescott do for location is even remotely on the same level as what Solow did for capital or what Becker did for human capital, please go read the two M-P papers (JET 2009, AER 2010.)
When you think you are too stupid to understand what they are saying and want to give up, trust me, it isn’t you. What they are saying makes no sense. No one can understand it. The authors do not understand it.
Here is a sample of what you can expect:
Technology capital is distinguished from other types of capital in that a firm can use it simultaneously in multiple domestic and foreign locations.  (Footnote: In the language of classical general equilibrium theory, a unit of technology capital is a set of technologies, with one technology for each location.) (JET 2009, p. 2455)
“A unit … is a set”? This is just gibberish. Forget about whether the model connects in any meaningful way to the real world. There is no way to make sense of this statement even in the make-believe world of the model. In the model, the authors define technology capital as a cardinal measure. It is supposed to be something that you can have 2 units of, or 4, or 10. What could 2 or 4 or 10 sets of technologies possibly mean?
We assume that the measure of a country’s production locations is proportional to its population, since locations correspond to markets and some measure of people defines a market. (JET 2009 p. 2461)
I feel guilty pulling a quote like this one, as if I’m humiliating some miserable undergraduate by reading to the class from a term paper on a project that fell apart. But remember, this is from an article that was published in the Journal of Economic Theory.
As you read this quote, remember that the motivation for the theory is that for these authors, perfect competition is the ultimate non-negotiable, more sacred even than micro-foundations. If this were a Hotelling model of location or a Krugman model of spatial location, I’d have some way to try to make sense about how “some measure of people defines a market.” But in the formal mathematical model of perfect competition that the authors are using, this sentence means nothing.
These words are untethered, undisciplined by logic or math, chosen to sound plausible enough to someone who is not paying close attention, like the set up for an applause line in a speech by a politician. This is mathiness.
There is lots more:
One unit of technology capital and z units of the composite input at a given location produce y = g(z). Consider the case of brand equity with units of technology capital indexed by m. For ease of exposition, assume for now that m is discrete and that m = 1 is the Wal-Mart brand, m = 2 is the Home Depot brand, and so on. Wal-Mart chooses the locations in which to set up stores and use its brand. It may be the case that both Wal-Mart and Home Depot have stores at the same location. (AER 2010, p. 1497.)
And if you look at the math, a company like Wal-Mart has to use one unit of technology capital for each location. Because the number of locations in the US is the US population, Wal-Mart must be using more than 300 million units of technology capital. (So more than 300 million technology sets?)
How can we reconcile the math with words that say Wal-Mart gets index m=1 and Home Depot gets m=2? And if technology capital is brand equity, why does Wal-Mart need another unit of brand equity for each US citizen/location? I haven’t a clue, but neither do the authors. One of the things that Milton Friedman got right was his observation that “confused writing is a sign of confused thinking.”
As a discussant, I put serious effort into trying to clean up the mess in the working paper that became the 2009 JET paper. I worked through the math. I talked with the authors.
The things I explained, such as how to convert any concave function like g(z) into a function with one additional variable that is homogeneous of degree one, just helped them put lipstick on this pig.
It was an embarrassment for me that the 2007 NBER version contained the acknowledgement “Discussions with Robert Lucas and Paul Romer were extremely helpful…”
One thing that has been bothering me for years now is not just that economists often combine overly simplistic modeling assumptions with overly complicated math (we all do that from time to time), but that many seem to equate these things with thinking scientifically.

On the mathematical point, go back and look at “is economics a science?” debates and see how long it takes someone on the pro side to bring up how mathematically sophisticated their work is. You will frequently hear this same argument made by astrologers and other fringe science types and it has no less validity there than it does here.

This is not to say that economics is a fringe or even a soft science. While I'm not going to get into the relative hardness of different disciplines (always a waste of time), I will say that econ is a fascinating field full of smart people doing sound and important work.

My concerns are more cultural and meta. I'll try to come back and fill in some of the details later (I've got a lot on my plate now, if you'll pardon the pun), but just to paint it in broad strokes, these attitudes not only make economics look bad, they lead to all sorts of bad things like epicyclic modeling and, in the final stages, mathiness.

Which brings us to the double-talk of Sid Caesar, the famous bits where he would seem to be speaking in various languages. The words were complete gibberish, but by mimicking the inflection and rhythm (what he called the 'song') of each language, he could create the impression of going from French to German to Russian and so on.

I realize Romer is making some bigger points here, but in at least one aspect, this reminds me of those Caesar routines. Instead of musical qualities, mathiness uses linguistic properties like vocabulary and syntax to create the impression of scientific reasoning. It is only when you pay attention that you realize nothing is being said.

Tuesday, May 19, 2015

The value(s) of a dollar

Joseph's recent post reminded me that there was an important point I should have emphasized more in the ongoing food and food security threads.

One of the main reasons social safety nets work is that, while the nominal value of money remains the same, the impact value (the difference in quality of life that a dollar makes) varies greatly from the top to the bottom of the income scale. For example, if you were in the top quartile, there is a very good chance that you would not even notice a $20 or $30 increase in your weekly food budget, while if you were in the bottom quartile, that same $20 or $30 could make a world of difference in terms of health and hunger. This is why a relatively small level of income redistribution can produce a substantial increase in aggregate quality of life.
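One loudly simplified way to put numbers on "impact value" is the textbook assumption that utility is logarithmic in income (my illustration, not a claim from the post): under that assumption, the same $25 buys roughly ten times the utility gain at $300 a week that it does at $3,000.

```python
import math

# Assumed model: utility of income grows like log(income), so an extra
# dollar matters less the more you already have.
def utility_gain(income, extra):
    """Utility gained from adding `extra` dollars to `income` dollars."""
    return math.log(income + extra) - math.log(income)

weekly_extra = 25  # an extra $25 a week for food
for weekly_income in (300, 3000):
    gain = utility_gain(weekly_income, weekly_extra)
    print(f"${weekly_income}/week: utility gain from +${weekly_extra} = {gain:.4f}")
```

The exact functional form is beside the point; any concave utility curve gives the same qualitative result, which is the arithmetic behind redistribution raising aggregate quality of life.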

This is also why the food stamp challenge and other similar initiatives can be so dangerous. The trouble with high-profile, upper-class activists trying to show how difficult it is to live on a program like food stamps is that these activists have fully internalized the upper-class perception of the impact value of money. They don't have the skill set to work with these small sums, and they don't have the world view that allows them to understand their relative value.

Though the intentions are unquestionably good, the effect of these initiatives is to suggest, incorrectly, that these social safety net programs do little good, and that mistaken belief can do a great deal of harm.

Notes on today's election

From KPCC (presented without comment):
The biggest spender in the LAUSD school board election is the California Charter Schools Association Advocates' PACs, which so far has spent $2.2 million. Their funds primarily support charter school executive Ref Rodriguez in the east Los Angeles' District 5 race and against incumbent Bennett Kayser, a charter school opponent.

"We are supportive of candidates who have got a vision of great public schools in Los Angeles, and we would like charters to be part of that solutions," said Gary Borden, executive director of CCSA Advocates.

Last year, the charter school PAC received donations from local philanthropist Eli Broad, Netflix CEO Reed Hastings and former New York City Mayor Michael Bloomberg.

The names of more recent donors aren't required to be disclosed just yet: California requires these types of PACs to publish contributors on a semi-annual basis, often falling after Election Day.


Monday, May 18, 2015

Double Talk

Believe it or not, I am going to connect this to one of our threads.

From the Wikipedia page on Sid Caesar
Max and Ida Caesar ran a restaurant, a 24-hour luncheonette. By waiting on tables, their son learned to mimic the patois, rhythm and accents of the diverse clientele, a technique he termed double-talk, which he used throughout his career. He first tried double-talk with a group of Italians, his head barely reaching above the table. They enjoyed it so much that they sent him over to a group of Poles to repeat his native-sounding patter in Polish, and so on with Russians, Hungarians, Frenchmen, Spaniards, Lithuanians, and Bulgarians.
...
Of his double-talk routines, Carl Reiner said, "His ability to doubletalk every language known to man was impeccable," and during one performance Caesar imitated four different languages but with almost no real words. Despite his apparent fluency in many languages, Caesar could actually speak only English and Yiddish. In 2008, Caesar told a USA Today reporter, "Every language has its own music ... If you listen to a language for 15 minutes, you know the rhythm and song." Having developed this mimicry skill, he could create entire monologues using gibberish in numerous languages, as he did in a skit in which he played a German general.


Another post at the food blog

On nutritional requirements and on why people who write about food so often don't bother to research those requirements.

Why are these food stamp challengers so miserable? Because they don't know about Wikipedia.