Monday, June 8, 2015

What happens when someone actually tests one of Nathan Myhrvold's culinary tips?

Myhrvold's wine tip doesn't appear to hold up.

From over at the food blog:

Privacy in and out of the classroom

This is Joseph.

From Dean Dad's comments section on a discussion of a taped conversation at Kennesaw showing an advisor acting poorly towards the student:
You will have a hard time getting and keeping good employees if you always assume that a selected video clip like that one indicates a pattern of behavior rather than a response to repeated misbehavior that even extensive training cannot always eliminate. As a man, you might not be aware of how often women on a campus are disrespected by male students when they do the same things that men do in a classroom or an office.
I am not in complete agreement with this perspective, but I do agree that it is dangerous to use a minute-long video clip (that one party knew was happening and the other did not) as a proxy for the entire interaction.

This sort of "selective sampling" is likely to become a bigger and bigger problem as the ability to record in public becomes more and more trivial. What do you do when it's not just phones that can record but eyeglasses as well? The ability to edit out previous interactions and to control the timing of the recording can be a powerful tool to drive the conversation.

Just one more point to ponder.

Saturday, June 6, 2015

Revisiting the SNAP challenge

We've already established that Trader Joe's is not where you want to shop if you have to live on a food stamp budget, but is it even possible to stave off hunger for a week with $28 worth of Trader Joe's groceries?

Yes, but just barely.

Friday, June 5, 2015

The internet has made historical revisionism so much easier

[UPDATE: Brad DeLong found an arguably more embarrassing example from the National Journal.]


This may be the best example of New York Times political reporting you will see all day.


It started as a standard narrative journalism/puff piece. Amy Chozick and Trip Gabriel used a handful of anecdotes and a couple of well-received speeches to build a breathless account of political underdog Carly Fiorina surging toward the lead.

Hack political writers love this narrative. They also gravitate toward positive stories about candidates with whom they are comfortable. When I say "comfortable," I am talking about culture, not politics. I will try to back this up in future posts, but I have long argued that left/right biases are far less common than more significant biases involving class, race, religion, region, education, etc. While the New York Times probably disagrees with most of Fiorina's politics, they are more than comfortable with almost everything else about her, from her prominent family to her CEO background to her wealth and extravagant lifestyle.

So far, all of this is just another day at the office for the New York Times election beat. Soon after the piece ran, however, people started to notice that the writers had really buried their lede. Deep in the story, it was revealed that Fiorina's surge was not quite as substantial as the headline suggested.

From paragraph 8 (as pointed out by Duncan Black):
While supporters in Iowa noted that she had doubled her standing in state polls, it was a statistically insignificant change from 1 percent to 2 percent, according to a Quinnipiac University Poll released May 6. (That may seem piddling, but the same poll had Mr. Santorum, who won the Iowa caucuses in 2012, also at 2 percent, while 5 percent supported Mr. Bush.)
It is one thing to have a paragraph in the middle of your story that completely undercuts your premise; it is quite another to have people point out a paragraph in the middle of your story that completely undercuts your premise. A quick rewrite was definitely in order.
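A quick back-of-the-envelope check shows why a one-point move is statistically insignificant. The sample size of 1,000 here is an assumption (the article doesn't report Quinnipiac's n), but it's typical for a statewide poll:

```python
import math

def moe(p, n, z=1.96):
    """95% margin of error for a polled proportion p with sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

n = 1000  # assumed sample size; the article doesn't give Quinnipiac's n
for p in (0.01, 0.02):
    lo, hi = p - moe(p, n), p + moe(p, n)
    print(f"{p:.0%}: 95% CI roughly {lo:.1%} to {hi:.1%}")
# the two intervals overlap heavily, so a 1% -> 2% move is well within noise
```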


The resulting headline doesn't make a lot of sense -- if the polls are a reflection of the state's voters, Iowans appear to be swoon-shrugging over Fiorina -- but it does partially inoculate the story from further mockery.

Of course, the NYT has standards. They don't just rewrite a published story without even acknowledging it. The original headline is right there at the bottom of the page.


In small print and pale gray letters.


Thursday, June 4, 2015

Yes, I am taking the pro side here on autonomous vehicles

In a comment on my recent relatively positive post on autonomous vehicles, Joseph points us to a Megan McArdle article that takes a different view.

While it is good to see that the conventional wisdom is starting to acknowledge some of the limitations of driverless cars, I still have quite a few problems with the piece. McArdle makes some good points about the labor implications, but she does not seem to have a strong grasp of the technological or the implementation issues involved with using autonomous vehicles for long-haul trucking. We can get back to implementation later; for now let's talk about tech.

Here's McArdle:
You hear a lot about how Google cars have driven an amazing number of miles without accidents. You hear less, however, about how they have achieved this feat: by 3-D mapping every inch of those roads so that the car has a database of every stationary object, from traffic lights to guardrails. That allows the car to devote its processing power to analyzing the movement of objects that aren't in its database.

Such mapping is incredibly labor intensive, which is why, according to Lee Gomes, those amazing mile counts that Google's driverless cars are racking up "are the same few thousand mapped miles, driven over and over again." Most of them are near Google's headquarters in Mountain View, a place that gets only 15 inches of rain a year and never has snow or ice -- three common weather hazards that long-haul truckers must frequently contend with.

Just getting Google's technology to a point where we could have self-driving trucks would require mapping every inch of the nation's more than 164,000 miles worth of highways. But then what do you do with the truck? You're probably going to have to map some of the roads that connect to those highways too. And then constantly remap them, because things change all the time. You'll also have to teach the computer system what to do in a blinding snowstorm on Wolf Creek Pass. As we wrote in January, "The technology giant doesn’t intend to offer a self-driving car to areas where it snows in the near term."
McArdle makes a couple of common mistakes: assuming that, because Google dominates the coverage of driverless cars, it also dominates the research (which we'll get to later); and assuming that what is difficult for humans is difficult for robots and vice versa.

Rain and snow are problematic for us humans both because they limit visibility and because they create very complex physics problems that have to be solved in a fraction of a second. Bad-weather visibility is much less of an issue for autonomous vehicles* than for human drivers, and classical physics problems are the sort of thing that roboticists are very good at.

Along similar lines, McArdle observes [emphasis added]: "But it seems like getting from there to fully automated trucks--necessarily huge, heavy, and capable of horrific damage, with handling capabilities that change depending on the load, and a stopping distance almost twice that of a car at high speeds--will probably take a while." Yes, this will take a while, but not for the reasons McArdle imagines. Load effects and long stopping distances do make truck driving much more difficult for humans, but for computers they simply represent another set of parameters. Furthermore, the biggest factor in real-life stopping distance is often reaction time, an area where computers have a distinct advantage.
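A toy calculation shows just how much reaction time matters. The numbers are illustrative assumptions (roughly 65 mph travel speed, a 1.5-second human reaction time versus 0.2 seconds of machine latency, and 3 m/s² of braking for a loaded truck), not figures from McArdle's piece:

```python
def stopping_distance(v, reaction, decel):
    """Total stopping distance (m): reaction distance plus braking
    distance under constant deceleration, for speed v in m/s."""
    return v * reaction + v**2 / (2 * decel)

v = 29.0           # ~65 mph, in m/s
truck_decel = 3.0  # assumed braking for a loaded truck, m/s^2
for label, t in (("human, 1.5 s reaction", 1.5),
                 ("computer, 0.2 s reaction", 0.2)):
    print(f"{label}: {stopping_distance(v, t, truck_decel):.0f} m")
# the braking distance is identical; the ~38 m gap is pure reaction time
```

Under these assumptions, shaving reaction time from 1.5 seconds to 0.2 seconds cuts the total stopping distance by about a fifth before the brakes improve at all.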

Nor does the fair-weather testing complaint hold up. It is true that Google has largely limited its testing to clement conditions, but you certainly can't say the same for the upcoming Volvo test in, you know, Sweden.


Google's PR department has done a masterful job identifying the company with autonomous vehicles. This is not simply a matter of corporate ego. As I said earlier:
Google has a lot of reasons to want to be seen as a diversified, broadly innovative technology company, rather than as a very good one-trick pony cashing in on a monopoly (possibly two monopolies depending on how you want to count YouTube). A shiny reputation helps to keep stock prices high and regulators at bay.
It is enormously telling that McArdle cites Google ten times in her article while she doesn't mention Daimler by name and she never refers to Volvo at all.

* As far as I can tell, Daimler's prototype is doing its mapping independently in real time. That's impressive, but I'm sure the production models will share data and will also rely on existing maps.

Wednesday, June 3, 2015

CLEPs and MOOCs

Dean Dad makes a tremendously important point about MOOCs:

As a commenter correctly noted, there’s nothing stopping someone now from taking a MOOC in a “gen ed” area and then taking a CLEP exam to get credit.  CLEP fees are often lower than even community college tuition.  The ASU model is a more expensive and clunkier version of CLEP.  The MOOC-to-CLEP option has existed for a couple of years now, but students haven’t taken advantage in significant numbers.  
In case you're not familiar:
The College Level Examination Program (CLEP) is a group of standardized tests created and administered by College Board. These tests assess college-level knowledge in thirty-six subject areas and provide a mechanism for earning college credits without taking college courses. They are administered at more than 1,700 sites (colleges, universities, and military installations) across the United States. There are about 2,900 colleges which grant CLEP credit. Each institution awards credit to students who meet the college's minimum qualifying score for that exam, which is typically 50 to 60 out of a possible 80, but varies by site and exam. These tests are useful for individuals who have obtained knowledge outside the classroom, such as through independent study, homeschooling, job experience, or cultural interaction; and for students schooled outside the United States. They provide an opportunity to demonstrate proficiency in specific subject areas and bypass undergraduate coursework. Many take CLEP exams because of their convenience and low cost (typically $15) compared to a semester of coursework for comparable credit.

I plan to spend a lot of time this summer writing about making better use of CLEPs and improving MOOCs. For now, though, I want to make a couple of big points.

The Internet has a way of producing deceptively large numbers. This has certainly been true of MOOCs. Articles have breathlessly reported huge enrollments despite the fact that, for online classes, enrollment is an almost meaningless statistic. When we have tried to apply meaningful metrics to online classes, they have tended to do very poorly. CLEP usage would appear to be another example.

CLEP exams are a well-established, easy, and cheap way for students to get college credit for taking online courses, but very few seem to be taking advantage of it. That's a bad sign, but it does suggest a way forward.

Now, just to be clear, I am not saying that CLEP tests are a perfect solution for this problem -- I am certain we could come up with a better system, particularly once we have some experience to build on -- but for the time being these exams are probably our best option, and the fact that we're not seriously exploring them indicates a deeper lack of seriousness about MOOCs.

Tired blood and Nazi superman vitamin pills

One of the reasons I enjoy going through old pop culture ephemera is the perspective it can give on the way popular thinking has evolved or, in some cases, stayed the same. Check out the topics a popular comic book superhero chose when making conversation in 1945.



Having spent a lot of time recently going through food-related ads and instructional films, I've noticed a strong mid-Twentieth Century fascination with the scientific basis of nutrition. Much of this expressed itself as pseudo-science, but it was driven by a string of real breakthroughs. Remember, the very concept of a vitamin was largely a Twentieth Century discovery.

From Wikipedia:

The discovery dates of the vitamins and their sources

Year | Vitamin | Food source
1910 | Vitamin B1 (Thiamine) | Rice bran
1913 | Vitamin A (Retinol) | Cod liver oil
1920 | Vitamin C (Ascorbic acid) | Citrus, most fresh foods
1920 | Vitamin D (Calciferol) | Cod liver oil
1920 | Vitamin B2 (Riboflavin) | Meat, dairy products, eggs
1922 | Vitamin E (Tocopherol) | Wheat germ oil, unrefined vegetable oils
1926 | Vitamin B12 (Cobalamins) | Liver, eggs, animal products
1929 | Vitamin K1 (Phylloquinone) | Leaf vegetables
1931 | Vitamin B5 (Pantothenic acid) | Meat, whole grains, in many foods
1931 | Vitamin B7 (Biotin) | Meat, dairy products, eggs
1934 | Vitamin B6 (Pyridoxine) | Meat, dairy products
1936 | Vitamin B3 (Niacin) | Meat, grains
1941 | Vitamin B9 (Folic acid) | Leaf vegetables
It is easy to see how all of this got people thinking about vitamins as an almost mystical cure-all. Even instructional US govt. films included lines like "The Nazis are supposed to have a superman vitamin pill." Of course, the mystique of the vitamin combined with the marketing power of television presented huge profit potential.




After vitamins, food advertisements loved to talk about 'energy.' They were always vague about exactly what they meant by the word but they wanted you to know their products were packed with it.



Companies even promoted dextrose as healthy.



It's easy to mock, but you have to remember how fast nutritional science was advancing. We've had decades to process this information and I'm not sure we're doing that much better.

Tuesday, June 2, 2015

Dining tip – when eating out with Maureen Dowd, don't share an appetizer

Given her attitude toward people in the service industry, I would be nervous about what goes on in the kitchen.

Brad DeLong handles today's garbage collection.

Why Don't New York Times Writers Possess Any Awareness of Their Presentation-of-Self?

Before we talk about 21st Century technology...

...we need to spend some time thinking about the 20th Century technology that got us here.
A controlled-access highway provides an unhindered flow of traffic, with no traffic signals, intersections or property access. They are free of any at-grade crossings with other roads, railways, or pedestrian paths, which are instead carried by overpasses and underpasses across the highway. Entrances and exits to the highway are provided at interchanges by slip roads (ramps), which allow for speed changes between the highway and arterial roads and collector roads. On the controlled-access highway, opposing directions of travel are generally separated by a median strip or central reservation containing a traffic barrier or grass. Elimination of the sources of potential conflicts with other directions of travelers dramatically improves safety, fuel consumption, and travel times.
It turns out that many of the innovations mid-20th Century engineers came up with to maximize throughput (controlled access, divided lanes, etc.) also tend to make the jobs of today's engineers much easier when it comes to autonomous vehicles. This relationship between technologies will be a big factor when we get back to this.

Monday, June 1, 2015

Jack Shafer, poster child [repost]

You might have heard about Bruce Bartlett's recent paper on the effect that Fox News has had on the Republican Party. I don't entirely agree but it makes some interesting points about the way information flow affects politics.

The paper raises serious questions not just about Fox but about the entire journalistic ecosystem. For a long time and for a variety of reasons, major mainstream outlets like the New York Times and Politico have been reluctant to call Fox out on its most egregious lapses (these days, the news media is pretty much one long row of glass houses).

It's not surprising that Politico media apologist Jack Shafer took umbrage. If you really want to read Shafer's comments, you can find a link in this piece by Eric Boehlert, but, before you do, I would recommend that you take a few minutes to review Shafer's rather... flexible standards for journalistic quality and ethics.



More on journalistic tribalism
from WEDNESDAY, APRIL 3, 2013

Having brought up the charge in a previous post, I should probably take a minute to spell out exactly what I'm talking about. I'm using a very broad reading of the term 'tribalism' (perhaps so broad I should say something like 'tribalism and other social psych phenomena'). The traits I'm thinking of include:

1. Us/them mentality;

2. Excessive reliance on in-group social norms;

3. Deferring to and preserving hierarchies;

and as a consequence

4. A tendency to use different standards to judge interactions based on the relative positions of the parties.

There is inevitably going to be a degree of subjectivity when deciding who goes where in the hierarchy, but I think it's fairly safe to say that Maureen Dowd and (till his death) Michael Kelly were in the innermost circle with writers like David Brooks and most prominent, established Washington and, to a lesser degree, New York journalists fairly close.

In this tribal model, it makes perfect sense that Politico would view Chris Hughes' (outsider) request for a small change in the copy of Timothy Noah (insider) as a major affront. It also explains Politico's attacks on Nate Silver (outsider) when his work started making established pundits (insiders) look bad.

The press corps's treatment of Al Gore in 2000 is another case in point. Following the lead of Dowd and Kelly and reinforced by a general dislike of the candidate, the group quickly established social norms that justified violating the most basic standards of accuracy and fairness.

The poster child for this kind of journalistic tribalism is Jack Shafer, or at least he was a few years ago when I was first experimenting with blogging. One of my main topics was the press's inability to face up to its problems and Shafer was the gift that kept on giving (I haven't read him much since). That blog is gone now but I still have my notes so here are some highlights.

Shafer was openly disdainful of readers and generally dismissive of their interests, which is an extraordinary starting point for a journalism critic. Consider this passage from the aptly named "Why I Don't Trust Readers":
I'm all for higher standards, but I draw the line when journalists start getting more complaints about less serious professional lapses. Serious: Plagiarism, willful distortion, pattern of significant errors, bribe-taking. Not serious: campaign donations in the low three-figures for reporters distant from that beat; appearance of conflict of interest; a point of view; friendships with the rich and powerful.
First, notice the first item on the list. Plagiarism is certainly a serious offense, but the other serious offenses are the sort of things that can destroy people's lives, conceal crimes and enable corruption. Even more interesting is what didn't make the list: unintentional distortion due to laziness or bias; patterns of minor errors; isolated cases of serious errors due to negligence; selective reporting (as long as it doesn't rise to the level of distortion); failure to dig into important aspects of a story; cozy relationships with subjects as long as it doesn't involve the quid pro quo of a bribe.

What's important here is the victimology. In plagiarism, the primary victim is a fellow journalist. In all of these other cases, the primary victim is either the subject or the reader. Shafer was a tribalist, and his main objective was almost always the defense of his tribe and its hierarchy.

There's a remarkable inverse correlation between the rank of Shafer's subjects and the harshness with which he treats them.  This is particularly apparent when different subjects of the same article have different positions. Shafer provided an excellent example when he wrote a post complaining about liberals writing books that actually called conservatives liars in the titles.

The books were Al Franken's Lies and the Lying Liars Who Tell Them, Joe Conason's Big Lies, and David Corn's The Lies of George W. Bush. Of the three, Conason was something of a pariah (Shafer dismissed him as a Clinton apologist) and Franken was clearly a journalistic outsider. Corn, on the other hand, was very much an insider in the Washington press corps (Shafer even described him as a friend in the post).

Under these circumstances, it's not surprising that Shafer finds a way to shield Corn from much of the blast.
This criticism applies more to Franken and Conason than it does Corn—you can't expect a book about Bush's lies to also be about Clinton's lies. And Corn acknowledges in his intro that Bush isn't the first White House liar and that Clinton lied, too. 
Of course, you could easily make a similar but more persuasive argument on Franken's behalf. Lies was largely focused on the relationship between the GOP and conservative media, and since the book was published in 2003, when there was no Air America and MSNBC was just starting to experiment with liberal programming, there was no way to provide similar examples on the left. Just to be clear, I'm not making that argument; I'm only saying that it's just as viable as the one Shafer makes for Corn.

For an even more dramatic bit of paired data, consider two obituaries Shafer wrote, separated by only a few months. The first was for Walter Annenberg, best known as a philanthropist and founder of TV Guide. The second was for Michael Kelly, journalist and former editor of the New Republic. Once again there's a clear hierarchical distance between the subjects: Annenberg, though decades earlier a power in publishing and to his death a major force in philanthropy, was not a journalistic insider; Kelly, on the other hand, was about as inside as you can get.

As you've probably guessed by now, Shafer's approach to these two obituaries differs sharply. Though they don't fully capture the difference, the epitaphs give a good indication of the respective tones:

Michael Kelly: "Husband. Father. Journalist"

Walter Annenberg: "Billionaire Son of Mobster, Enemy of Journalism, and Nixon Toady Exits for Hell—Forced To Leave Picassos and van Goghs at Metropolitan Museum."

The contrast is sharpest when Shafer addresses journalistic scandals and cozy relationships with controversial right wing politicians, areas where there are definite parallels between the two men. Shafer actually explains away the New Republic/Glass scandal as an instance of Kelly being too loyal for his own good.

Shafer often judges figures on the periphery of the journalistic establishment by a much higher standard than "Plagiarism, willful distortion, pattern of significant errors, bribe-taking." For someone like Larry King, a few disputable errors and minor discrepancies (such as changing the date of an incident from 1972 to 1971 when retelling an anecdote) merit an entire column. (It's worth noting that this column ran in the middle of 2009, a period when the coverage of politics, the economy, and the European crisis was raising all sorts of journalistic questions, questions that didn't get a lot of space in Shafer's column. This raises the issue of trivialism in media criticism -- see On the Media for a myriad of examples -- but that's a topic for another thread.)

If marginal figures committing minor offenses are treated harshly by Shafer, what happens when someone at the top of the hierarchy does something that Shafer normally considers a serious offense like plagiarism? We got an answer to that one when Maureen Dowd was caught lifting a passage from Josh Marshall.

Here's her explanation in Bloggasm:

“i was talking to a friend of mine Friday about what I was writing who suggested I make this point, expressing it in a cogent — and I assumed spontaneous — way and I wanted to weave the idea into my column. but, clearly, my friend must have read josh marshall without mentioning that to me. we’re fixing it on the web, to give josh credit, and will include a note, as well as a formal correction tomorrow.”
And here Shafer explains why it's not so bad:
1. She responded promptly to the charge of plagiarism when confronted by the Huffington Post and Politico. (Many plagiarists go into hiding or deny getting material from other sources.)

2. She and her paper quickly amended her column and published a correction (although the correction is a little soft for my taste).

3. Her explanation of how the plagiarism happened seems plausible—if a tad incomplete.

4. She's not yet used the explanation as an excuse, nor has she said it's "time to move on."

5. She's not yet protested that her lifting wasn't plagiarism.

6. She's taking her lumps and not whining about it.
And here was my response at the time:
1. 'Responded.' Not to be confused with 'confessed,' 'owned up,' 'took responsibility,' or any phrase that uses a form of the word 'plagiarism.'
2. "[A] little soft"?
3. Yeah, near verbatim quotes make it through convoluted processes all the time.
4. "[M]y friend must have read josh marshall without mentioning that to me." -- What exactly would an excuse look like?
5. No, she just implied it wasn't plagiarism. That definitely gives her the moral high ground.
6. What a trooper.
(I apologize for the tone. I was in a snarky phase, but I'm trying to play nicer these days.)

I've spent a lot of time on Shafer because he's a good example, I was familiar with his work, and, as a media critic, he has an important role in journalism's self-correction process, but he is not an isolated case, nor is he the worst of the bunch (particularly not since the rise of Politico).

The point of all this is that journalism has a problem with tribalism and other social dynamics. These things are affecting objectivity, credibility, and quality. What's worse, journalists seem to have internalized the underlying mindset to such a degree that most of them don't even realize what's going on.

When the data runs contrary to what people claim

This is Joseph.

We all know that correlation is not causation.  But one theme that Mark likes to bring up is that when the data on an association show the reverse of the claimed causal mechanism, there is a high burden of explanation.  One very good example was brought up by Noah Smith.  There is often a causal assumption that big government impedes economic growth.  The problem?  Rich countries appear to have big governments:
Are we supposed to believe that rich countries are rich in spite of the fact that they all have big governments? Should we believe that government is a parasite that always, without fail, finds a host in the body politic of every single country that reaches first-world status?

Or should we conclude that big government is a necessary ingredient for countries to get rich?
The plausible mechanisms for this vary -- including the non-trivial point that losing wars is bad for a country (just ask the Huron Indians, if you can find any).  But it is no small point that the empirical relationship is the opposite of what proponents of small government propose (nobody says "let's shrink the government to reduce the level of prosperity in our country and give other countries a chance to experience being rich instead").

What also bothers me about this discussion is that the question of the size of government is mostly orthogonal to the question of the efficiency of government.  No matter what size government happens to be, I want it to use resources efficiently.  I am more worried by the political burden placed on infrastructure improvements by things like environmental reviews of bike lanes.  That is an inefficient process that costs extra resources, and it doesn't matter whether we spend a lot of money on infrastructure or just a little -- it is simply a bad way to do things.  I have the same opinion of universal health care (say, the Canadian or the British approach).  It is not that I hate markets but that there are compelling theoretical efficiency advantages to having the state handle this form of risk pooling.  Heck, there is evidence that workers still covered by their parents' insurance (thank you, ACA) are able to be more flexible employees.  That is an efficiency gain for employers like Uber, and potentially an important one as we transition into a more flexible workforce.

As for what the real issues are behind the push for small government, well, that is complex and we'll need to wait for a follow-up post. 

Saturday, May 30, 2015

It is a legitimately good sign when a transportation story uses the term “maximum throughput”

Tom Vanderbilt writing for the Washington Post:
But as good as fast-moving roads might be for the individual driver, they are not the best for the most drivers. As data gleaned from in-pavement “loop detectors” on Washington state highways showed, those highways were able to achieve “maximum throughput” — pushing the most cars through one segment of road in a given time — at speeds that were roughly 80 percent of the posted speed limit of 60 mph. Why? At higher speeds, drivers need to allow more “headway” between vehicles, meaning more space is required per vehicle. And faster-moving traffic tends to break down more quickly, with more severe “shock waves”; it takes a lot longer to recover from a traffic jam than to get into one. I have been told, anecdotally, by traffic engineers that the left-hand “passing lane” can become congested first. (I’ll leave it to you to decide if karmic justice is at work there.)

In a “speed harmonization” experiment on Colorado’s I-70, the state highway patrol was able to improve traffic flow by enforcing, via phalanxes of patrol cars, 55 mph speeds — thus preventing the instability caused by people driving fast into packs of congested, slower-moving vehicles.
This goes along with an idea I had about having a fleet of driverless cars using real time data to selectively slow traffic in some or all lanes for the purpose of reducing the accordion effect. If nothing else it would be an interesting project to model.
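The idea that throughput peaks below the maximum speed falls out of even the simplest traffic models. Here's a sketch using the classic Greenshields model with illustrative parameters (free-flow speed 30 m/s, jam density 0.12 vehicles per meter; not the Washington loop-detector data). In this toy model flow peaks at half the free-flow speed; the article's real-world figure is closer to 80 percent, but the qualitative point -- maximum flow occurs below maximum speed -- is the same:

```python
def greenshields_flow(k, vf=30.0, kj=0.12):
    """Flow q = k * v (veh/s per lane), with speed falling linearly
    in density: v = vf * (1 - k / kj)."""
    return k * vf * (1 - k / kj)

# scan densities (veh/m) for the flow-maximizing point
q_max, k_star = max((greenshields_flow(k / 1000), k / 1000)
                    for k in range(1, 120))
v_star = 30.0 * (1 - k_star / 0.12)
print(f"max flow {q_max * 3600:.0f} veh/h/lane at {v_star:.0f} m/s, "
      f"well below the 30 m/s free-flow speed")
```

At very high speeds each vehicle claims so much headway that the road carries fewer cars per hour, which is exactly the trade-off the loop detectors picked up.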

Thanks to Mike for the link.

Friday, May 29, 2015

A statistician walks into a grocery store... repeatedly




Things have gotten busy over at the food blog. The plan was always to go hot and heavy for a month or two. With almost one hundred posts either up or in the queue, I think we're getting there.

The content is a mix of new pieces (like the recent one on chickenless chicken and black bean soup), reposts, links to recent news of interest...





And lots of historical pieces. Some, like this WWII film (highly recommended for anyone interested in the history of nutrition), focus on how we think about food.



Others (Frankenberry, Count Chocula and the late Forry Ackerman) focus on how we have marketed it.



Check it out if you get a chance and please share anything you think is deserving. We've had very little progress on the SEO front.




Adventures in Ithuvania

There's a wonderful Far Side cartoon that shows two scientists addressing a man sitting behind a desk in a sumptuous office. The lead scientist says:

"Sorry, your highness, but you're really not the dictator of Ithuvania, a small European republic. In fact, there is no Ithuvania. The hordes of admirers, the military parades, this office -- we faked it all as an experiment in human psychology. In fact, your highness, your real name is Edward Belcher, you're from Long Island, New York, and it's time to go home, Eddie."

Sometimes, when I come across yet another bit of jaw-dropping flakiness from some tech-bubble billionaire, my thoughts turn to Ithuvania. What if this were an experiment? What if some well-funded research organization decided to see what would happen if it randomly selected individuals of average intelligence, handed them huge checks and told them they were super-geniuses?

I'm not saying that's what happened; I'm just saying the results would have been awfully damned similar.

From Wired:

THE SEASTEADING INSTITUTE was the toast of tech entrepreneurs when it received financial backing from venture capitalist Peter Thiel in 2008. Its mission was to build a manmade island nation where inventors could work free of heavy-handed government interference. One early rendering shows an island raised on concrete stilts in eerily calm waters. The buildings atop the platform resemble nothing so much as the swanky tech campus of an entrepreneur’s ultimate dream: No sign of land or civilization in sight. The island, despite appearing strapped for square footage, has room for a full-size swimming pool with deck lounges.

In a 2009 essay, Thiel described these island paradises as a potential “escape from politics in all its forms.” It wasn’t just desirable, he said. It seemed possible. “We may have reached the stage at which it is economically feasible, or where it will soon be feasible,” he wrote.

More than a half-decade later, the dream has yet to be realized. And optimism is starting to waver. Earlier this year, during a talk at George Mason University, Thiel said, “I’m not exactly sure that I’m going to succeed in building a libertarian utopia any time soon.” Part of the problem: A truly self-sufficient society might exceed the range even of Thiel’s fortune. “You need to have a version where you could get started with a budget of less than $50 billion,” he said.

For its part, The Seasteading Institute has also come to appreciate that the middle of the ocean is less inviting than early renderings suggest. It now hopes to find shelter in calmer, government-regulated waters. According to its most recent vision statement, “The high cost of open ocean engineering serves as a large barrier to entry and hinders entrepreneurship in international waters. This has led us to look for cost-reducing solutions within the territorial waters of a host nation.”

Thiel’s reassessment marks a clear departure from tech culture’s unflinching confidence in its ability to self-govern. In recent years a number of prominent entrepreneurs have urged Silicon Valley to create a less inhibited place for its work. Larry Page called on technologists to “set aside a small part of the world” to test new ideas. Elon Musk has aimed at colonizing Mars. And venture capitalist Tim Draper made a proposal to divide Silicon Valley into its own state. But aside from the continued growth of full-service tech campuses such as Google’s and Facebook’s, very little has been accomplished in the way of true societal independence.

Building a government, it turns out, is a more complex challenge than much of Silicon Valley would have you believe. Now, Thiel and other high-profile Silicon Valley investors are carefully taking stock of the anti-government view they helped popularize. For all Thiel’s open criticism of elected officials, he sounded remarkably like a politician recanting false promises on the stage at George Mason. Toward the end of the talk, he reflected for a moment on his early essay on seasteading. “Writing is always such a dangerous thing,” he said. “It was late at night. I quickly typed it off.”


Thursday, May 28, 2015

The actual future of the driverless vehicle

I've been fairly skeptical of the likelihood of seeing driverless cars in the near future, partially because the economic case was weak, but primarily because the data infrastructure was prohibitively expensive, at least if you wanted a car that could go anywhere (and that flexibility is one of the main reasons for having a car).

Driverless trucks are another story. There you have vehicles with significant labor costs that mostly travel a relatively small number of routes.  Though Daimler is obviously downplaying the possibility, I would not be at all surprised to hear that fully autonomous trucks will be operating in at least a few countries in the next decade.

From NPR:
Automaker Daimler unveiled a truck last week that drives itself, called the Freightliner Inspiration. But the truck is not yet entirely autonomous.

"You still have the driver in the driver's seat, ready to take over at all times in case something goes wrong or the system encounters a situation it's not really well prepared for," says Alex Davies, associate editor for Wired, who was at the demonstration and rode in the big rig.

The driver controls the rig on surface roads, but on the highway, engages the autopilot mode. Cameras detect the lane lines to keep the truck headed in the right direction, Davies tells NPR's Rachel Martin.

"Then from there on, the truck will stay in its lane, maintain its speed and a safe distance from other vehicles," he says. "But you still need to be in the seat, ready to take over."

And being ready to take over means the driver can't exactly take a nap.

When it's time for the driver to take over — at the correct exit or if bad weather hits — the truck alerts the driver with a beep. If the driver doesn't respond, the truck slows and eventually comes to a complete stop, Davies says.

Daimler says the Inspiration, the first self-driving truck licensed to roll on public roads — highways and interstates in Nevada — is the future of trucking and may hit the market before autonomous cars, according to the Associated Press. Drivers will still be human, but might be called "logistics managers."

"The human brain is still the best computer money can buy," said Daimler Trucks North America LLC CEO Martin Daum.

Davies says no automaker will ever use the term "driverless" for a vehicle, preferring the safer-sounding "autonomous" or, in the case of the Freightliner Inspiration, "piloted."