Friday, August 18, 2023

Thursday Tweets -- There is no K in 'Kenya' [deferred because a certain company sucks]

Guess what. If you start a post in HTML view while embedding your tweets, then switch to compose view to edit, rearrange, and add comments, then switch back to HTML to add one more tweet, be careful not to hit control-z, because it will undo everything you did in compose view and you can't control-y it back.

Let's start with the best show on Twitter... its owner trying to get out of fight club with some portion of his dignity intact.

I suspect that Elon was hoping that Zuckerberg would play along and let him off the hook, but Zuck had apparently had enough.

So Musk is in "somebody hold me back" mode.  


In a quote tweet of this Walter Isaacson post, New York Times Pitchbot commented, "One of America’s most respected journalists."

Blogger does strange things, so just in case it decides to crop the tweet's image, here's the original.


Elsewhere in the world of the site formerly known as Twitter.

Seguing to another member of the PayPal mafia,


And bigger news.

Jeff Gerth is the same credulous, ignorant, ill-informed hack who brought us Whitewater. The NYT created him. It is poetic justice for them to now know how he treats his subjects.

"Shock and outrage over the fall of Roe v. Wade has faded as confusion has spread, deflating Democrats’ hopes that the issue could carry them to victory"

"Oops, I made a mistake" doesn't entirely set things right for helping bring the country to the verge of fascism.

Because when you're defending your decision to spend big money addressing climate change, the last thing you want people talking about is climate change.

I'm a little more bullish on RD than Frum, but it is amazing how the consensus has shifted.


Normally, it's not the defendant who has the option of 'moving on.'

Basically, Dean Phillips just wants attention.


We've said before that Loeb has been feeding his considerable reputation into the woodchipper. Now he's found the best network for it.

The Internet Archive does good, important work, particularly as preservationists. I give them money. You should too.


Adventures in AI

Notes from academia.

And in closing...

2 comments:

  1. Hi, Mark:

    I'm always happy to talk about LLMs. So much so that I'm tempted to go full circle and jump out of Bayesian stats and back into natural language processing.

    I'm a bit perplexed by everyone's skepticism. I understand that an LLM like ChatGPT is neither embodied nor learning language in the same way as a human.

    The first talk I saw Geoff Hinton give (early '90s, CMU psych dept) was about how neural networks captured the "U-shaped learning curve" where children learn to mimic, then overgeneralize, then learn exceptions. I don't think anyone's done this kind of analysis for transformers. Human children learn language with exposure to far fewer tokens of language than GPT. And they do it interactively in the real world. By the time children have acquired language, they typically control tens of thousands of words from one or two languages. By the time ChatGPT is trained, it can employ hundreds of thousands of words in hundreds of languages. It's also superhuman in that it overcomes our pesky attention limits, as seen in constructions like center embeddings. In sequences like "the mouse ran", "the mouse the cat chased ran", "the mouse the cat the dog bit chased ran", humans get overwhelmed about here, but ChatGPT merrily deconstructs the right relationships five or six deep.
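
    To make the center-embedding example concrete, here's a toy generator for those sentences (my own illustration, nothing more). Each noun/verb pair nests inside the previous clause, so the reader has to hold a whole stack of subjects before resolving a single verb:

        # Toy illustration: build center-embedded sentences.
        # pairs runs from the outermost clause inward: (subject noun, its verb).
        def center_embed(pairs):
            nouns = " ".join(f"the {noun}" for noun, _ in pairs)
            verbs = " ".join(verb for _, verb in reversed(pairs))
            return f"{nouns} {verbs}"

        pairs = [("mouse", "ran"), ("cat", "chased"), ("dog", "bit")]
        for depth in range(1, 4):
            print(center_embed(pairs[:depth]))
        # the mouse ran
        # the mouse the cat chased ran
        # the mouse the cat the dog bit chased ran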


    Does ChatGPT understand language? Certainly not in the same way as a person. In-context learning only goes so far and it's basically stuck at whatever it was pretrained on (the "P" in "GPT"). A simpler question is whether AlphaGo understands Go. Again, not in the same way as a person. But AlphaGo is interesting in that it came up with novel strategies from which humans learned. GPT can also generate novel moves in language in the sense that it's generating new sentences. Is it just monkeys at a typewriter? If so, they're damn lucky monkeys who act an awful lot like someone who understood the chat dialogue.

  2. (Too long to fit in one comment!) To answer the question more precisely, we have to be more precise about the semantics of the word "understand". How are you going to define it as something other than "what people do with their meat-based brains" in a way that excludes something like ChatGPT? That's tricky, which is why I was urging people to check out Steven Piantadosi's talk at the LLM workshop at Berkeley titled "Meaning in the age of large language models". He starts us down the postmodern road of understanding language and concepts as a network of associations. The reason this is relevant is that philosophers (and to some extent cognitive scientists) spent the 20th century trying to unpack what "embodied" and "understand" and "perception" mean.

    Another way to think about this is in terms of interpolation and extrapolation. I hear a lot of "ML can only interpolate" talk, but I'm not sure what that means in high dimensions, where none of the outputs are going to land in the convex hull defined by the training data. I hear a lot of "it's a stochastic parrot", but the parrots I've seen all have a limited repertoire of fixed phrases. ChatGPT almost never says the same thing twice (unless it's trying to dodge answering, in which case it's annoyingly repetitive).
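
    The convex hull point is easy to check numerically. Here's a little experiment of my own (illustration only; the helper name, Gaussian data, and sample sizes are all arbitrary): a point lies in the hull of the training set exactly when some convex combination of training points reproduces it, which is a small feasibility LP.

        import numpy as np
        from scipy.optimize import linprog

        def in_convex_hull(point, points):
            # Illustrative helper: point is in conv(points) iff there exist
            # weights lambda >= 0 with sum(lambda) = 1 and points.T @ lambda = point.
            n = len(points)
            A_eq = np.vstack([points.T, np.ones(n)])
            b_eq = np.append(point, 1.0)
            res = linprog(np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)
            return res.success

        rng = np.random.default_rng(0)
        for dim in (2, 10, 50):
            train = rng.standard_normal((500, dim))  # "training set"
            tests = rng.standard_normal((100, dim))  # fresh draws, same distribution
            inside = sum(in_convex_hull(t, train) for t in tests)
            print(f"dim={dim}: {inside}/100 test points inside the hull")

    Even though the test points come from the same distribution as the training data, the inside count collapses toward zero as the dimension grows, which is why "it can only interpolate" stops meaning much.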

    Whether or not these LLMs understand according to some fine-grained definition, I'm finding them incredibly useful. I'm mainly using GPT-4 to (a) generate code for munging data and graphics in Python, (b) generate suggestions, random tables, and magic item tables for role-playing games, (c) rewrite things I'm having trouble phrasing clearly, and (d) tutor me in math and science subjects.

    For (a), it's better at these tasks than I am; I can never remember how to draw a thick vertical line of a given color at a given x-axis position. It can even take descriptions of outputs and generate TikZ graphics instructions in LaTeX for me, or convert markdown tables to LaTeX tables and vice versa. For (b), I'd say it's incredibly productive at creative brainstorming. I don't think it comes up with ideas as novel as those a really talented and creative person would, but it's better at it than I am. It's also good for technical uses: say you're writing a textbook and ask what it thinks you should cover (at any level of granularity). For (c), the whole reason I'm using it is that I can't figure out how to say something clearly, and it reorganizes everything and states it clearly. For (d), it can be downright amazing. I've used it to teach me about biases in RNA sequencing, about measure-theoretic approaches to proving MCMC convergence, and about pretty much anything that comes up that I might have Googled in the past and had to read through technical articles to understand. It's also great at converting notation. I've heard a lot of reports about how much people like it as a tutor because of its patience in answering streams of "why?" questions.
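
    To make (a) concrete: the vertical-line task that always defeats my memory is one call to matplotlib's axvline. A minimal sketch, with made-up data:

        import matplotlib.pyplot as plt

        fig, ax = plt.subplots()
        ax.plot(range(10), [x * x for x in range(10)])  # made-up data
        # Thick vertical line of a given color at a given x-axis position:
        ax.axvline(x=4.5, color="red", linewidth=3)
        plt.show()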

    What is it not so good at? I tried designing a whole dungeon, but it needs to be constantly reminded of context; it never really "learns" in context, and the attention models are only so good over long distances. It's terrible at Stan code beyond a few trivial examples. When I get into edge-case statistical topics like the infinitesimal jackknife, it starts confusing the math and the algorithm (though its top-level descriptions of what it is and why it's used are spot on). I also can't coax it into writing at a really high quality level, like a good textbook or the New Yorker; ask it to give you the outline and plot for five Booker-prize-winning novels and you'll see that it totally gets genre vs. literature, but it just can't write the books at that level.
