
OpenAI’s new tool attempts to explain language models’ behaviors

It’s often said that large language models (LLMs) along the lines of OpenAI’s ChatGPT are a black box, and certainly, there’s some truth to that. Even for data scientists, it’s difficult to know exactly why a model responds the way it does, such as when it invents facts out of whole cloth.

In an effort to peel back the layers of LLMs, OpenAI is developing a tool to automatically identify which parts of an LLM are responsible for which of its behaviors. The engineers behind it stress that it’s in the early stages, but the code to run it is available in open source on GitHub as of this morning.

“We’re trying to [develop ways to] anticipate what the problems with an AI system will be,” William Saunders, the interpretability team manager at OpenAI, told TechCrunch in a phone interview. “We want to really be able to know that we can trust what the model is doing and the answer that it produces.”

To that end, OpenAI’s tool uses a language model (ironically) to figure out the functions of the components of other, architecturally simpler LLMs — specifically OpenAI’s own GPT-2.

OpenAI’s tool attempts to simulate the behaviors of neurons in an LLM.

How? First, a quick explainer on LLMs for background. Like the brain, they’re made up of “neurons,” which observe some specific pattern in text to influence what the overall model “says” next. For example, given a prompt about superheroes (e.g., “Which superheroes have the most useful superpowers?”), a “Marvel superhero neuron” might boost the probability that the model names specific superheroes from Marvel movies.
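For intuition, here is a minimal toy sketch (in Python, with made-up sizes and random weights rather than anything from GPT-2) of what one MLP “neuron” amounts to: a learned direction in the hidden state whose activation, after a nonlinearity, nudges the vector that decides what the model says next.

```python
import numpy as np

def gelu(x):
    # Smooth nonlinearity used in GPT-style MLP layers (tanh approximation).
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

rng = np.random.default_rng(0)
d_model = 16                                 # toy hidden width (GPT-2 XL uses 1,600)
hidden_state = rng.normal(size=d_model)      # residual-stream vector at one token

# One "neuron": a learned input direction plus bias; its activation is a scalar.
w_in, b_in = rng.normal(size=d_model), 0.1
activation = gelu(hidden_state @ w_in + b_in)

# The activation scales an output direction that is added back into the hidden
# state, shifting which tokens the model considers likely to come next.
w_out = rng.normal(size=d_model)
hidden_state = hidden_state + activation * w_out

print(f"neuron activation: {activation:.3f}")
```

A “Marvel superhero neuron,” in this picture, is simply a unit whose activation is reliably high on superhero-related text and whose output direction pushes the relevant tokens up.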

OpenAI’s tool exploits this setup to break models down into their individual pieces. First, the tool runs text sequences through the model being evaluated and waits for cases where a particular neuron “activates” frequently. Next, it “shows” GPT-4, OpenAI’s latest text-generating AI model, these highly active neurons and has GPT-4 generate an explanation. To determine how accurate the explanation is, the tool provides GPT-4 with text sequences and has it predict, or simulate, how the neuron would behave. It then compares the behavior of the simulated neuron with the behavior of the actual neuron.
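The explain-then-simulate loop described above can be sketched roughly as follows. This is an illustration of the workflow as described, not OpenAI’s released code; ask_gpt4 is a hypothetical stand-in for the actual API calls, and the prompt wording is invented.

```python
from typing import Callable, List, Tuple

AskModel = Callable[[str], str]  # hypothetical stand-in for a GPT-4 API call

def explain_neuron(ask_gpt4: AskModel,
                   top_snippets: List[Tuple[str, List[float]]]) -> str:
    """Show the explainer model text where the neuron fired strongly and
    get back a one-line guess at what it responds to."""
    prompt = "These excerpts strongly activate one neuron in a smaller language model.\n"
    for text, activations in top_snippets:
        prompt += f"TEXT: {text}\nACTIVATIONS: {activations}\n"
    prompt += "In one sentence, what pattern is this neuron responding to?"
    return ask_gpt4(prompt)

def simulate_neuron(ask_gpt4: AskModel, explanation: str,
                    tokens: List[str]) -> List[float]:
    """Ask the explainer model to predict, token by token, how strongly the
    neuron would fire, given only the explanation."""
    prompt = (f"A neuron is described as: {explanation}\n"
              f"For each token in {tokens}, guess its activation from 0 to 10, "
              "as a comma-separated list.")
    return [float(x) for x in ask_gpt4(prompt).split(",")]
```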

“Using this methodology, we can basically, for every single neuron, come up with some kind of preliminary natural language explanation for what it’s doing and also have a score for how well that explanation matches the actual behavior,” Jeff Wu, who leads the scalable alignment team at OpenAI, said. “We’re using GPT-4 as part of the process to produce explanations of what a neuron is looking for and then score how well those explanations match the reality of what it’s doing.”
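That score can be read as how closely the simulated activations track the real ones across tokens. Below is a simple sketch of such a score using plain correlation; treat the exact metric and scaling as an assumption, since the article doesn’t spell out the released tool’s scoring details.

```python
import numpy as np

def explanation_score(real: list[float], simulated: list[float]) -> float:
    """Correlation between actual and simulated activations across tokens.

    Near 1.0 means the explanation predicts the neuron's behavior well;
    near 0 means it explains very little.
    """
    real_arr, sim_arr = np.asarray(real), np.asarray(simulated)
    if real_arr.std() == 0 or sim_arr.std() == 0:
        return 0.0  # a constant series carries no signal to correlate against
    return float(np.corrcoef(real_arr, sim_arr)[0, 1])

# Toy example: the simulation roughly tracks the real neuron.
print(explanation_score([0.1, 4.2, 0.0, 3.8], [0.0, 5.0, 0.5, 4.0]))
```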

The researchers were able to generate explanations for all 307,200 neurons in GPT-2, which they compiled in a data set that’s been released alongside the tool code.
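That figure is consistent with GPT-2 XL’s architecture, assuming the count refers to MLP neurons: 48 transformer blocks, each with an MLP four times the 1,600-dimensional hidden width.

```python
# Back-of-the-envelope check of the 307,200 figure (GPT-2 XL sizes assumed).
layers = 48                 # transformer blocks
d_model = 1600              # hidden (residual stream) width
mlp_width = 4 * d_model     # MLP neurons per block
print(layers * mlp_width)   # -> 307200
```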

Tools like this could one day be used to improve an LLM’s performance, the researchers say — for example to cut down on bias or toxicity. But they acknowledge that it has a long way to go before it’s genuinely useful. The tool was confident in its explanations for about 1,000 of those neurons, a small fraction of the total.

A cynical person might argue, too, that the tool is essentially an advertisement for GPT-4, given that it requires GPT-4 to work. Other LLM interpretability tools, such as DeepMind’s Tracr, a compiler that translates programs into neural network models, are less dependent on commercial APIs.

Wu said that isn’t the case — the fact the tool uses GPT-4 is merely “incidental” — and, on the contrary, shows GPT-4’s weaknesses in this area. He also said it wasn’t created with commercial applications in mind and, in theory, could be adapted to use LLMs besides GPT-4.


The tool identifies neurons activating across layers in the LLM.

“Most of the explanations score quite poorly or don’t explain that much of the behavior of the actual neuron,” Wu said. “A lot of the neurons, for example, are active in a way where it’s very hard to tell what’s going on — like they activate on five or six different things, but there’s no discernible pattern. Sometimes there is a discernible pattern, but GPT-4 is unable to find it.”

That’s to say nothing of more complex, newer and larger models, or models that can browse the web for information. But on that second point, Wu believes that web browsing wouldn’t change the tool’s underlying mechanisms much. It could simply be tweaked, he says, to figure out why neurons decide to make certain search engine queries or access particular websites.

“We hope that this will open up a promising avenue to address interpretability in an automated way that others can build on and contribute to,” Wu said. “The hope is that we really actually have good explanations of not just what neurons are responding to but overall, the behavior of these models — what kinds of circuits they’re computing and how certain neurons affect other neurons.”

OpenAI’s new tool attempts to explain language models’ behaviors by Kyle Wiggers originally published on TechCrunch
