Meta expects recommendation models ‘orders of magnitude’ bigger than GPT-4. Why?

Meta made a remarkable claim in an announcement published today intended to give more clarity on its content recommendation algorithms. It’s preparing for behavior analysis systems “orders of magnitude” bigger than the biggest large language models out there, including ChatGPT and GPT-4. Is that really necessary?

Every once in a while Meta decides to freshen its commitment to transparency by explaining how a few of its algorithms work. Sometimes this is revealing or informative, and sometimes it only leads to more questions. This occasion is a little of both.

In addition to the “system cards” explaining how AI is used in a given context or app, the social and advertising network posted an overview of the AI models it uses. For instance, it may be worthwhile to know whether a video represents roller hockey or roller derby, even though there’s some visual overlap, so it can be recommended properly.

Indeed Meta has been among the more prolific research organizations in the field of multimodal AI, which combines data from multiple modalities (visual and auditory, for instance) to better understand a piece of content.

Few of these models are released publicly, though we frequently hear about how they are used internally to improve things like “relevance,” which is a euphemism for targeting. (They do allow some researchers access to them.)

Then comes this interesting little tidbit as the company describes how it is building out its computational resources:

In order to deeply understand and model people’s preferences, our recommendation models can have tens of trillions of parameters — orders of magnitude larger than even the biggest language models used today.

I pressed Meta to get a little more specific about these theoretical tens-of-trillions models, and that’s just what they are: theoretical. In a clarifying statement, the company said “We believe our recommendation models have the potential to reach tens of trillions of parameters.” This phrasing is a bit like saying your burgers “can” have 16-ounce patties but then admitting they’re still at the quarter-pounder stage. Nevertheless the company clearly states that it aims to “ensure that these very large models can be trained and deployed efficiently at scale.”
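For a sense of scale, here is some back-of-envelope math on what storing that many weights would even mean. These are public estimates and round numbers, not Meta's figures:

```python
# Back-of-envelope math on what "tens of trillions of parameters" means for
# storage alone. Parameter counts are public estimates or round numbers, not
# Meta's figures; fp16 (2 bytes per parameter) is assumed throughout.

def weights_in_tb(params: float, bytes_per_param: int = 2) -> float:
    """Raw weight storage in terabytes."""
    return params * bytes_per_param / 1e12

for name, params in [
    ("GPT-3 (175B, public figure)", 175e9),
    ("Hypothetical 1T-parameter LLM", 1e12),
    ("Aspirational 10T recommender", 10e12),
]:
    print(f"{name}: ~{weights_in_tb(params):.2f} TB of fp16 weights")
```

One caveat worth keeping in mind: in recommendation models, the bulk of those parameters typically live in enormous sparse embedding tables (a row for every user, item, and feature value) rather than in dense layers, so the count can balloon past an LLM's without the per-prediction compute growing proportionally.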

Would a company build costly infrastructure for software it doesn’t intend to create — or use? It seems unlikely, but Meta declined to confirm (though it did not deny) that it is actively pursuing models of this size. The implications are clear, so while we can’t treat this tens-of-trillions-parameter model as extant, we can treat it as genuinely aspirational and likely in the works.

“Understand and model people’s preferences,” by the way, must be understood to mean behavior analysis of users. Your actual preferences could probably be represented by a plaintext list a hundred words long. It can be hard to understand, at a fundamental level, why you would need a model this large and complex to handle recommendations even for a couple billion users.

The truth is the problem space is indeed huge: there are billions and billions of pieces of content, all with attendant metadata, and no doubt all kinds of complex vectors showing that people who follow Patagonia also tend to donate to the World Wildlife Fund, buy increasingly expensive bird feeders, and so on. So maybe it isn’t so surprising that a model trained on all this data would be quite large. But “orders of magnitude larger” than even the biggest out there, something trained on practically every written work accessible?
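Those “complex vectors,” incidentally, are not a figure of speech. Recommendation systems really do learn embeddings in which Patagonia followers and bird-feeder buyers land near one another. Here’s a toy sketch of the idea, with every name and number invented for illustration (this is the textbook technique, not Meta’s model):

```python
import numpy as np

# Toy embedding-based recommendation. All vectors and item names are invented
# for illustration; this shows the textbook idea, not Meta's actual model.
rng = np.random.default_rng(42)
dim = 8  # production systems use hundreds of dimensions

# Pretend these embeddings were learned from engagement data: items that
# co-occur in user histories end up near each other, simulated here by
# perturbing a shared base vector.
patagonia = rng.normal(size=dim)
items = {
    "Patagonia": patagonia,
    "World Wildlife Fund": patagonia + 0.3 * rng.normal(size=dim),
    "increasingly expensive bird feeder": patagonia + 0.5 * rng.normal(size=dim),
    "monster truck rally": rng.normal(size=dim),  # unrelated interest
}

# Crude user vector: whatever the user has engaged with so far.
user = items["Patagonia"]

# Score each item by dot product; higher means "more likely to engage."
scores = {name: float(user @ vec) for name, vec in items.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:>34}  {score:+.2f}")
```

Multiply that by billions of users and items, each with thousands of behavioral features, and the parameters pile up fast.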

There isn’t a reliable parameter count for GPT-4, and leaders in the AI world have found that it’s a reductive measure of performance anyway, but GPT-3, the model behind ChatGPT, sits at around 175 billion parameters, and GPT-4 is believed to be higher than that but lower than the wild 100-trillion claims. Even if Meta is exaggerating a bit, this is still scary big.

Think of it: an AI model as large or larger than any yet created… what goes in one end is every single action you take on Meta’s platforms, what comes out the other is a prediction of what you will do or like next. Kind of creepy, isn’t it?

Of course they’re not the only ones doing this. TikTok led the charge in algorithmic tracking and recommendation, and has built its social media empire on its addictive feed of “relevant” content meant to keep you scrolling until your eyes hurt. Its competitors are openly envious.

Meta is clearly aiming to blind advertisers with science, both with the stated ambition to create the biggest model on the block, and with passages like the following:

These systems understand people’s behavior preferences utilizing very large-scale attention models, graph neural networks, few-shot learning, and other techniques. Recent key innovations include a novel hierarchical deep neural retrieval architecture, which allowed us to significantly outperform various state-of-the-art baselines without regressing inference latency; and a new ensemble architecture that leverages heterogeneous interaction modules to better model factors relevant to people’s interests.

The above paragraph isn’t meant to impress researchers (they know all this stuff) or users (they don’t understand or care). But put yourself in the shoes of an advertiser who is beginning to question whether their money is well spent on Instagram ads instead of other options. This technical palaver is meant to dazzle them, to convince them that not only is Meta a leader in AI research, but that AI genuinely excels at “understanding” people’s interests and preferences.
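To be fair, the jargon does correspond to a real and fairly standard pattern. Meta hasn’t published its “hierarchical deep neural retrieval architecture,” but retrieval followed by ranking generally looks something like this toy sketch, in which every number and function is my own stand-in:

```python
import numpy as np

# Generic two-stage retrieve-then-rank pipeline at toy scale. This is the
# industry-standard pattern, not Meta's (unpublished) architecture; every
# number and function here is an assumption for illustration.
rng = np.random.default_rng(7)
n_items, dim = 100_000, 16  # the real corpus is billions of items

item_vecs = rng.normal(size=(n_items, dim)).astype(np.float32)
user_vec = rng.normal(size=dim).astype(np.float32)

# Stage 1, retrieval: a cheap dot-product pass narrows the corpus to a short
# candidate list. (Production systems use approximate nearest-neighbor
# indexes rather than a brute-force scan.)
coarse_scores = item_vecs @ user_vec
candidates = np.argsort(coarse_scores)[-500:]

# Stage 2, ranking: a heavier model rescores only the shortlist. This
# placeholder stands in for the big neural network.
def expensive_ranker(user: np.ndarray, item: np.ndarray) -> float:
    return float(np.tanh(user @ item) * np.linalg.norm(user + item))

ranked = sorted(candidates,
                key=lambda i: expensive_ranker(user_vec, item_vecs[i]),
                reverse=True)
print("Top 10 recommended item ids:", list(ranked[:10]))
```

The retrieval stage exists because you can’t run a heavyweight model over billions of items per user; the ranking stage exists because the cheap pass is too crude to trust on its own.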

In case you doubt it: “more than 20 percent of content in a person’s Facebook and Instagram feeds is now recommended by AI from people, groups, or accounts they don’t follow.” Just what we asked for! So that’s that. AI is working great.

But all this is also a reminder of the hidden apparatus at the heart of Meta, Google, and other companies whose primary motivating principle is to sell ads with increasingly granular and precise targeting. The value and legitimacy of that targeting must be reiterated constantly even as users revolt and advertising multiplies and insinuates rather than improves.

Never once has Meta done something sensible like present me with a list of 10 brands or hobbies and ask which of them I like. They’d rather watch over my shoulder as I skim the web looking for a new raincoat and act like it’s a feat of advanced artificial intelligence when they serve me raincoat ads the next day. It’s not entirely clear that the latter approach is superior to the former, or if it is, by how much. The entire web has been built up around a collective belief in precision ad targeting, and now the latest technology is being deployed to prop it up for a new, more skeptical wave of marketing spend.

Of course you need a model with ten trillion parameters to tell you what people like. How else could you justify the billion dollars you spent training it!

By Devin Coldewey. Originally published on TechCrunch.
