As AI eliminates jobs, a way to keep people afloat financially (that isn’t universal basic income)


Image credits: StefanoVicegur / Getty Images

In Silicon Valley, some of the brightest minds believe that a universal basic income (UBI), which guarantees people unrestricted cash payments, will help them survive and thrive as advanced technologies eliminate more jobs as we know them, from white-collar and creative roles (lawyers, journalists, artists, software engineers) to labor jobs. The idea has gained enough traction that dozens of guaranteed income programs have been launched in U.S. cities since 2020.

Yet even Sam Altman, the CEO of OpenAI and one of UBI’s highest-profile backers, doesn’t believe it is a complete solution. As he said during a sit-down earlier this year: “I think it is a little part of the solution. I think it’s great. I think as [advanced artificial intelligence] participates more and more in the economy, we should distribute wealth and resources more than we have, and that will be important over time. But I don’t think that’s going to solve the problem. I don’t think that’s going to give people meaning, and I don’t think it means people are going to entirely stop trying to create and do new things and whatever else. So I would consider it an enabling technology, but not a plan for society.”

The question is what a plan for society might look like instead, and computer scientist Jaron Lanier, a founder of the field of virtual reality, writes in this week’s New Yorker that “data dignity” could be one solution, if not the answer.

Here’s the basic premise: Right now, we mostly give our data away for free in exchange for free services. Lanier argues that it will become more important than ever that we stop doing this, and that the “digital things” we rely on, partly social networks but also, increasingly, artificial intelligence models like OpenAI’s GPT-4, instead “be connected with the humans” who give them so much to ingest in the first place.

The idea is that “people get paid for what they make, even when it’s filtered and recombined by the big models.”
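
To make that premise concrete, here is a minimal sketch in Python of the accounting such a scheme implies. Everything in it (the influence scores, the fixed royalty pool, and the `attribute_output` function) is a hypothetical illustration rather than anything Lanier specifies; the hard part, measuring which contributions actually shaped a given output, is exactly the open problem discussed below.

```python
def attribute_output(influences: dict[str, float],
                     royalty_pool: float) -> dict[str, float]:
    """Split a fixed royalty pool among contributors in proportion to
    their measured influence on a single model output.

    `influences` maps a contributor ID to an influence score; producing
    those scores for a real large model is the unsolved research problem.
    """
    total = sum(influences.values())
    if total == 0:
        return {}
    return {cid: royalty_pool * score / total
            for cid, score in influences.items()}

# Example: a generated image judged to draw mostly on one artist's work.
payouts = attribute_output(
    {"artist_a": 0.5, "artist_b": 0.25, "writer_c": 0.25},
    royalty_pool=1.00,
)
print(payouts)  # {'artist_a': 0.5, 'artist_b': 0.25, 'writer_c': 0.25}
```

The pro-rata split is the trivial step; everything contentious lives in how the influence scores would be produced in the first place.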

The concept isn’t entirely new; Lanier first introduced the notion of data dignity in a 2018 Harvard Business Review article titled “A Blueprint for a Better Digital Society.” As he wrote at the time with his co-author, economist Glen Weyl, “[R]hetoric from the tech sector suggests a coming wave of underemployment due to artificial intelligence (AI) and automation” and “a future where people are increasingly treated as worthless and devoid of economic agency.”

But they note that the “rhetoric” of UBI advocates “leaves room for only two conclusions, and they are extreme. Either there will be mass poverty despite technological advances, or much wealth will have to be taken under central, national control through a social wealth fund to provide citizens a universal basic income.”

Both scenarios, they wrote, “hyper-concentrate power and undermine or ignore the value of data creators.”

Of course, assigning people the right amount of credit for their countless contributions to everything that exists online is no small challenge (even if one can imagine AI auditing startups springing up to tackle the problem). Lanier acknowledges that even data-dignity researchers can’t agree on how to disentangle everything that AI models have absorbed, or how detailed an accounting should be attempted.

But he thinks, perhaps optimistically, that it could be done gradually. “The system wouldn’t necessarily account for the billions of people who have made ambient contributions to big models (those who have added to a given model’s simulated competence with a certain topic, for example). [It] might attend only to the small number of special contributors who emerge in a given situation.” Over time, though, “more people might be included, as intermediate rights organizations (guilds, trade unions, professional groups and so on) begin to play a role.”
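
One way to read “attend only to the small number of special contributors who emerge in a given situation” is as a retrieval problem: embed a model output, find the handful of training items most similar to it, and credit only their owners while ignoring the ambient long tail. The sketch below assumes precomputed embeddings and an arbitrary similarity threshold, both my assumptions; Lanier’s essay prescribes no such mechanism.

```python
import numpy as np

def special_contributors(output_vec: np.ndarray,
                         corpus_vecs: np.ndarray,
                         owners: list[str],
                         threshold: float = 0.8,
                         top_k: int = 5) -> list[tuple[str, float]]:
    """Return the few training-data owners whose items most resemble a
    model output, ignoring the long tail of ambient contributors.

    Vectors are assumed to be embeddings of the output and of each
    training item; how to embed them faithfully is itself unsettled.
    """
    # Cosine similarity between the output and every training item.
    norms = np.linalg.norm(corpus_vecs, axis=1) * np.linalg.norm(output_vec)
    sims = corpus_vecs @ output_vec / np.clip(norms, 1e-12, None)
    # Keep only strong matches, best first, capped at top_k.
    idx = np.argsort(sims)[::-1][:top_k]
    return [(owners[i], float(sims[i])) for i in idx if sims[i] >= threshold]
```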

Of course, the most immediate challenge is the black-box nature of current AI tools, says Lanier, who believes that “systems need to be more transparent. We need to get better at saying what’s going on inside them and why.”

While OpenAI had released at least some information about its training data in previous years, it has since closed the kimono completely. Indeed, when Greg Brockman told TechCrunch last month that the training data for GPT-4, OpenAI’s newest and most powerful large language model, came from “a variety of licensed, created, and publicly available data sources, which may include publicly available personal information,” he declined to offer anything more specific.

As OpenAI stated when releasing GPT-4, there are many downsides to disclosure: “Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.”

The same is true of every large language model today. Google’s Bard chatbot, for example, is based on the LaMDA language model, which is trained on internet-content datasets called Infiniset, about which little is known, though a year ago Google’s research team wrote that it incorporated 2.97 billion documents and 1.12 billion dialogues with 13.39 billion utterances.

OpenAI — whose technology in particular is spreading like wildfire — is already in the crosshairs of regulators for its aversion to more transparency. Italian authorities have blocked the use of ChatGPT, and French, German, Irish and Canadian data regulators are also investigating how the data was collected and used.

But as Margaret Mitchell, an AI researcher and chief ethics scientist at the startup Hugging Face, who was previously the co-lead of Google’s AI ethics team, told Technology Review, it may be nearly impossible at this point for these companies to identify individuals’ data and remove it from their models.

As the outlet explained: “The company could have saved itself a giant headache by building in robust data record-keeping from the start, she says. Instead, it is common in the AI industry to build datasets for AI models by scraping the web indiscriminately and then outsourcing the work of removing duplicates or irrelevant data points, filtering unwanted things, and fixing typos. These methods, and the sheer size of the dataset, mean tech companies tend to have a very limited understanding of what has gone into training their models.”
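
What “robust data record-keeping” might mean in practice is, at minimum, a provenance log written at scrape time. Here is a minimal sketch, assuming a JSON-lines log and SHA-256 content hashes (my choices for illustration; no company has published such a design): each document’s source and hash are recorded on ingestion, so an individual’s data could later be located and excluded before the next training run.

```python
import hashlib
import json
import time

LOG_PATH = "provenance.jsonl"  # hypothetical append-only provenance log

def record_document(text: str, source_url: str) -> str:
    """Log where a training document came from, when, and its content hash."""
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    entry = {"sha256": digest, "source_url": source_url, "fetched_at": time.time()}
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return digest

def find_documents(source_fragment: str) -> list[dict]:
    """Locate every logged document from a given site or person,
    e.g. to honor a deletion request before retraining."""
    matches = []
    with open(LOG_PATH, encoding="utf-8") as f:
        for line in f:
            entry = json.loads(line)
            if source_fragment in entry["source_url"]:
                matches.append(entry)
    return matches
```

The same content hash would also make deduplication, one of the steps Mitchell notes is usually outsourced, a simple lookup rather than a second pass over the raw scrape.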

This is a clear challenge for Lanier’s proposal, and notably, Lanier describes Altman as a “colleague and friend” in his New Yorker article.

Whether it makes the proposal impossible, only time will tell.

Certainly, there is merit in wanting to give people ownership of their work; indeed, whether OpenAI and others had the right to scrape the entire internet to feed their algorithms is already at the heart of several wide-ranging copyright infringement claims against them.

This so-called dignity of data can also go a long way toward preserving human sanity over time, Lanier points out in his fascinating New Yorker piece.

While a universal basic income “is tantamount to paying everyone a subsidy in order to preserve the idea of black-box AI,” ending “the black-box nature of our current AI models” would make an accounting of people’s contributions easier, and make them more likely to continue making contributions.

Importantly, Lanier adds, it could also help establish “a new creative class instead of a new dependent class.”