As AI eliminates jobs, a way to keep people afloat financially (and no, it's not universal basic income)


In Silicon Valley, some of the brightest minds believe that a universal basic income (UBI) — guaranteeing people unrestricted cash payments — will help them survive and thrive as advanced technologies eliminate more careers as we know them, from white-collar and creative roles (lawyers, journalists, artists, software engineers) to labor jobs. The idea has gained enough traction that dozens of guaranteed income programs have launched in US cities since 2020.

Yet even Sam Altman, CEO of OpenAI and one of UBI's highest-profile proponents, doesn't believe it's a complete solution. As he said during a sit-down earlier this year: "I think it is a little part of the solution. I think it's great. I think as [advanced artificial intelligence] participates more and more in the economy, we should distribute wealth and resources much more than we have, and that will be important over time. But I don't think that's going to solve the problem. I don't think that's going to give people meaning, I don't think it means people are going to entirely stop trying to create and do new things and whatever else. So I would consider it an enabling technology, but not a plan for society."

The question is what a plan for society should look like instead, and computer scientist Jaron Lanier, a founder of the field of virtual reality, writes in The New Yorker this week that "data dignity" could be one solution, if not the answer.

Here's the basic premise: Right now, we mostly give our data away for free in exchange for free services. Lanier argues it will become more important than ever that we stop doing this, and that the "digital stuff" we rely on — social networks in part, but increasingly also AI models like OpenAI's GPT-4 — instead "be connected with the humans" who give them so much to ingest in the first place.

The idea is for people to "get paid for what they create, even when it is filtered and recombined" through big models.

The concept isn't entirely new: Lanier first introduced the notion of data dignity in a 2018 Harvard Business Review piece titled "A Blueprint for a Better Digital Society." As he wrote at the time with co-author, economist Glen Weyl, "[R]hetoric from the tech sector suggests a coming wave of underemployment due to artificial intelligence (AI) and automation" and "a future where people are increasingly treated as worthless and devoid of economic agency."

But the "rhetoric" of UBI advocates "leaves room for only two outcomes," and both are fairly extreme, Lanier and Weyl observed. "Either there will be mass poverty despite technological advances, or much wealth will have to be taken under central, national control through a social wealth fund to provide citizens a universal basic income."

But both scenarios, they wrote, "hyper-concentrate power and undermine or ignore the value of data creators."

Untie my mind

Of course, figuring out how to assign people the right amount of credit for their countless contributions to everything that exists online is no small challenge (even as one imagines AI-auditing startups promising to tackle the problem). Lanier acknowledges that even data-dignity researchers can't agree on how to disentangle everything that AI models have ingested, or how detailed an accounting should be attempted.

But he thinks — perhaps optimistically — that it could be done gradually. The system wouldn't necessarily account for the billions of people who have made ambient contributions to the big models; it might at first attend only to the small number of special contributors who emerge in a given situation. Over time, though, "more people might be included, as intermediate rights organizations — unions, guilds, professional groups, and so on — start to play a role."

Of course, the more immediate challenge is the black-box nature of current AI tools, says Lanier, who believes that "systems must be made more transparent. We need to get better at saying what is going on inside them and why."

While OpenAI had released at least some of its training data in previous years, it has since closed the kimono completely. Indeed, Greg Brockman told TechCrunch last month of GPT-4, its newest and most powerful large language model to date, that its training data came from a "variety of licensed, created, and publicly available data sources, which may include publicly available personal information," but he declined to offer anything more specific.

As OpenAI stated in the technical report it published alongside GPT-4's release, there are far more downsides than upsides to disclosure: "Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar."

The same is true of every large language model currently. Google's Bard chatbot, for example, is based on the LaMDA language model, which is trained on datasets of internet content called Infiniset — about which little is known beyond what the Google research team wrote a year ago: that, at some point in time, it incorporated 2.97 billion documents and 1.12 billion dialogs comprising 13.39 billion utterances.

Regulators are grappling with what to do about this. OpenAI — whose technology in particular is spreading like wildfire — is already in the crosshairs of a growing number of countries, including Italy, whose data protection authority has blocked the use of ChatGPT. French, German, Irish, and Canadian data regulators are also investigating how it collects and uses data.

But as Margaret Mitchell, an AI researcher who was formerly Google's AI ethics co-lead, told MIT Technology Review, it might be nearly impossible at this point for these companies to identify individuals' data and remove it from their models.

As the outlet explained: OpenAI "could have saved itself a giant headache by building in robust data record-keeping from the start, [according to Mitchell]. Instead, it is common in the AI industry to build datasets for AI models by scraping the web indiscriminately and then outsourcing the work of removing duplicates or irrelevant data points, filtering unwanted things, and fixing typos."

How to save a life

The fact that these tech companies may currently have only a limited understanding of what is now in their models is an obvious challenge to the "data dignity" proposal of Lanier, who calls Altman a "colleague and friend" in his New Yorker piece.

Whether it renders it impossible is something only time will tell.

Certainly, there is merit in wanting to give people ownership over their work, and frustration over the issue could well grow as more of the world is reshaped by these new tools.

Whether or not OpenAI and its rivals had the right to scrape the entire internet to feed their algorithms is already at the heart of numerous and wide-ranging copyright infringement lawsuits against them.

But so-called data dignity could also go a long way toward preserving humans' sanity over time, Lanier suggests in his fascinating New Yorker piece.

As he sees it, universal basic income "amounts to putting everyone on the dole in order to preserve the idea of black-box artificial intelligence." Meanwhile, ending the black-box nature of current AI models would make an accounting of people's contributions easier — which would, in turn, make them far more likely to continue making contributions.

Importantly, Lanier adds, it could also help "establish a new creative class instead of a new dependent class." And which would you rather be a part of?