Original link: https://www.williamlong.info/archives/7146.html
In recent years, Jaron Lanier, an American computer scientist, visual artist, writer on the philosophy of computing, and futurist, and Glen Weyl, an economist at Microsoft Research New England, have proposed the concept of "data dignity," which emphasizes that individuals have the right to control and manage their own data, in order to keep that data secure and private and to protect it from misuse or unauthorized access.
On April 20th, Lanier published an article titled "There Is No AI" in The New Yorker, arguing that we should stop deifying artificial intelligence and instead see it as an innovative form of social collaboration. He objected to the recent open letter calling for a pause on training more advanced AI systems, and reiterated the concept of "data dignity": end the black box of artificial intelligence and record the provenance of bits, so that "people can get paid for the things they create, even when those things are filtered and recombined through large models," and so that "when a large model provides valuable output, a data-dignity approach can track down the most unique and influential contributors."
According to Lanier, the successful introduction of each new application of artificial intelligence or robotics could involve the inauguration of a new kind of creative work, which would help ease the transition to an economy into which large models are integrated.
Jaron Lanier is considered a pioneer in the field of virtual reality. In 2014 he was named one of the world's top 50 thinkers by Prospect magazine, and in 2018 Wired named him one of the 25 most influential people in the past 25 years of technology history. The following is a translation of the New Yorker article mentioned above, lightly edited for readability.
Jaron Lanier left Atari in 1985 to found VPL Research, the first company to sell VR goggles and wired gloves. He joined Microsoft in 2006 and has worked at Microsoft Research since 2009 as an interdisciplinary scientist.

As a computer scientist, I don't like the term "artificial intelligence." In fact, I think it is misleading, maybe even a little dangerous. Everyone is already using the term, and it might seem a bit late to argue about it. But we are at the dawn of a new technological era, and misunderstandings now can easily lead us astray later.
The term "artificial intelligence" has a long history; it was coined in the 1950s, in the early days of computing. More recently, computer scientists have grown up with pop-culture characters like those in The Terminator and The Matrix, and Commander Data from Star Trek: The Next Generation. These cultural touchstones have become an almost religious mythology in tech culture. It is only natural that computer scientists aspire to create artificial intelligence and realize a long-held dream.
But, shockingly, many of those who pursue the dream of AI also worry that it could spell the end of humanity. It is widely believed, even by scientists at the very center of today's work, that what AI researchers are doing could lead to the end of our species, or at least do humanity great harm, and soon. In a recent poll, half of AI scientists agreed that there is at least a ten-per-cent chance that humanity will be wiped out by AI. Even my colleague Sam Altman, who runs OpenAI, has made similar comments. Walk into any Silicon Valley coffee shop and you will hear the same argument: one person says that the new code is just code, and that everything remains under human control, while another argues that anyone who holds that view simply does not grasp how profound the new technology is. These debates are not entirely rational: when I ask my most frightened scientist friends to spell out how an AI apocalypse might actually happen, they say things like, "Accelerating progress will fly right past us, and we will not be able to imagine what is happening."
I disagree with this way of talking. Many of my friends and colleagues have been deeply impressed by their experiences with the latest big models, such as GPT-4, and are practically keeping vigil, waiting for a deeper intelligence to emerge. My position is not that they are wrong, but that we cannot be sure; we should keep open the option of classifying the software differently.
The most pragmatic stance is to view AI as a tool, not a living thing. My attitude does not remove the possibility of danger: no matter how we think about it, we can still design and operate new technologies poorly in ways that harm us and even lead to our extinction. Deifying technology is more likely to prevent us from operating it well, and this kind of thinking limits our imagination, tying it to yesterday’s dreams. We can work better under the assumption that there is no such thing as artificial intelligence, and the sooner we understand this, the sooner we can start intelligently managing new technologies.
If the new technology isn’t true artificial intelligence, what is it? In my opinion, the most accurate way to understand what we are building today is as an innovative form of social collaboration.
A program like OpenAI's GPT-4, which can write sentences to order, is something like a version of Wikipedia that includes far more data, mashed together statistically. Programs that create images to order are something like a version of online image search, but with a system for combining the pictures. In both cases, it is people who wrote the text and furnished the images. The new programs mash up work done by human minds. What is innovative is that the mash-up process has become guided and constrained, so that the results are usable and often striking. This is a significant achievement, and one worth celebrating, but it can be seen as illuminating a previously hidden coherence among human creations, rather than as the invention of a new mind.
In my view, this way of seeing things actually glorifies the technology. After all, what is civilization but social collaboration? Thinking of AI as a way of cooperating, rather than as a technology for creating independent, intelligent beings, might make it less mysterious, less like HAL 9000 (the computer in 2001: A Space Odyssey) or Commander Data. But that is a good thing, because mystique only makes mismanagement more likely.
It is easy to attribute intelligence to the new systems, which have a flexibility and unpredictability that we do not usually associate with computer technology. But this flexibility arises from simple mathematics. A large language model like GPT-4 contains a cumulative record of how particular words coincide across the vast quantities of text the program has processed. This enormous tabulation causes the system to intrinsically approximate many grammatical patterns, along with aspects of what might be called authorial style. When you enter a query consisting of certain words in a certain order, your input is correlated with what is in the model; the results can come out a little differently each time, because of the complexity of correlating billions of entries.
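To make the statistical picture concrete, here is a minimal, deliberately toy sketch in Python of text continuation driven purely by a table of word co-occurrences. It is not how GPT-4 works internally (GPT-4 uses a neural network, not a lookup table), but it illustrates the essay's point that a big enough record of which words follow which contexts, plus a little randomness, already yields flexible, non-repeating output. Every name and string in the sketch is made up for illustration.

```python
import random
from collections import defaultdict

# Toy illustration only: tabulate which words tend to follow which short
# contexts in a body of text, then sample from that table to continue a
# prompt. The spirit -- "a cumulative record of how words coincide across
# vast amounts of text" -- is what matters here, not the mechanism.

def build_table(text, context_len=2):
    """Record, for every run of `context_len` words, the words that follow it."""
    words = text.split()
    table = defaultdict(list)
    for i in range(len(words) - context_len):
        context = tuple(words[i:i + context_len])
        table[context].append(words[i + context_len])
    return table

def continue_text(table, prompt, n_words=20, context_len=2):
    """Extend the prompt by repeatedly sampling a plausible next word."""
    out = prompt.split()
    for _ in range(n_words):
        context = tuple(out[-context_len:])
        candidates = table.get(context)
        if not candidates:                     # context never seen: stop early
            break
        out.append(random.choice(candidates))  # sampling is why runs differ
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ate the fish on the mat"
table = build_table(corpus)
print(continue_text(table, "the cat"))  # output varies from run to run
```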
The non-repeating nature of the process can make it feel alive. And, in a sense, it can make the new systems more human-centered. When you synthesize a new image with an AI tool, you may be offered a batch of similar options and have to choose among them; if you are a student using an LLM (large language model) to cheat, you might read through several model-generated options and select one. A technology that generates non-repeating content demands a little human choice.
Many of the uses of AI that I like rest on the advantages computers give us when they become less rigid. Digital stuff as we have known it has a brittle quality that forces people to conform to it rather than assess it. Conforming to the demands of digital design creates an expectation of human compliance. One positive aspect of AI is that, if we use it well, it could mean the end of this torment. We can now imagine a website that reformats itself on the fly for someone who is color-blind, or one that tailors itself to a person's particular cognitive abilities and style. Humanists like me want people to have more control, rather than being overly influenced or guided by technology. Flexibility lets us regain some agency.
Yet despite these possible benefits, it is entirely reasonable to worry that the new technology will push us around in ways we do not like or understand. Recently, some friends of mine circulated a petition calling for a pause on the most ambitious AI development. The idea is that we would work on policy during the pause. The petition was signed by some in our circle but not others. I found the concept too vague: what level of progress would mean the pause could end? Every week, I receive new but always vague mission statements from organizations seeking to begin the process of setting AI policy.
These efforts are well intentioned, but they strike me as hopeless. Having worked on EU privacy policy for years, I have come to realize that we do not know what privacy is. It is a term we use every day, and it makes sense in context, but we cannot pin it down well enough to generalize. The closest we have to a definition of privacy is probably the "right to be left alone," but that seems quaint in an age of constant dependence on digital services. In the context of artificial intelligence, "the right not to be manipulated by computers" certainly seems valid, but it does not quite capture everything we want.
AI policy conversations are dominated by terms like "alignment" (does what an AI "wants" match what humans want?), "safety" (can we foresee guardrails that would foil a bad AI?), and "fairness" (can we forestall all the ways a program might treat certain people badly?). The community has certainly gained a great deal by pursuing these ideas, but that has not allayed our fears.
Recently, I called my peers and asked if there was anything they could agree on. I found that there is a basis for agreement. We all seem to agree that deepfakes — images, videos, etc. that are fake but look real — should be flagged by their creators. Communications from virtual humans, as well as automated interactions designed to manipulate human thinking or actions, should also be labeled. People should understand what they’re seeing, and they should have reasonable choices in return.
How can all this be accomplished? I have found near-unanimous agreement that the black-box nature of current AI tools must end. The systems must become more transparent. We need to get better at saying what is going on inside them and why. This will not be easy. The problem is that the large-model AI systems we are talking about are not made of explicit ideas. There is no definite representation of what the system "wants," no label attached when it does a particular thing, such as manipulating a person. There is just a giant ocean of jelly, a vast mathematical mixture.

A writers'-rights group has proposed that real human writers be paid in full when tools like GPT are used to create scripts; after all, the system is drawing on scripts made by real people. But when we use AI to produce film clips, and perhaps eventually whole films, there will not necessarily be a screenwriting phase. A movie might be produced that appears to have a script, a soundtrack, and so on, but it will have been computed as a single whole. Trying to open the black box by making the system spit out otherwise unnecessary items such as scripts, sketches, or statements of intent would involve building another black box to explain the first: an infinite regress.
At the same time, that’s not to say that the interior of a large model is necessarily an untouched wilderness. At some point in the past, a real person created an illustration that was fed into the model as data, and with contributions from others, this became a fresh image. Large-scale AI is made of people, and the way to open the black boxes is to reveal them.
A concept I helped develop is usually called "data dignity." Long before the rise of big-model "artificial intelligence," people were giving away their data for free in exchange for free services such as internet search or social networking. That familiar arrangement turned out to have a dark side: because of "network effects," a handful of platforms took over, weeding out smaller players such as local newspapers. Worse, since the immediate online experience was free, the only business left was the peddling of influence. Users experience what appears to be a collectivist paradise, but they are targeted by stealthy, addictive algorithms that make people vain, irritable, and paranoid.
In a world of data dignity, digital things would typically remain connected with the people who want to be known for having made them. In some versions of the idea, people could get paid for the things they create, even when those things are filtered and recombined through big models, and tech hubs would earn fees for facilitating the things people want to do. Some are horrified by the idea of capitalism online, but this would be a more honest capitalism. The familiar "free" arrangement has been a disaster.
One reason the tech world fears that artificial intelligence could become an existential threat is that it could be used to toy with people, as previous waves of digital technology have been. Given the power and potential reach of these new systems, it is not unreasonable to fear extinction as a possible outcome. And because that danger is widely recognized, the arrival of big-model AI could be an occasion to reform the tech industry for the better.
Implementing data dignity will require technical research and policy innovation. In that sense, the subject excites me as a scientist. Opening up the black box only makes the model more interesting. And it might help us learn more about language, a truly impressive human invention that we’re still exploring after all these hundreds of thousands of years.
Could data dignity address the economic worries that are often expressed about AI? The chief concern is that workers will be devalued or displaced. In public, technologists sometimes say that, in the coming years, people who work with AI will be more productive and will find new kinds of jobs in a more productive economy (for example, as prompt engineers for AI programs, people who collaborate with or control an AI). In private, however, the same people often say, "No, AI will overtake this idea of cooperation." No more money to be made, from then on, by accountants, radiologists, truck drivers, writers, film directors, or musicians.
When a big model provides valuable output, a data-dignity approach would trace the most unique and influential contributors. Suppose, for instance, you ask a model for an animated film of your children adventuring in an oil-painting world of talking cats. Certain key oil painters, cat portraitists, voice actors, and writers, or their estates, might then be identified as having been uniquely essential to the new creation. They would be acknowledged, motivated, and perhaps even paid.
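As a thought experiment only, and not anything the essay or any existing system specifies, here is a toy Python sketch of the shape of that idea: compare a generated output against a catalogue of contributed works and credit the closest matches. The contributor names, the four-dimensional "embeddings," and the cosine-similarity scoring are all invented for illustration; real attribution inside large models remains an open research problem.

```python
import numpy as np

# Hypothetical "data dignity" attribution sketch: when a model produces an
# output, estimate which catalogued contributions were most influential by
# comparing embeddings, then credit (and potentially pay) the top matches.

contributions = {
    # contributor -> toy embedding of their catalogued work
    "oil_painter_A":   np.array([0.9, 0.1, 0.0, 0.0]),
    "cat_portraitist": np.array([0.2, 0.8, 0.1, 0.0]),
    "voice_actor":     np.array([0.0, 0.1, 0.9, 0.1]),
    "screenwriter":    np.array([0.1, 0.0, 0.2, 0.9]),
}

def attribute(output_embedding, catalog, top_k=2):
    """Rank contributors by cosine similarity to the generated output."""
    scores = {}
    for name, vec in catalog.items():
        cos = vec @ output_embedding / (
            np.linalg.norm(vec) * np.linalg.norm(output_embedding))
        scores[name] = float(cos)
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:top_k]   # the "most unique and influential contributors"

# An output leaning heavily on painting style and cat imagery:
output = np.array([0.7, 0.6, 0.1, 0.1])
print(attribute(output, contributions))
# -> credits the oil painter and the cat portraitist first
```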
At first, data dignity might attend only to the small number of special contributors who emerge in a given situation. Over time, though, more people are likely to be included, as intermediate rights organizations such as unions, guilds, and professional groups start to play a role. People in the data-dignity community sometimes call these groups mediators of individual data (MIDs) or data trusts. People need collective-bargaining power to have value in an online world, especially when they might otherwise get lost in giant AI models. And when people share responsibility within a group, they police themselves, reducing the need, or the temptation, for governments and companies to censor or control. Acknowledging the human essence of big models might lead to a flowering of positive new social institutions.
Data dignity is not only for white-collar roles. Consider what might happen if AI-driven tree-trimming robots were introduced. People who trim trees might find themselves devalued, or even out of work. But the robots could eventually enable a new kind of landscaping artistry. Some workers might invent creative approaches, such as holographic topiary that looks different from different angles, and these would feed into the tree-trimming models. With data dignity, the models might create new streams of income, distributed through collective organizations. Tree trimming would become more functional and more interesting over time; there would be a community motivated to keep contributing value. Each successful new application of AI or robotics might involve the inauguration of a new kind of creative work. That could help ease the transition to an economy into which big models are integrated.
Many in Silicon Valley see universal basic income (UBI) as a solution to the underlying economic problems created by AI, but UBI amounts to putting everyone on the dole in order to preserve the idea of black-box AI. I think that is a terrible idea, partly because bad actors will want to seize the centers of power in a universal welfare system. I doubt that data dignity could ever grow large enough to sustain society as a whole, but then I doubt that any single social or economic principle will ever be complete. Whenever possible, the goal should be at least to establish a new creative class rather than a new dependent class.
A model is only as good as its inputs. Only through something like data dignity can we expand models into new domains. Right now, it is much easier to get a large language model to write an essay than to ask a program to generate an interactive virtual world, because very few virtual worlds exist. Why not solve that problem by giving the people who develop more virtual worlds a chance at prestige and income?
Could data dignity help with any of the human-extinction scenarios? A big model could render us incompetent, or confuse us so badly that society collectively goes mad; a powerful, malevolent person could use AI to do all of us great harm; and some believe the model itself could "jailbreak," take control of our machines or weapons, and use them against us.
We can find precedents for some of these scenarios not only in science fiction but in more ordinary market and technology failures. One example is the Boeing 737 MAX crashes of 2018 and 2019. The plane included a flight-path-correction feature that, in some situations, worked against the pilots, contributing to two crashes with mass casualties. The problem was not the technology in isolation but the way it was integrated into the sales cycle, the training sessions, the user interface, and the documentation. Pilots who tried to counteract the system in certain situations believed they were doing the right thing, when in fact they were doing exactly the wrong thing, and they had no way of knowing. Boeing failed to communicate clearly how the technology would behave, and the resulting confusion led to disaster.
Any engineered design — cars, bridges, buildings — can harm people, and yet we have built a civilization on engineering. It is by raising and expanding human awareness, responsibility, and participation that we can make automation safe; conversely, if we treat our inventions as occult objects, we can hardly be good engineers. Seeing AI as a form of social collaboration is more actionable: it gives us access to the machine room, which is made of people.
Let us consider a doomsday scenario in which AI derails our society. One way this could happen is through deepfakes. Suppose an evil person, perhaps working for a hostile government in a time of war, decides to stoke mass panic by sending all of us convincing videos of our loved ones being tortured or abducted. (In many cases, the data needed to produce such videos is readily available through social media or other channels.) Chaos would ensue, even if it did not take long for the videos to be exposed as fakes. How could we prevent such a scenario? The answer is obvious: digital information must have context.
The original design of the net did not record where bits came from, probably to make it easier for the network to grow quickly. (Computers and bandwidth were poor at the outset.) Why did we not start recording provenance once remembering where bits came from, at least approximately, became more feasible? It has always seemed to me that we wanted the net to be more mysterious than it needed to be. Whatever the reason, the net was built to remember everything while forgetting its sources.
Today, most people take it for granted that the web, and indeed the internet it runs on, is by nature anti-contextual and devoid of provenance. We assume that decontextualization is intrinsic to the very idea of a digital network. It is not. The original proposals for digital-network architecture, put forward by the scientist Vannevar Bush in 1945 and the computer scientist Ted Nelson in 1960, preserved provenance. Now AI is revealing the true cost of ignoring that approach. Without provenance, we have no way to control our AIs, or to make them economically fair. And that risks pushing our society to the brink.
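To make "recording the source of bits" a little more tangible, here is a hypothetical sketch, in Python, of provenance bookkeeping: each piece of content carries a record of who made it and which earlier works it was derived from. This is not an existing standard and not something the essay prescribes (industry efforts such as C2PA explore related ground); every identifier below is invented for illustration.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical provenance record: bind a content hash to its claimed creator
# and to the hashes of the works it was derived from, so lineage travels
# with the bits instead of being forgotten.

def provenance_record(content: bytes, creator: str, derived_from=None):
    """Bundle a content hash with its claimed origin and ancestry."""
    return {
        "sha256": hashlib.sha256(content).hexdigest(),
        "creator": creator,
        "created_at": datetime.now(timezone.utc).isoformat(),
        "derived_from": derived_from or [],   # hashes of source works
    }

original = provenance_record(b"a cat portrait, oil on canvas", "cat_portraitist")
remix = provenance_record(
    b"frame 0412 of the generated film",
    creator="large_model_v1",
    derived_from=[original["sha256"]],        # the lineage stays attached
)
print(json.dumps(remix, indent=2))
```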
If a chatbot behaves in a manipulative, mean, weird, or deceptive way, what kind of answer do we want when we ask why? Revealing the sources from which the bot learned the behavior would provide one kind of explanation: we would learn that it had drawn on a particular novel, say, or a soap opera. We could then react to that output differently, and adjust the model's inputs to improve it. Why shouldn't this type of explanation always be available? There may be cases in which attribution should not be revealed, so that privacy can take priority, but attribution is usually better for individuals and for society than an exclusive commitment to privacy would be.
The technical challenges of data dignity are real and must inspire serious scientific ambition. The policy challenges will also be substantial. But we need to change our way of thinking and embrace the hard work of renovation. If we cling to the ideas of the past, among them a fascination with the possibility of an AI that exists independently of the people who contribute to it, we risk using the new technology in ways that make the world worse. If human beings are to be served, socially, economically, culturally, technologically, or in any other sphere of activity, it can only be because we have decided that human beings enjoy a special status worth serving.
This is my plea to all my colleagues: think about people. People are the answer to the problems of bits.
Source: The Paper