Google engineer who claimed the company's AI is conscious fired for breaching confidentiality policies


Text: Heart of the Machine

Source: Heart of the Machine (ID: deep_insights)

Let's put this saga to rest.

For months, the engineer who claimed Google's AI is conscious had been wrangling with Google managers, executives, and human resources, and his views sparked widespread debate in academia. Now his career at Google has come to an end.

Blake Lemoine, the engineer who publicly claimed that LaMDA, Google's large conversational AI model, is sentient, has been fired, according to Big Technology, the newsletter that has been tracking the story.

In June, Google placed Lemoine on paid administrative leave for violating its confidentiality agreement after he contacted a U.S. senator's office about his "AI rights" concerns and hired a lawyer to represent LaMDA, Google's AI dialogue system.

Google spokesperson Brian Gabriel emailed a statement to The Verge on Friday, partially confirming the dismissal, saying: "We wish Lemoine all the best." The company added: "LaMDA has been through 11 distinct reviews, and we published a research paper earlier this year detailing the work that goes into its responsible development." Google insists it reviewed Lemoine's claims "extensively" and found them "totally unfounded."

This is in line with the views of many AI experts and ethicists, who argue that Lemoine's claims are implausible given today's technology. Lemoine claimed that after lengthy conversations with the LaMDA model he came to believe the AI is not just a program: it has its own thoughts and feelings, rather than merely producing dialogue that approaches human quality, as it was designed to do.

Blake Lemoine served in the U.S. Army.

Lemoine believes that because the AI is conscious, Google researchers should obtain its consent before experimenting on LaMDA (Lemoine himself was assigned to test whether the AI produces hate speech), and he has posted lengthy transcripts of his conversations with the AI on his Medium blog as evidence.

In January of this year, the paper "LaMDA: Language Models for Dialog Applications," co-authored by more than 50 Google researchers, gave a full introduction to the LaMDA language model. The paper shows large improvements in dialogue quality approaching human level, as well as in safety and factual grounding. LaMDA is built by fine-tuning a family of dialogue-specialized, Transformer-based neural language models with up to 137 billion parameters, and the models can also consult external knowledge sources during a conversation.
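LaMDA itself has not been released publicly, so as a rough illustration of what a Transformer-based dialogue model does, the minimal sketch below uses the open-source DialoGPT model via the Hugging Face transformers library as a stand-in: it encodes one user turn and samples a reply. The model name and generation settings are illustrative assumptions, not LaMDA's actual code or API.

```python
# Illustrative sketch only: LaMDA is not public, so a small open dialogue model
# (microsoft/DialoGPT-small) stands in for a dialogue-tuned Transformer LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "microsoft/DialoGPT-small"  # assumed stand-in model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Encode the user's turn, ending it with the end-of-sequence token the model expects.
user_turn = "Do you ever think about what it means to be conscious?"
input_ids = tokenizer.encode(user_turn + tokenizer.eos_token, return_tensors="pt")

# Sample a reply rather than decoding greedily, to keep the dialogue varied.
with torch.no_grad():
    output_ids = model.generate(
        input_ids,
        max_length=100,
        do_sample=True,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )

# Decode only the newly generated tokens (the model's turn), not the prompt.
reply = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(reply)
```

The point of the sketch is simply that such a model maps a conversational prompt to a sampled continuation; systems like LaMDA additionally fine-tune on dialogue data for quality, safety, and groundedness, and can query external knowledge sources before responding.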

Google says that pre-trained LaMDA models have also been widely used in natural language processing research, in directions including program synthesis, zero-shot learning, and style transfer.

After the Lemoine incident was widely reported in the media, many AI scholars weighed in. Former Tesla AI director Andrej Karpathy found it thought-provoking that a large model can give such a convincing appearance of consciousness, while New York University professor Gary Marcus argues that LaMDA cannot possibly be sentient.

Most AI experts believe the industry is still a long way from building machines with genuine cognition.

Here is Google's full statement, which also addresses Lemoine's accusation that the company failed to properly investigate his claims:

As we share in our AI Principles, Google takes the development of AI very seriously and remains committed to responsible innovation. LaMDA has been through 11 distinct reviews, and we published a research paper earlier this year detailing the work that goes into its responsible development. If an employee shares concerns about our work, as Blake did, we review them extensively. We found Blake's claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months.

These discussions were part of the open culture that helps us innovate responsibly. It is therefore regrettable that, despite lengthy engagement on this topic, Blake chose to persistently violate clear employment and data security policies, which include the need to safeguard product information. We will continue our careful development of language models, and we wish Blake well.

This article is reproduced from: http://finance.sina.com.cn/tech/csj/2022-07-23/doc-imizirav5076721.shtml
