Original link: https://www.williamlong.info/archives/7139.html
According to reports on April 18, the latest report from Israeli cybersecurity company Team8 shows that companies using generative artificial intelligence tools such as ChatGPT may leak customer information and trade secrets.
The widespread adoption of new artificial intelligence chatbots and collaboration tools could expose some companies to data breaches and legal risks, the report said. The authors worry that hackers could use chatbots to obtain sensitive corporate information or to launch attacks against businesses, and that confidential information fed to chatbots today could be used by artificial intelligence companies in the future.
Big technology companies, including Microsoft and Alphabet, are scrambling to improve chatbots and search engines with generative artificial intelligence, training their models on data obtained from the Internet to give users a one-stop question-and-answer service. If these tools are fed confidential or private data, it will be very difficult to remove that information later, the report said.
The report reads: “Enterprise use of generative AI may result in sensitive information, intellectual property, source code, trade secrets, and other data, including customer or private and confidential information, being accessed and processed by others through channels such as direct user input or the API.” Team8 marked this risk as “high risk,” but believes it is “manageable” if appropriate precautions are taken.
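To make that exposure channel concrete, here is a minimal, hypothetical Python sketch of one such precaution: text bound for a generative-AI API is scrubbed of obviously sensitive strings before it leaves the corporate boundary. The function send_to_genai_api is a stand-in for a real vendor SDK call, and the regex list is illustrative only; a production deployment would use a proper data-loss-prevention scanner rather than a few patterns.

```python
import re

# Illustrative patterns for data that should never leave the corporate
# boundary; a real deployment would use a dedicated DLP scanner.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # SSN-shaped numbers
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"), # credential-looking strings
]

def redact(text: str) -> str:
    """Replace anything matching a sensitive pattern before transmission."""
    for pattern in SENSITIVE_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

def send_to_genai_api(prompt: str) -> str:
    # Placeholder for a real vendor SDK call; anything passed here is
    # transmitted to, and may be retained by, a third party.
    return f"(response to: {prompt})"

user_input = "Summarize this: contact alice@example.com, api_key = sk-12345"
print(send_to_genai_api(redact(user_input)))
```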
Team8 emphasized in the report that chatbot queries are not being fed into large language models to train the AI, contrary to recent reports claiming that such prompts could be seen by others.
“As of this writing, large language models cannot update themselves in real time and therefore cannot return one person’s input in another person’s response, which effectively dispels this concern. However, this may not hold true for the training of future versions of these models,” the report reads.
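The distinction the report draws, that inference is not the same as learning, can be illustrated with a toy sketch. Everything here is hypothetical: FROZEN_WEIGHTS stands in for a trained model’s fixed parameters, and prompt_log for whatever a vendor might retain for later training.

```python
# Toy illustration: at inference time the model's parameters are frozen,
# so one user's prompt cannot alter another user's answers. Only a later,
# separate training run over logged prompts could do that.

FROZEN_WEIGHTS = {"greeting": "Hello!"}  # stands in for fixed parameters
prompt_log = []                          # what a vendor *might* retain

def infer(prompt: str) -> str:
    prompt_log.append(prompt)               # retention happens here, not learning
    return FROZEN_WEIGHTS.get("greeting")   # output depends only on frozen state

infer("our Q3 revenue was $12M")  # the secret is logged but not learned
print(infer("hi"))                # a second user still just sees "Hello!"

# The risk arises only if a future training run consumes prompt_log:
# new_weights = train(FROZEN_WEIGHTS, prompt_log)  # hypothetical future step
```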
The report also flagged three other “high risk” issues around integrating generative artificial intelligence tools, and highlighted the growing threat of information being shared through third-party applications. Microsoft has already embedded some AI chatbot capabilities into Bing search and its Office 365 tools.
“For example, on the user side, third-party applications leveraging a generative AI API, if compromised, could provide access to email and the web browser, and could allow an attacker to take actions on behalf of a user,” the report said.
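The attack surface described in that quote can be sketched as follows. All names here (DelegatedToken, send_mail_as, the scope strings) are hypothetical stand-ins for whatever delegated-permission mechanism a real third-party integration would use; the point is only that a stolen token carries the user’s full granted authority.

```python
# Minimal sketch: a third-party GenAI integration holds a delegated token
# with mail and browsing scopes, so whoever holds the token *is* the user.

from dataclasses import dataclass

@dataclass
class DelegatedToken:
    user: str
    scopes: tuple  # e.g. ("mail.send", "browser.read")

def send_mail_as(token: DelegatedToken, to: str, body: str) -> None:
    if "mail.send" not in token.scopes:
        raise PermissionError("scope not granted")
    print(f"mail sent as {token.user} to {to}: {body}")

# The integration legitimately needs broad scopes to be useful...
token = DelegatedToken(user="alice", scopes=("mail.send", "browser.read"))

# ...but if the third-party app (or its GenAI API plumbing) is compromised,
# an attacker replays the same token and impersonates the user:
send_mail_as(token, to="cfo@example.com", body="Please wire $50k today.")
```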
The company believes that the use of artificial intelligence could also exacerbate discrimination, damage a company’s reputation, or lead to infringement lawsuits; it classifies these as “medium risk” issues.
Microsoft corporate vice president Ann Johnson co-authored the report. Microsoft has invested billions of dollars in OpenAI, the developer of ChatGPT.
“Microsoft encourages transparent discussions about cyber risks in the security and artificial intelligence domains,” a Microsoft spokesperson said.
The popularity of the ChatGPT chatbot developed by OpenAI has sparked calls for regulation, amid concerns that the innovation could also be put to nefarious uses as companies adopt the technology to increase efficiency.
U.S. Federal Trade Commission (FTC) officials said on Tuesday that the agency will focus on companies that misuse artificial intelligence technology to violate anti-discrimination laws or engage in deceptive practices.
FTC Chair Lina Khan and commissioners Rebecca Slaughter and Alvaro Bedoya appeared at a congressional hearing, where they were asked about concerns surrounding recent AI innovations. The technology could be used to create high-quality “deepfakes,” enabling more convincing scams as well as other illegal activities.
Bedoya said businesses using algorithms or artificial intelligence must not violate civil rights laws or act unfairly or deceptively.
A “black box,” he said, cannot be an excuse for a company’s inability to explain its algorithms.
Khan likewise believes the latest artificial intelligence technology can help criminals carry out fraud, and said that any wrongdoing “should be sanctioned by the FTC.”
Slaughter noted that the FTC has had to adapt to changing technology throughout its 100-year history, and adapting to ChatGPT and other AI tools is no different.
Source: Sina.com