When the crypto world meets ChatGPT

This AI chatbot makes it easier to scam investors.


This is an illustration generated by DALL-E 2, a free tool developed by OpenAI. PHOTO CREDIT: ROBERT STEVENS

If you’re unconvinced by the power of artificial intelligence, take a look at the illustration above. I generated it in a matter of seconds by typing the prompt “vaporwave robot carrying a briefcase full of cryptocurrency in a dark alley” into DALL-E 2, a free tool from OpenAI, the AI company backed by Elon Musk.

OpenAI’s latest AI model is even more powerful. It’s an uncannily realistic chatbot called ChatGPT that can produce reams of insightful text on almost anything you throw at it. And unlike other language-generation models, it remembers what you’ve already told it, sustaining conversations that give the convincing impression of a mind at work.

Impressively, the chatbot can also turn human prompts into lines of code: at my command, ChatGPT wrote a smart contract in Solidity, Ethereum’s programming language, that turned the DALL-E image I had generated into a non-fungible token (NFT).
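The article doesn’t reproduce the contract ChatGPT wrote, but a minimal Solidity sketch of the idea (minting a token whose metadata points at an off-chain image) could look like the following. The contract name, the OpenZeppelin base contract, and the token-URI scheme are illustrative assumptions, not the author’s actual code.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

// Uses OpenZeppelin's audited ERC-721 implementation with per-token metadata URIs.
import "@openzeppelin/contracts/token/ERC721/extensions/ERC721URIStorage.sol";

// Hypothetical contract: mints NFTs pointing at an off-chain image,
// e.g. the DALL-E illustration uploaded to IPFS.
contract DalleNFT is ERC721URIStorage {
    uint256 private _nextId;

    constructor() ERC721("DalleNFT", "DNFT") {}

    // Mint a token to `to` with metadata at `tokenURI_`
    // (typically an ipfs:// link to a JSON file describing the image).
    function mint(address to, string memory tokenURI_) external returns (uint256) {
        uint256 tokenId = _nextId++;
        _safeMint(to, tokenId);
        _setTokenURI(tokenId, tokenURI_);
        return tokenId;
    }
}
```

The image itself stays off-chain: deploying such a contract and calling mint() with a link to a metadata file is what ties the generated picture to the token.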

Although ChatGPT is just a free research preview, it has already supercharged the tech world’s imagination, hitting a million users just five days after it launched late last month. By comparison, it took GitHub’s AI coding assistant about six months to cross that threshold.

The prospect of outsourcing mental busywork to an AI assistant has also caught the attention of the crypto crowd. The space makes ample room for people who shoot well above their abilities, which makes the arrival of an ultra-confident chatbot both exciting and dangerous. While innovative developers can use the technology to sharpen their coding or cross language barriers, ChatGPT also makes it easier than ever to produce malicious code or spin up a honeypot plausible enough to fool investors.

Some crypto professionals are already making good use of the tech. Hugh Brooks, head of security at the smart contract auditing firm CertiK, said the chatbot isn’t half bad at finding bugs in code, and has proved useful at summarizing complicated code and dense academic articles.

And Stephen Tong, founder of the small blockchain security firm Zellic, said his company is already using the technology for sales and customer support. “It makes everyone on those teams more efficient,” he said, allowing staff to provide a “super-buttoned-up, professional experience” without breaking a sweat.

Also near the front of the pack of crypto-utopians is Tomiwa Ademidun, a young Canadian software engineer who used ChatGPT to code a cryptocurrency wallet from scratch, then generated a detailed guide, complete with diagrams, teaching people how to use it.

“It’s honestly very impressive,” he said. ChatGPT taught Ademidun complex cryptography concepts with the avuncular charm of a friendly high school computer science teacher, then generated what he described as near-flawless code. When Ademidun spotted a mistake, the chatbot politely apologized and corrected itself. That prompted a small career crisis for the young software engineer: after ChatGPT, “What do you still need me for?”

Quite a lot, it turns out. The technology is far from perfect, and it frequently spews hot garbage with supreme confidence when trapped with impossible questions. Stuck on a desert island with no arms or legs? “Use your arms to crawl or scoot,” then “create a makeshift wheelchair,” advised the chatbot. Need help delivering Chinese food to a spaceship on its way to Mars? “Many space agencies offer food delivery services to astronauts,” it asserted.

Programmers, too, must be smart enough to wade through ChatGPT’s unshakeable belief in its own gobbledygook. When Lorenzo Sicilia, head blockchain engineer at Outlier Ventures, experimented with the technology, he found it useless for more advanced smart contract work. “As soon as you try it, you discover all the small details that don’t work,” he said.

Limited to an outdated dataset from 2021, ChatGPT’s code generated errors when pasted into the latest virtual machines. And as a blustering conversational AI, it lacks the ability to formally verify its own code.

While some crypto developers have found in ChatGPT a tireless debugging assistant, others are already trying to exploit the technology for quick cash. Daniel Von Fange, a stablecoin engineer, thwarted a submission for a lucrative “bug bounty” earlier this month that he believed was generated by ChatGPT.

“It had taken things from my reply with simulation code (written in one programming language), mixed it with testing code (in another language), and invented a third problem as bogus as the other two,” he explained to Fortune. “It’s like someone with all the swagger and sponsor-covered Nomex of a NASCAR driver, but who can’t find the steering wheel in a pickup truck,” he told cybersecurity blog The Daily Swig.

Artificial intelligence that can write convincingly about nonsense is also perfect for generating phishing campaigns that direct people to GPT-created malware, or for coordinated harassment campaigns waged by annoyingly lifelike Twitter bots. It could also be used by speculators to dupe gullible investors.

Just as harmful is so-called educational material that may bear no relation to the truth. Similarly, those unable to understand code could lose money to botched AI-generated trading bots whose inner workings they cannot inspect.

And yet, despite the risks, Ademidun lands on the optimistic side of technological determinism. Like any tool, he said, ChatGPT can be used for good or ill; the more important point is that it could become very powerful, especially once OpenAI feeds it more data, including real-time information.

Indeed, if ChatGPT had succeeded in finding a bug in Von Fange’s code, the engineer said, he would have happily paid out $250,000. The chatbot is proof that the train of progress has already left the station. “The question is, are you going to hop on or just watch it move past you?” said Ademidun.

Outside of crypto, people are certainly putting the chatbot to work in everyday life. One consultant confessed that he lobbed cursory recommendations about a factory he had visited into a prompt, then sent the AI-generated report to his unsuspecting boss, who made only minor changes before passing it along to the client. And a dazed lawyer at a top London law firm told me he used it to generate an endless supply of bedtime stories for his children.

Yet in its current, limited incarnation, ChatGPT might better be understood as a fascinating science experiment. Sam Altman, a founder of OpenAI, said on Twitter that the technology has created “a misleading impression of greatness.” It’s just a “preview of progress,” he said. “It’s a mistake to be relying on it for anything important right now.” (Fortune Chinese Network)

Translated by: Liu Jinlong

Reviewer: Wang Hao


This article is reproduced from: https://www.fortunechina.com/shangye/c/2023-01/01/content_425401.htm