Chat with ChatGPT (1)

February 07 20:57~22:15


▲ChatGPT official website home screen


ChatGPT has attracted a lot of attention since it launched in November last year. Teddy originally had no particular thoughts on the topic, but last Friday Mr. Zheng (Teddy’s supervisor) shared his experience of using ChatGPT at a meeting, which sparked Teddy’s interest in ChatGPT.

Teddy is no AI expert and doesn’t understand the principles behind how ChatGPT works. This post simply shares Teddy’s experience of using ChatGPT from the perspective of a software developer interested in software design and learning.


Help me write the contract

Recently, Teddy and the ezKanban team have been researching how to replace traditional acceptance and unit testing with acceptance testing combined with Design By Contract (DBC) to ensure software quality, so Teddy wanted to test ChatGPT’s ability to write contracts.

Teddy asked ChatGPT to generate a contract for the push method of a Stack class, with the results shown below. Anyone who doesn’t understand DBC has no way to judge whether the output is correct (obviously XD). Those who know a little about DBC may feel that the contract ChatGPT generated is not far from the reference answers in typical textbooks; at least it includes a “stack is not full” precondition.
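The original screenshot of ChatGPT’s answer is not reproduced here. For readers unfamiliar with DBC, the following is a minimal Java sketch of what such a push contract typically looks like when expressed with plain `assert` statements (run with `java -ea`). The class and field names are illustrative assumptions, not ChatGPT’s actual output:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal Design By Contract sketch for a bounded stack's push method.
// Names (capacity, list) are assumptions, not ChatGPT's verbatim answer.
public class Stack<E> {
    private final List<E> list = new ArrayList<>();
    private final int capacity;

    public Stack(int capacity) { this.capacity = capacity; }

    public boolean isFull() { return list.size() == capacity; }

    public int size() { return list.size(); }

    public E peek() { return list.get(list.size() - 1); }

    public void push(E element) {
        // @pre the stack is not full
        assert !isFull() : "precondition violated: stack is full";

        int oldSize = list.size(); // snapshot needed by the postconditions

        list.add(element);

        // @post the size increased by exactly one
        assert list.size() == oldSize + 1;
        // @post the pushed element is now on top of the stack
        assert peek().equals(element);
    }
}
```

Note that this version has only the two “common” postconditions (size and top element); as the rest of the post shows, that is not the whole story.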



But Teddy could tell at a glance that the above program lacked a very important postcondition, so Teddy followed up: “The postconditions seem to be incomplete.” Unexpectedly, ChatGPT added a new postcondition:

@post For all i in [0, list.size() - 2], list.get(i).equals(old(list.get(i)))

This postcondition expresses that, apart from the new top element, the contents of the stack before and after the push must be identical. In other words, push may only place the new element on top of the stack; it must not disturb the elements already inside it.



Seeing this, do the villagers think ChatGPT is very powerful? But look carefully at the code it generates: there is actually a bug. What bug? Teddy gave it another hint: “list.get(i).equals(list.get(i)) cannot verify that the existing elements have not been changed; you must save the old list.”

This time, the program generated by ChatGPT “looks” complete.
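Teddy’s hint, put into code: to check that push leaves the existing elements untouched, the old contents must be copied before the mutation and compared afterwards, because comparing the list against itself always succeeds. A minimal Java sketch of the repaired contract (names are illustrative, not ChatGPT’s verbatim output):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the repaired push contract: the old state is snapshotted
// BEFORE the mutation, so the frame postcondition compares the new list
// against a genuine copy rather than against itself.
public class Stack<E> {
    private final List<E> list = new ArrayList<>();

    public E peek() { return list.get(list.size() - 1); }

    public void push(E element) {
        // old(list): without this copy, list.get(i).equals(list.get(i))
        // would trivially hold even if push corrupted the stack.
        List<E> oldList = new ArrayList<>(list);

        list.add(element);

        // @post list.size() == old(list.size()) + 1
        assert list.size() == oldList.size() + 1;
        // @post the pushed element is on top of the stack
        assert peek().equals(element);
        // @post for all i in [0, list.size() - 2]:
        //       list.get(i).equals(old(list.get(i)))
        for (int i = 0; i <= list.size() - 2; i++) {
            assert list.get(i).equals(oldList.get(i))
                : "push modified existing element at index " + i;
        }
    }
}
```

This snapshot-then-compare pattern is exactly what DBC languages provide built in via the `old` expression; in plain Java it must be coded by hand.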


But Teddy also found it strange: ChatGPT obviously “knows” the complete answer to the push contract, so why did it initially give only the common but incomplete one? To save computing resources? Or for some other reason? Teddy has no idea.


The teacher is talking, are you listening?

Just when Teddy thought that ChatGPT had “learned” how to answer this question, Teddy asked again, this time emphasizing that “The generated contracts should be as accurate and correct as possible.” as a reminder not to forget the third postcondition.

However, ChatGPT behaved the same as before and again answered with only two postconditions.



Teddy still kindly asked, “Is the postcondition correct?” But this time ChatGPT simply answered Yes. Earlier, when Teddy said “The postconditions seem to be incomplete,” ChatGPT supplied the third postcondition; but when asked “Is the postcondition correct?” it apparently judged the two postconditions it had given to be correct, and so answered Yes.




If you already know the answer to the question you ask ChatGPT, and can judge the correctness of what it generates, then it is a good tool for quickly producing text that would otherwise be written by hand. But if you don’t know the answer, blindly trusting ChatGPT’s output is dangerous. Since it has only just been released, the “confidence level” of its results is still a mystery. Knowledge learned from a textbook generally comes with high confidence, and you can build larger bodies of knowledge on top of it; at this stage, ChatGPT should still be unable to serve that role.

From Teddy’s point of view, once a conversation with ChatGPT arrives at the correct answer, it is natural to expect that asking the same question again will return the result of that earlier conversation; but ChatGPT does not meet this expectation. This is another disappointment: Teddy had originally hoped to “train” it into his own little helper, but it forgets what it was taught, so users cannot accumulate their personal knowledge in ChatGPT.

Teddy feels that, on average, ChatGPT’s ability to understand problems and the amount of knowledge it holds may already surpass those of individual humans. If the correctness of its output can be improved, and it can be made to “know what it knows and admit what it doesn’t know” instead of pretending to understand, its usability will be great, and it will be worth relying on to solve specific small problems.


Yuzo’s inner monologue: well-trained at talking nonsense.
