A hundred years from now, what languages will people use to develop software?

It's hard to predict what human life will be like a hundred years from now; only a few things seem certain. By then, cars will be able to fly at low altitude, urban-planning regulations will have been relaxed, buildings will rise hundreds of floors, sunlight will never reach the streets, and every woman will have trained in self-defense.

This article takes up just one of those details: what language will people use to write software a hundred years from now?

Why is this question worth thinking about? Not because we will eventually use those languages, but because, with luck, we can start using them now.

"Java has reached an evolutionary dead end, and I don't want to bet wrong"

In my opinion, programming languages, like biological species, form evolutionary lineages, and many branches turn out to be dead ends. We have already seen this happen: Cobol was once hugely popular, yet no later language seems to have inherited its ideas.

I predict Java will go the same way. Someone wrote to me, "How can you say Java won't succeed? It already has." That depends on your criterion for success. If the criterion is the number of books published about it, or the number of college students who believe learning it will get them a job, then Java has indeed succeeded.

When I say Java won't make it, I mean that, like Cobol, it is an evolutionary dead end.


This is only a guess, and it may be wrong. The point is not to belittle Java, but to suggest that programming languages have an evolutionary tree, and to prompt the reader to ask: where does a given language sit in that tree?

I ask the question not so that people a hundred years from now can marvel at how wise we were, but to find the trunk of the evolutionary tree. Knowing where it lies encourages us to choose languages close to the trunk, which is what serves programming best right now.

Whenever you have a choice, staying on the evolutionary trunk is probably the best move. Pick the wrong branch and you end up a Neanderthal: every so often your Cro-Magnon rivals will turn up to attack you and carry off all your food.

That is why I want to work out what programming languages will look like a hundred years from now. I don't want to bet on the wrong one.

The evolutionary path of programming languages

Any programming language can be divided into two parts: a set of fundamental operators that play the role of axioms, and everything else, which in principle can be expressed in terms of those operators.

In my view, the fundamental operators matter most to a language's longevity; the rest is not decisive. It is a bit like buying a house: think about location first. Everything else can be fixed later, but the location cannot be changed.

It is not enough to choose the axioms carefully; you must also keep their number down. Mathematicians have always felt that the fewer axioms the better, and I think they are on to something.

Looking at a language's kernel and asking which parts could be dropped is, at the very least, useful exercise. In my long experience, redundant code breeds more redundant code, and not just in software: being a lazy person, I find the same principle holds under the bed and in the corners of the room. One piece of junk attracts more junk.

My judgment is that the languages with the smallest, cleanest kernels are the ones on the evolutionary trunk. The smaller and cleaner a language's kernel, the longer it will survive.


"A hundred years from now, people will still be directing computers with programs much like today's"

Of course, even asking what languages people will use a hundred years from now rests on a large assumption. Maybe by then humans will have stopped programming altogether; perhaps they will simply tell the computer what they want, and it will do it.

So far, though, machine intelligence has not made much progress in that direction. My guess is that a hundred years from now people will still be directing computers with programs we would recognize. Some problems that require programming today may not require it then, but plenty of programming tasks will look much the same as they do now.

You might think that only the presumptuous would predict technology a hundred years ahead. But remember that software development already has fifty years of history behind it, and over those fifty years programming languages have evolved very slowly. Looking ahead a hundred years is therefore not such an airy idea.

Languages evolve slowly because they are not really technologies. A language is just a notation, and a program is a written, strictly rule-bound description of how a computer should solve your problem. So programming languages evolve at the pace of mathematical notation rather than at the pace of real technology such as transportation or communication. Mathematical notation changes slowly and gradually, not in the leaps real technology makes.

Whatever computers look like a hundred years from now, we can safely predict they will be vastly faster. If Moore's Law continues to hold, they will be roughly 74 × 10^18 times faster (73,786,976,294,838,206,464 times, to be exact). That is hard even to imagine.
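As a quick back-of-the-envelope check (my own illustration, not part of the original text), the figure follows from assuming one doubling of speed every 18 months, sustained for a hundred years:

```python
# Rough check of the Moore's Law figure quoted above, assuming one doubling
# of speed every 18 months sustained for 100 years.
doublings = 100 * 12 // 18        # 66 full doublings in a century
speedup = 2 ** doublings          # 73,786,976,294,838,206,464
print(doublings, speedup)
```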

The more realistic prediction, though, is not that speed will grow quite that much, but that Moore's Law will eventually give out. Anything that doubles every 18 months is likely to hit some limit sooner or later. Still, there is little doubt that the computers of that day will be far faster than today's. Even if they end up a mere million times faster, that would substantially change the ground rules of programming. Among other things, languages now considered slow, that is, not very efficient, will have far more room to grow.

Even then there will be applications that demand very high speed. Some of the problems we want computers to solve are created by computers themselves; how fast a computer must process video, for example, depends on how fast another computer generated it. And some problems have an inherently unlimited appetite for processing power: image rendering, encryption and decryption, simulations, and so on.

What is important in a future-oriented programming language?

Since some applications can afford to be inefficient while others will use every cycle the hardware provides, faster computers mean that languages have to cover an ever wider range of efficiency requirements. We have already seen this happen: by the standards of a few decades ago, some popular applications written in new languages waste hardware resources on a staggering scale.

But there is good waste and bad waste. What interests me is good waste: spending more to get a simpler design. So the question becomes how best to "waste" the performance of the new, more powerful hardware.

The desire for speed is deeply rooted in us. Looking at a gadget like a computer, you cannot help wanting programs to run as fast as possible, and it takes real effort to restrain that urge. When designing a language, we should consciously ask ourselves when it is worth giving up some performance for a little convenience.

Many data structures exist only because computers are slow. Many languages today, for example, have both strings and lists, even though semantically a string is more or less a special case of a list whose elements are characters. So why make strings a separate data type? You don't have to; strings exist purely for efficiency. But greatly complicating a language's semantics just to make programs run faster is undesirable. Having strings in a language looks like a case of premature optimization.
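As a minimal sketch of that point (my illustration, with made-up helper names, not something from the essay), here is what "string" operations look like when a string really is nothing but a list of characters:

```python
# A "string" here is just an ordinary list of characters; every operation on
# it is an ordinary list operation, with no separate string type involved.

def string(chars):       # build a "string" as a plain list of characters
    return list(chars)

def concat(a, b):        # concatenation is just list concatenation
    return a + b

def upper(s):            # uppercasing is a character-wise map over the list
    return [c.upper() for c in s]

greeting = concat(string("hello, "), string("world"))
print("".join(upper(greeting)))   # HELLO, WORLD
```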

If we think of a language's kernel as a set of basic axioms, then adding redundant axioms to it merely for efficiency, with no gain in expressive power, must be a bad thing. Yes, efficiency matters, but I don't think changing the language design is the right way to get it.

The right approach is to separate the language's semantics from its implementation. Semantically there is no need for both lists and strings; lists alone are enough. Then do the work in the compiler, letting it represent strings as contiguous bytes when that is what efficiency demands.


For most programs, speed is not the most critical factor, so you usually don’t need to worry about this kind of hardware-level micromanagement. This has become increasingly apparent as computers get faster and faster.

In language design, saying less about implementation also makes programs more flexible. Specifications will inevitably change, and reasonably so; as long as the compiler takes care of it, software written against an earlier specification keeps running as before, and that is where the flexibility comes from.

The language that programmers a hundred years from now will want most is one that lets them write the first version of a program with almost no effort, however terribly inefficient it may be (at least by today's standards). They will say that what they want is a language that is easy to learn.

Inefficient software is not the same thing as bad software. What is truly bad is a language that makes programmers do needless work. The real waste is not machine time but programmer time, and this will only become more apparent as computers get faster.

Giving up the string type seems to me an acceptable idea. The Arc language already does this, and it appears to work well: some operations that were awkward to describe as regular expressions can be expressed very easily as recursive functions.

How far will this flattening of data structures go? If I strain to imagine the possibilities, I arrive at answers that surprise even me. Will arrays disappear, for example? After all, an array is just a special case of a hash table whose keys are vectors of integers. And will hash tables themselves be replaced by lists?
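A small sketch of the array-as-hash-table idea (my own example, not the essay's): a two-dimensional "array" stored as a plain dictionary keyed by integer vectors.

```python
# A 3x3 "array" implemented as a hash table whose keys are (row, column)
# vectors of integers; indexing is nothing more than a hash-table lookup.

grid = {}
for row in range(3):
    for col in range(3):
        grid[(row, col)] = row * 3 + col

print(grid[(2, 1)])   # 7
```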

There are predictions even more startling than that. Logically speaking, there is no need for a separate representation of integers at all, because an integer n can be regarded as a list of n elements. Arithmetic works just as well this way; it is simply unbearably inefficient.

Will programming languages evolve to the point of dropping integers, one of the basic data types? I ask not so much because I want you to take the question seriously as to open your mind to the future. I am only posing a hypothetical: what happens when an irresistible force meets an immovable object, or, in this article's terms, when an unimaginably inefficient language meets unimaginably powerful hardware. I see nothing wrong with dropping integer types. The future is long. If we want to reduce the number of fundamental axioms in a language's kernel, it pays to look far ahead and consider what happens as the time variable t tends to infinity. A hundred years is a good yardstick: if you think an idea may still be unacceptable a hundred years from now, perhaps it will still be unacceptable in a thousand.

To be clear, I am not saying that integer arithmetic would actually be carried out with lists, only that the language's kernel, which says nothing about implementation, could be defined that way. In practice, any program doing arithmetic would probably represent numbers in binary, but that would be a compiler optimization, not part of the kernel's semantics.
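Here is a toy version of that idea (my own illustration, with made-up helper names): integers defined as lists, with arithmetic reduced to list manipulation. It is semantically complete and, as noted above, unbearably inefficient.

```python
# The integer n is represented as a list of n placeholder elements; addition
# is concatenation and multiplication is repeated addition.

def num(n):
    return [None] * n

def add(a, b):
    return a + b

def mul(a, b):
    result = []
    for _ in a:          # repeat b once for every element of a
        result = add(result, b)
    return result

def value(a):            # read the "integer" back out for display
    return len(a)

print(value(add(num(2), num(3))))   # 5
print(value(mul(num(4), num(6))))   # 24
```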

Another way to consume hardware performance is to put many layers of software between the application and the hardware. This too is a trend we have already seen, with many newer languages compiled to bytecode. Bill Woods once told me that, as a rule of thumb, each layer of interpretation costs about an order of magnitude in speed. But the extra layers buy flexibility.

Arc is very slow, but the slowness buys corresponding benefits. Arc is a classic metacircular interpreter, written on top of Common Lisp, much in the spirit of the eval function John McCarthy defined in his classic Lisp paper. The Arc interpreter is only a few hundred lines of code, so it is easy to understand and to change. The Common Lisp we use is CLisp, which itself runs on top of a bytecode interpreter, so we have two layers of interpretation. The top layer is shockingly inefficient, yet the language itself is usable. Barely usable, I admit, but usable.
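To give a feel for that style (a toy sketch of my own, nothing like Arc's real interpreter), here is a tiny expression evaluator written in a few lines on top of a host language; it pays for an extra layer of interpretation with code that is trivially easy to read and modify:

```python
# A minimal interpreter for expressions written as nested lists, e.g.
# ["+", ["*", "x", 3], 4]; the host language supplies everything else.

def evaluate(expr, env):
    if isinstance(expr, str):                # a variable: look it up
        return env[expr]
    if not isinstance(expr, list):           # a literal: return it unchanged
        return expr
    op, *args = expr                         # an application: evaluate args,
    vals = [evaluate(a, env) for a in args]  # then apply the operator
    return env[op](*vals)

env = {"+": lambda a, b: a + b, "*": lambda a, b: a * b, "x": 10}
print(evaluate(["+", ["*", "x", 3], 4], env))   # 34
```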

Even for applications, building in layers is a powerful technique. Bottom-up programming means dividing software into layers, each of which acts as a language for the layer above it. Programs built this way tend to be smaller and more flexible. It is also the best route to the holy grail of software, reusability. A language is, by definition, reusable: the more of your application you build in this layered way, pushing functionality down toward the language, the more of it will be reusable.

The notion of reusability got tangled up with the rise of object-oriented programming in the 1980s, and no amount of evidence seems able to pry the two apart. Some software written with object-oriented programming is indeed reusable, but not because it is object-oriented: it is reusable because it was built bottom-up. Take libraries: they are reusable because they behave like part of the language, not because they were written in an object-oriented style or any other style.

“Object-oriented programming will not die in the future”

Incidentally, I do not think object-oriented programming will die out. Except in certain specific domains I don't believe it offers good programmers much, but it holds an irresistible appeal for large companies: it gives you a sustainable way to write spaghetti code, letting you grow a program patch by patch. Large companies have always tended to build software this way, and I expect they still will a hundred years from now.

Since we are talking about the future, we had better talk about parallel computation, because that is where parallelism seems to live: in the future. However you look at it, parallel computing seems to be part of the life to come.

But will it ever arrive? People have been saying that parallel computing is imminent for the past twenty years, and so far it has not had much effect on programming practice. Or has it? Chip designers already have to take it into account, and so do programmers writing system software for multi-CPU machines.

The real question, though, is how far up the ladder of abstraction parallelism will climb. Will it affect the application programmers of a hundred years from now, or will it remain something compiler writers think about, nowhere to be found in application code?

One possibility is that most opportunities for parallelism will simply go unused. This fits my general prediction that future software will squander most of the extra hardware performance it is given, but parallelism is a special case. With staggeringly fast hardware, I reckon you will be able to get parallelism whenever you explicitly ask for it, but most of the time you won't ask. Except for a few special applications, then, the parallelism of a hundred years from now will not be massive parallelism. For the ordinary programmer it will look more like forking off a process and letting several processes run in the background.
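A small sketch of that everyday kind of parallelism (my example, not the essay's): fork off a worker process, let it run in the background, and collect the result only when you need it.

```python
from multiprocessing import Process, Queue

def crunch(numbers, out):
    # Background work the main program does not want to wait on.
    out.put(sum(n * n for n in numbers))

if __name__ == "__main__":
    results = Queue()
    worker = Process(target=crunch, args=(range(1_000_000), results))
    worker.start()                       # runs in parallel with what follows
    print("main program keeps going...")
    worker.join()                        # only now do we wait for the worker
    print("background result:", results.get())
```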

Asking for parallelism in that way is something you do late in a program's life, as an optimization, much as you might replace a general data structure with a specialized one. The first version of a program usually ignores whatever benefits parallelism could provide, just as it starts out ignoring the benefits a particular data structure might bring.

Except for a few kinds of application software, parallelism will not pervade the programs of a hundred years from now. If it did, that would be premature optimization.

How many programming languages will there be in a hundred years?

How many programming languages will there be in a hundred years? Lately a great many new ones have appeared, and improved hardware is part of the reason: faster machines let programmers make different trade-offs between speed and convenience depending on the purpose at hand. If that is the way of the future, the still more powerful hardware of a hundred years from now will only multiply the number of languages.

Even so, only a few languages may be in common use a hundred years from now. Part of the reason I say this is optimism: I believe that if you do a really good job, you can design a language that is a pleasure to develop in, where the first version of a program is slow and only becomes fast once the compiler has optimized it.


Since I am being optimistic here, let me add one more prediction. The gap between languages that can wring the maximum efficiency out of a machine and those that are merely fast enough to run will be enormous, and I predict that a hundred years from now there will be languages at every point along it.

Because that gap will keep widening, profilers will become increasingly important. Performance analysis gets little attention today; many people still seem to believe that the key to fast programs is a compiler that generates faster code. As the gap between code efficiency and machine performance grows, it will become ever clearer that the key to fast applications is a good profiler guiding development.

When I said there may be only a few languages in common use, I was not counting the domain-specific "little languages". I think such embedded languages are a fine idea and will flourish, but I expect them to be designed as fairly thin layers, so that users can see at a glance the general-purpose language underneath, which keeps both learning time and the cost of use down.

Who will design the languages of the future? One of the most exciting trends of the past ten years has been the rise of open-source languages such as Perl, Python, and Ruby: language design has been taken over by hackers. Whether that is good or bad is not yet clear, but the momentum is encouraging. Perl, for example, contains some wonderful innovations, and also some truly bad ideas, which is normal for a language that is bold and exploratory. At its current rate of change, God only knows what Perl will look like in a hundred years.

There is a saying that those who can't do, teach. It does not hold in language design: some of the best hackers I know are professors. But teaching does rule certain things out, and a research post imposes real constraints on a hacker. In any academic field there are topics that are acceptable to work on and topics that are not, and unfortunately the line between them usually depends on how profound the work looks when written up, not on how much it matters to the software industry. The most extreme case is probably literature, where nothing the scholars produce has any effect on those who create it.

The sciences are somewhat better off, but the overlap between the topics researchers are allowed to work on and the ones that help in designing good languages is frustratingly small. (Olin Shivers has been vocal about this, and rightly so.) Types, for example, seem to be an inexhaustible source of papers, despite the fact that statically typed languages seem unable to truly support macros, and in my opinion a language without macros is not worth using.

New languages now appear more often as open-source projects than as research projects; that is one trend in language development. Another is that the designers of new languages are increasingly people who need to write applications in them, rather than compiler writers. Both seem like good trends, and I expect them to continue.

The physics of a hundred years from now is all but impossible to predict, but programming languages are different: it seems possible, at least in theory, to start designing today a language that would appeal to users a hundred years from now.

One way to design such a language is simply to write the programs you would like to write, whether or not a compiler or hardware exists to support them, assuming unlimited resources are at your disposal. That assumption seems as reasonable today as it will be a hundred years from now.

What programs should you write? Whatever takes the least effort to write. Be careful, though: the answer must not be shaped by the language you happen to use now. That influence is pervasive, and it takes great effort to overcome. You might think that creatures as lazy as we are would naturally express programs in the least laborious way, but in practice our thinking is often confined to some existing language, settling for whatever merely looks simple in it, and the grip this has on our imagination can be shockingly strong. A new language has to be discovered by thinking afresh, not by sinking into the habits of mind an old one imposes.

A useful trick is to take the length of a program as an approximation of how much work it takes to write. Length here means not the number of characters, of course, but the total number of syntactic elements, roughly the size of the parse tree. The shortest program may not be exactly the least laborious one to write, but aiming to write programs tersely rather than loosely gets you close to the goal of least effort, and makes your life much easier. So the proper way to design a language becomes: look at a program and ask yourself whether it could be written a bit shorter.
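As a quick sketch of measuring length by syntactic elements rather than characters (my own illustration; the snippet names are made up), one can simply count the nodes of the parse tree:

```python
import ast

def parse_tree_size(source: str) -> int:
    """Number of nodes in the parse tree of the given source code."""
    return sum(1 for _ in ast.walk(ast.parse(source)))

loop_version = "total = 0\nfor x in xs:\n    total += x\n"
builtin_version = "total = sum(xs)\n"

print(parse_tree_size(loop_version))     # the explicit loop: a larger tree
print(parse_tree_size(builtin_version))  # the built-in: a much shorter program
```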

The programming language of a hundred years from now could exist today!

In practice, how reliably you can write programs in the imagined language of a hundred years from now depends on how accurate your guess about its kernel is. A conventional sort you could write right now, but it is hard to predict what libraries the language will need in a hundred years; many of them will probably serve domains that do not yet exist. If SETI@home succeeds, for example, we will need libraries for communicating with aliens. (Translator's note: SETI@home is a scientific experiment, initiated and hosted by the University of California, Berkeley, that searches for intelligent life beyond Earth. Radio telescopes listen for signals from space and computers analyze the data; a signal that could not have arisen naturally would be evidence of an alien civilization. In 1995 the project decided to open itself to volunteers, using large numbers of networked computers around the world for distributed computing, and it went into operation in May 1999.) Unless, of course, the aliens are so advanced that they already exchange information in XML, in which case no new libraries will be needed.

At the other extreme, I think you could design the kernel of the hundred-year language today. In fact, some would say most of it was already designed in 1958. (Translator's note: the first Lisp specification was published in 1958.)

If the hundred-year language were available today, would we want to program in it? One way to answer is to look at the past: if today's languages had been available in 1960, would people have used them?

In some respects, the answer is no. Today's languages assume hardware that did not exist in 1960. In a language like Python, for instance, proper indentation matters, and that would have been awkward on the printing terminals of the 1960s, when computers had no monitors. But set such factors aside (suppose we programmed only on paper): would the programmers of the 1960s have liked writing programs in today's languages?

I think they would have. Some of the less imaginative, steeped in the assumptions of early languages, might have found it impossible. (How do you copy data without pointer arithmetic? How do you implement flowcharts without goto?) But I think the smartest programmers of the day would have had no trouble with most of today's languages, had they been able to get their hands on them.

If we had the language of a hundred years from now, it would at least make excellent pseudocode. Would we use it to write real software? Since that language will have to generate fast code for some applications, it could presumably generate code that runs acceptably fast on our hardware. We might have to give it more optimization hints than its users a hundred years from now would, but on balance it should still be a net win.

Now, our two points are:

  1. A programming language a hundred years from now could theoretically be designed today;

  2. If such a language could be devised today, it would probably be suitable for programming now, and would produce better results.

Put those two points together, and an interesting possibility emerges: why not start now and try to write the programming language of a hundred years from now?

When you design a language, it pays to keep that distant goal in mind. Learning to drive, you are taught to keep the car straight not by lining it up with the markings painted on the road but by aiming at a point in the distance, and that works even when the point is only a few meters away. I think we should do the same when designing programming languages.

