In the fight for the next generation of front-end languages, will JavaScript be overtaken by a new language?

Author|Nicholas Yang

Translator|Nucle-Cola

Planning|Chu Xingjuan

If you were writing front-end code today, which programming language would you choose? Right now there are three main contenders: plain old JavaScript, a language that compiles to WebAssembly (Wasm), or a language that compiles to JavaScript.

Plain JavaScript requires the least tooling, but at the cost of harder debugging and less readable code. The barrier to entry is certainly low, but unless you are a die-hard minimalist, I think this option is mediocre at best.

There are more and more languages that compile to Wasm, but on the whole they are still young. They tend to produce large binaries, because most of them need to ship an extra runtime, and interop is far from mature: even if two languages both compile to Wasm, that does not mean they interoperate well. On top of that, the ecosystem is nowhere near the decades of accumulated JavaScript DOM libraries; nothing in the Wasm camp yet rivals React or Svelte. Don’t get me wrong, I’m not bashing Wasm. It already has a stage where it shines: if you want to run computation-heavy native code in the browser, Wasm is the perfect option. But otherwise, I don’t recommend it for day-to-day front-end development.

That leaves the languages that compile to JavaScript. This camp is dominated by a single player, which we will get to shortly. Languages such as ClojureScript, Elm, ReScript, and Dart have built devoted niche communities, but whether they can grow their share any further is an open question. That is a shame, because languages that compile to JavaScript arguably offer the best programming experience in the browser. They give us features JS lacks, such as static typing, strong typing, immutability, and macros, while still reaching JS and its huge ecosystem through bindings, and they don’t drag along large, unwieldy runtimes.

Because Wasm exists, I suspect people will have reservations about the compile-to-JS camp; many believe Wasm is the better compilation target for the browser. I don’t agree: the more languages that compile to JavaScript, the better. In this article I want to talk about front-end languages, present and future, and the direction I think they should take.

Is TypeScript okay?

The dominant player in the compile-to-JS camp that I mentioned earlier is, of course, TypeScript. TypeScript is a great language that dramatically improves the developer experience. It adds a layer of safety, has pushed tooling quality forward, and greatly lowers the barrier to entry. Considering how thriving its ecosystem is and how hard the problem of type-checking JavaScript is, TypeScript is a remarkable feat.

Of course, TypeScript also draws plenty of criticism that deserves attention, chiefly around the language’s performance and its soundness. Note that the TypeScript team is well aware of both problems; they are rooted in deliberate trade-offs made at the very start of the project, and in my opinion those were the right trade-offs to make at the time.

That said, performance is the most frequently criticized aspect of TypeScript. The compiler is self-hosted, written in TypeScript itself, and the implementation is very complex. The type system is essentially a mini-language of its own, which can make type checking extremely slow.

The second issue is soundness. It gets less attention, but programming-language enthusiasts care about it a great deal. In a nutshell, TypeScript is full of escape hatches, the allowJs configuration option, the any type, intersection types, and so on, and its type system cannot actually guarantee that code is type-safe. In other words, the TypeScript we write can still blow up with runtime bugs. On top of that, outside of very simple cases TypeScript’s type inference is limited, so developers end up writing explicit type annotations in many places.

But again, these two points are also the result of project trade-offs.

A self-hosted compiler is crucial for dogfooding TypeScript: it lets the project’s own developers feel what TypeScript is really like to use as a language. Concretely, the team experiences what it means to maintain a large JS codebase and gradually adopt types within it. Relaxing soundness, in turn, lets developers introduce TypeScript into existing JS codebases incrementally, and reach for the any type as an easy escape hatch from the type system.

This alone deserves an article of its own. In my opinion, TypeScript is probably the first programming language to focus more on developer experience than on its own semantics. It adds no runtime constructs and imposes no runtime cost, yet it layered a type system onto JavaScript and got an entire language community, one that had never demanded types, high-quality tooling, or an emphasis on correctness, to embrace it. That is simply an incredible feat.

What should the next generation of front-end languages look like?

All of this is to say that TypeScript made a set of trade-offs a decade ago that shaped what it is today. As time goes on, I think it is time for new languages to make a different set of trade-offs. Specifically, we need a language with soundness, real type inference, and faster compilation.

The requirements are clear, but what do we have to give up in exchange?

Soundness

Let’s start with soundness. Instead of trying to type-check every existing JS pattern, the next-generation language would be a standalone language with a simpler type system that compiles to JS. It would interoperate with existing JS code through foreign objects and explicit runtime type checks on values coming from JS, and its compiler would be implemented in a separate, native language.

Why? First, I simply like languages with a type system that is both sound and relatively simple. I want a language that works well in the browser and fits smoothly into the existing web ecosystem. Languages that compile to Wasm often ignore the rest of the web ecosystem in favor of rendering native, pixel-based UIs in the browser. That idea has merit, but it runs against what I want: I just want to build regular websites in a next-generation language. I also don’t want a purely functional language, but something with a more old-school, C-flavored design (sorry, Elm!).

So why is now the moment for a next-generation front-end language? As the saying goes, the best time to plant a tree was ten years ago; the second-best time is now. The JS community has changed a lot over the past decade. People have learned TypeScript and grown used to leaning on a compiler and modeling their data with types. Many developers now use languages such as Rust, Swift, and Kotlin and have come to appreciate high-quality tooling. I’m not saying people would have rejected a type-safety-focused language ten years ago, but it would certainly have been harder to popularize back then.

Put that way, some readers may be thinking: isn’t this just ReScript/ReasonML? Yes, there are real similarities. But ideally, the next-generation language I have in mind would have explicit runtime type checking of JS values, among other features. Runtime type checking is a prerequisite for good interoperability; it is what lets us reach for JS libraries freely.

Likewise, I think traits matter to users; they correspond to features in other languages such as Java interfaces and C++ concepts. They are very convenient, for instance printing any type simply via a Display trait (see the short Rust sketch at the end of this section). That sounds like a small thing, but it greatly improves a language’s usability, eliminating perennial questions like “how do I print this?” or “why does + mean integer addition while +. means floating-point addition?” I would also like to drop some baggage, such as objects, linked lists, polymorphic variants, and so on. ReScript/ReasonML doesn’t offer this, and the last time I tried it I wasn’t impressed with its developer experience or error messages.

That said, I wouldn’t rule out ReScript being the right direction. It has been a few years since I last tried it; maybe I’m misremembering, or maybe it has improved. And now that it has split off from OCaml, ReScript may well be a good front-end language option; I should take another look.
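To make the trait idea above concrete, here is a minimal sketch in Rust, the language this proposal borrows the concept from; the Point type and the log helper are made up for illustration, and the hope is that a next-generation front-end language would offer something equivalent.

```rust
use std::fmt;

// A hypothetical type we want to be printable.
struct Point {
    x: f64,
    y: f64,
}

// Implementing the Display trait is all it takes for `{}` formatting to work.
impl fmt::Display for Point {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "({}, {})", self.x, self.y)
    }
}

// Any function can now accept "anything printable" without caring about the concrete type.
fn log<T: fmt::Display>(value: T) {
    println!("{value}");
}

fn main() {
    log(Point { x: 1.0, y: 2.5 }); // prints: (1, 2.5)
    log(42);                       // integers already implement Display
}
```

The answer to “how do I print this?” is always the same: implement (or derive) the relevant trait, and every generic printing facility picks it up.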

Type safety

For the next generation of front-end languages, I want type safety to be handled more systematically. Specifically, I think Rust’s approach to unsafe blocks is a good model for JS interoperability: calls into JS would have to be wrapped in an unsafe block, a clear signal that this code deserves careful reading. The next step would be to build bindings to JS libraries on top of these unsafe blocks. At first that would be done by hand, but tools along the lines of bindgen and cxx should follow.
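As a point of reference, this is what the unsafe-block discipline looks like in Rust today when calling foreign C code; the proposed front-end language is hypothetical, so treat this only as an analogy for how calls into JS might be marked.

```rust
// Declaring a foreign function: the compiler cannot verify its behavior,
// so every call site must acknowledge that explicitly.
extern "C" {
    fn abs(input: i32) -> i32;
}

fn main() {
    // The unsafe block marks the boundary where the type system's
    // guarantees stop and the programmer's promises begin.
    let magnitude = unsafe { abs(-3) };
    println!("absolute value of -3 is {magnitude}");
}
```

The point of the block is as much social as technical: a reader can see exactly where the compiler’s guarantees end.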

Using unsafe blocks for JS might seem counter-intuitive; after all, JS is not unsafe in the way C is. But what many people miss is that safety is about more than memory safety. Safety means being able to use a value freely without worrying about whether it is null. Safety means being able to rely on guarantees about mutation without introducing bugs or confusion. Rust’s notion of unsafe lets users keep a safe enclave of their own while still interacting with large amounts of unsafe code. The next generation of browser languages should do the same.

As for runtime checking, I think it is still worth the cost. We already do plenty of schema validation in JS, just through ad-hoc mechanisms such as zod. In a next-generation front-end language, this could take the form of automatic conversion of JS values into the language’s types, with errors raised at runtime when they don’t match, or of pattern matching over JS values.
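For a rough sense of what that could feel like, here is a Rust sketch using serde_json to convert an untyped JSON value into a typed struct, with a runtime error when the shape doesn’t match; the User type and its fields are invented for illustration, and the new language would presumably do something similar for values crossing the JS boundary.

```rust
use serde::Deserialize;

// A hypothetical typed view of data arriving from the untyped outside world.
#[derive(Debug, Deserialize)]
struct User {
    name: String,
    age: u32,
}

fn main() {
    let good = r#"{ "name": "Ada", "age": 36 }"#;
    let bad = r#"{ "name": "Ada", "age": "thirty-six" }"#;

    // Conversion into the typed representation either succeeds...
    let user: User = serde_json::from_str(good).expect("shape matches");
    println!("parsed user: {user:?}");

    // ...or fails with a runtime error instead of silently producing garbage.
    match serde_json::from_str::<User>(bad) {
        Ok(user) => println!("parsed user: {user:?}"),
        Err(err) => println!("runtime type check failed: {err}"),
    }
}
```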

As for WebAssembly, I remain optimistic about its prospects, but I am skeptical of the claim that it will become the universal runtime of the browser. Maybe that will change, but right now I see Wasm more as a hardware accelerator.

We reach for Wasm when a computation-heavy task calls for fixed-width integers and static functions, just as we reach for the GPU when we need massively parallel computation. In such a model I see room for heterogeneous compilation, where some code compiles to JS and other code compiles to Wasm. That split could be made explicitly by the user, automatically by analysis, or even on the fly. Because the compiler controls both the JS and the Wasm it emits, it can minimize how often code crosses the language boundary, which is where much of the performance is won or lost. In the future there might even be a mechanism for sending parts of the code to WebGPU.

On top of such a model, perhaps we can more easily write computationally intensive programs, such as machine learning models, video games, and rendering software.

This idea of compiling separately to Wasm and JS should be reflected in the language itself. I would like explicit integer and float types, and ideally an explicit index type like Rust’s usize. That way, when code does need to compile to Wasm, the language can take full advantage of Wasm’s fixed-width integers.
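For comparison, this is roughly what explicit numeric and index types look like in Rust; the values here are arbitrary, and the suggestion is simply that a compile-to-JS language could expose the same distinctions instead of a single number type.

```rust
fn main() {
    // Fixed-width integers map directly onto Wasm's i32/i64 value types.
    let count: i32 = 1_000;
    let big: i64 = 10_000_000_000;

    // Floats are explicit too, rather than everything being a JS "number".
    let ratio: f64 = 0.125;

    // usize is the dedicated index type, used for slice and array indexing.
    let samples = [3.0_f64, 1.5, 2.25, 4.0];
    let idx: usize = 2;
    let picked = samples[idx];

    println!("count={count} big={big} ratio={ratio} picked={picked}");
}
```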

Another possibility is a subset of the language that drops dynamic features such as closures and garbage collection in order to compile to better Wasm. Interacting with that subset could reuse the unsafe-block idea: perhaps strict blocks marking code restricted to the subset, and dynamic blocks letting the subset call back out to dynamic code. These are only hypotheses, but I think they are worth exploring.

Implementation

The new language would probably be implemented in Rust. I am personally a fan of Rust, and I believe its algebraic data types, relatively high performance, controlled but available mutability, and reasonably rich library ecosystem are enough to support a good compiler.

If Wasm matures to the point where its performance is close to native, I would also consider bootstrapping the compiler using the language subset that compiles to fast Wasm. But there is no rush; a compiler written in Rust should serve for many years.

Summary

As you may have noticed, the type-safety and Wasm sections take ideas from systems languages, such as unsafe blocks and hardware acceleration, and apply them to a browser language. That is deliberate: many of the most interesting programming language ideas come from the systems world, and I would just like to see those good ideas reach the browser too.

To be clear, the next-generation front-end language I am describing is not meant to be a single language. I hope several languages will explore the directions above in parallel, and I want to inspire more people to keep innovating in browser languages. I am involved myself: I am currently working on an implementation called vicuna, though it is still at a very early stage.

The text and pictures in this article are from InfoQ


This article is transferred from https://www.techug.com/post/in-the-fight-for-the-next-generation-of-front-end-languages-will-javascript-be-overtaken-bef24798a931e66e92ba6/