WebAssembly. Scary name, exciting applications.
What
After HTML, CSS, and JavaScript, WebAssembly is the fourth official language of the web.
WebAssembly (abbreviated Wasm) is a binary instruction format for a stack-based virtual machine. Wasm is designed as a portable compilation target for programming languages, enabling deployment on the web for client and server applications.
I think that quote just won buzzword bingo. (And it used to be even more confusing, calling C/C++/Rust high-level languages). I’ll try to break it down.
A WebAssembly .wasm file consists of a bunch of binary instructions; it’s meant for computers and is not human-readable.
Those instructions are not for a physical machine but for a pretend one, a virtual machine.
Normally, binary code is specific to a physical chip architecture and won’t work on different kinds of chips. Because WebAssembly gives instructions to a virtual machine, it will work on all kinds of different chip architectures. ARM, Intel, AMD, you name it, it can execute WebAssembly.
This is possible because the machine WebAssembly targets is the lowest common denominator of popular hardware. That allows all kinds of physical chips to translate those universal WebAssembly instructions into the ones and zeroes specific to their own architecture, and to put specific patterns of lightning into the sand we call a CPU to trick it into doing math.
(Diagrams: compiling to native vs. compiling to WASM.)
The instructions in a .wasm file are for a stack machine.
Chris Hay has a great primer video on how this works.
The tl;dr is: push things on a stack, do things with the numbers on the stack, pop things off the stack.
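To make that concrete, here’s a toy sketch in JavaScript of evaluating something the stack-machine way. It’s just an illustration of the idea, not how a real WASM engine works.

```js
// Toy illustration of stack-machine evaluation (not a real WASM engine).
const stack = [];

stack.push(7); // push a value onto the stack
stack.push(7); // push it again

// an instruction like i32.mul pops two values and pushes the product
const b = stack.pop();
const a = stack.pop();
stack.push(a * b);

console.log(stack.pop()); // 49
```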
Those specifics are very interesting, but not necessary knowledge. That is a common theme in my deep dive into WebAssembly: knowing how things work is optional. If you don’t want to know the details, that’s totally fine, because there are a bunch of great tools that abstract them away.
A textual representation of a .wasm file is called a .wat file. That stands for WebAssembly Text format. Its first letters accurately describe my reaction when I first saw a .wat file. It’s human-readable, but at first glance I thought “not to this human”. (Jake Archibald had a similar reaction.)
```wat
(module
  (func $square (param i32) (result i32)
    local.get 0
    local.get 0
    i32.mul)
  (export "square" (func $square)))
```
The small snippet above gets the first parameter to a function called square (at index 0, because that’s how computers start counting) and puts it on the stack. It does the same thing again so there are now 2 identical numbers on the stack. Then it gives the multiplication instruction that pops 2 numbers off the stack, multiplies them, and puts the result back onto the stack. Bingo, bango, a function that squares a number.
Surma explains how to read it in much greater detail in his raw wasm blogpost.
While you can write .wat files by hand and convert those to .wasm, you probably never will.
WASM is designed to be a compilation target for other languages.
You write in another language, run some tool, and usable WASM comes out. Right now, the languages with the best support are lower-level languages like C/C++/Rust, because they manage memory themselves and don’t need a garbage collector.
WebAssembly is a misnomer
WebAssembly is “Neither Web, Nor Assembly”.
It’s a language that can run in the browser, but can also run outside of it.
A .wasm file is a binary file; it isn’t assembly text. The .wat format is a textual representation that more closely resembles assembly. And it’s assembly for a virtual machine, whereas traditional assembly gives instructions directly to a physical chip.
Why
Cool, but why should I care? 👇
Fast
WASM goes predictably fast.
To make this point, first let’s take a short dive into how modern JavaScript runtimes work.
The speed at which JavaScript can go is impressive. This is achieved by modern JavaScript runtimes that do just-in-time compilation.
The Speed, Speed, Speed talk by Franziska Hinkelmann goes into this deeper.
JavaScript has to go through more steps before it ends up as optimized machine code, and each of those steps takes time.
Before JS can get turned into machine code at all, some things have to happen, like turning the JS text into a tree-like structure, an AST (abstract syntax tree).
To make optimized machine code, JS has to execute pieces of code first. The JavaScript runtime uses the information it gathered while the JS was executing to generate faster machine code. This process can repeat over and over and results in chunks of fairly efficient machine code after a while.
An additional problem is that while doing this, the engine might make certain assumptions that help execution speed. Those assumptions might have held the first 5000 times a piece of code was executed, but aren’t true anymore the 5001st time. If that happens, the runtime falls back to a previous, slower step (the interpreter). This is called a deoptimization.
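A contrived sketch of the kind of code that can trigger this. The exact behavior depends on the engine, so treat it as an illustration rather than a guarantee:

```js
// The JIT observes that `add` is always called with numbers
// and optimizes it under that assumption.
function add(a, b) {
  return a + b;
}

for (let i = 0; i < 5000; i++) {
  add(i, i); // numbers, numbers, numbers...
}

// Call 5001 breaks the assumption: the optimized code can't handle
// strings, so the engine deoptimizes back to slower, generic code.
add("50", "01");
```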
Combined, this results in unpredictable performance when running JavaScript.
It would be an overgeneralization to say that machine code is always faster than interpreted code (how JavaScript is initially executed), but on average, it’s probably true.
The path to machine code is much shorter for WASM. As soon as the WASM is loaded, it gets turned into machine code.
This translation is fast, so fast that it’s faster than downloading the .wasm file in most cases.
It’s possible to compile the .wasm to baseline machine code as it comes in over the network. This is called streaming compilation, and it can be done in the browser with the instantiateStreaming method.
Once the WebAssembly is fully loaded, the same optimizing compiler that was used when executing JavaScript kicks in and swaps out the baseline compiled version for the faster, optimized version.
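A minimal sketch of what that looks like in browser JavaScript, reusing the square example from the .wat snippet earlier. The file name is made up, and the server needs to send the file with the application/wasm MIME type for streaming compilation to work.

```js
// Compile and instantiate the module while it's still downloading.
// Run this inside an async function or an ES module (top-level await).
const { instance } = await WebAssembly.instantiateStreaming(
  fetch("square.wasm"),
  {} // imports the module needs; this one needs none
);

console.log(instance.exports.square(7)); // 49
```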
If the WASM was compiled from a language like Rust, it runs at near-native speed, close to what that source language achieves when compiled natively.
How close is the “near” in that statement? I want numbers!
The answer is the very unsatisfying “it depends”, but I’ve heard numbers of about 1.2x to 1.3x native execution time on average.
The end result is that WASM takes less time to get going than JavaScript, and once it gets going, it’ll probably be faster.
Below is a bar chart detailing roughly the total time each step takes during the runtime of a file.
That on the web? We’re gonna go faaaaaaaaaaast.
Intermittent fasting
Funny joke, right? I heard it in the context of fast languages that have pauses for garbage collection.
It kinda fits here too, because another popular use case is replacing often-executed, possibly slow parts of JavaScript with WASM.
This is often done by replacing a function call in your JavaScript app with a call to a WASM function.
The post where Surma details replacing a hot path in your app’s JavaScript with WASM describes this process; it’s really good.
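The shape of that swap, sketched with made-up names. Assume the module was instantiated like in the earlier snippet and exports a distance function:

```js
// Pure-JS version, called thousands of times per frame:
// const distance = (x1, y1, x2, y2) => Math.hypot(x2 - x1, y2 - y1);

// Same call site, now backed by a WASM export:
const distance = (x1, y1, x2, y2) =>
  instance.exports.distance(x1, y1, x2, y2);

console.log(distance(0, 0, 3, 4)); // 5
```

The rest of the app doesn’t need to know or care which version it’s calling.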
Code reuse
The JavaScript ecosystem is pretty huge; there is a staggering number of packages on npm. But JavaScript is not the first choice for solving every coding problem.
A solution to your problem might exist in JavaScript, but have a much more extensive counterpart in another language like C or Rust.
Or the solution might not exist in JavaScript at all, while those other languages do have packages for it.
Using WebAssembly, you can tap into another language’s ecosystem.
Like an image encoder written 30 years ago in C? Compile it to WASM and put it in the browser! This isn’t hyperbole, it’s what squoosh.app is doing. They use algorithms from other languages to bring many kinds of image compression to the web.
Google Earth is now available in more browsers thanks to WebAssembly. (Previously, it used a proprietary technology that was only available in Chrome.)
Examples like these are huge codebases that didn’t need to be entirely rewritten to target a much broader audience.
And in the hypothetical scenario where they did rewrite everything in JavaScript, it would likely be too slow to live on the web at all, bringing us back to the previous point: WASM can go fast.
The web is the largest and most ubiquitous delivery platform in existence. The expansion of the capabilities of the web is exciting.
Small
The binary format of a .wasm file is very size-efficient. A lot of information can be stored in a small number of bytes.
Because a WASM file isn’t usually handwritten, but compiled, it no longer contains any unnecessary duplicate information that was relevant to the language you wrote it in. Ideally, the WASM file contains only the necessary instructions to the virtual stack machine.
Not only that, but a binary file is an excellent target for compression.
.wasm files are typically served to users gzipped, which often means a 50% reduction in size.
That means fewer bytes over the wire if we choose to use WASM on the web, resulting in a speedup.
Producing a small binary is handled by the toolchain of the language you choose to compile to WASM.
This probably means trading a little more compile time for a smaller binary, a tradeoff that is left to the user to decide.
For example: enabling link-time optimization, or using a different opt-level in the configuration file for a Rust project that gets compiled to WASM.
There are also some WASM-specific tools that can help even more than the compiler of the language you’re using. For example: wasm-opt.
Universal
WebAssembly is designed to run on as many real computers as possible. It does this by targeting a virtual machine and letting the physical machines handle the translation of those virtual instructions to machine code.
This is often called portability.
This is a very important characteristic if you want to ship code to the web. There, you can’t know ahead of time which type of physical machine your code will run on next. It might be a powerful x86 desktop, it might be an old ARM phone.
The web isn’t the only place where WASM can run though; it has many exciting applications on traditional servers, in the cloud, and in IoT devices.
I’d like to change that popular JavaScript quote and give it a WASMy twist.
Any processor WASM can run on, WASM will run on.
Languages like C are also called portable; there, the source code is portable because you can compile it for a bunch of different architectures. With WASM, the language you write in doesn’t matter as long as it ends up as WASM.
Security
This is by far the point I know the least about, but it just might be the most exciting one. It has the potential to open so many doors, I can’t wrap my head around it.
WASM was designed with the web in mind, and you don’t want any web applications to crash your browser, let alone your system.
WASM runs in a sandboxed environment, making it hard for code inside the .wasm file to reach outside, and the other way around.
Unless you give it specific permission to do so.
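In the browser, that permission takes the form of the imports you pass when instantiating the module. A rough sketch with made-up names (wasmBytes stands in for the downloaded module bytes):

```js
// The module can only reach the outside world through what you put in here.
const imports = {
  env: {
    // The one thing this module is allowed to do: log a number to the console.
    log: (value) => console.log("wasm says:", value),
    // No fetch, no DOM access, no file system, unless you pass them in.
  },
};

const { instance } = await WebAssembly.instantiate(wasmBytes, imports);
```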
These permissions can be fine-grained.
You could give file-system access to a module, but only for the folder it’s in.
This means .wasm (through WASI) fits some use cases Docker is used for today.
And it does its thing while being much more lightweight.
Solomon Hykes, one of the co-founders of Docker, tweeted that if WASM and WASI had existed in 2008, they wouldn’t have needed to create Docker.
If you compile your C code to native code and there’s a nasty pointer error somewhere, that can have dangerous consequences. If you compile to WASM instead, it will run slightly slower, but it will just crash when the pointer is wrong instead of causing an error that can lead to stealing your stuff.
The security story is good, but not perfect, and it is being closely scrutinized.
A lot of that boils down to this: being able to use C code means you can do things that are possible in C, like making a pointer point at the wrong value (but, as we discussed, only inside the sandbox; a pointer to the outside will be invalid). Using a language like Rust mitigates even more of those risks, but of course, nothing is 100% safe.
This opens up the door to executing untrusted code on your machines without having to worry that it will cause problems.
When
now!
caniuse.com puts browser support at 95% of global users at the moment of writing.
That’s only a bit less than CSS Grid, and more than CSS pointer-events or JavaScript BigInt.
It’s quite unprecedented that so many large vested interests in the web (Google, Apple, Microsoft, Mozilla) came together and agreed on a new standard.
How
How does WASM work?
WASM is very bare-bones. That means that for fairly simple functionality, a JavaScript file can be smaller/faster than a WASM module, because the JS can leverage all the code the JavaScript runtime (like V8) already provides.
In WASM, if you need something, that code has to be included in the .wasm file.
Examples are: dynamic memory allocation, string handling, or an array implementation.
Oh. You used a method on that array? Better include the logic for that too.
Luckily, that’s all code you don’t need to write yourself. Language-specific tools like wasm-bindgen for Rust or emscripten for C handle that, meaning you can write code as usual, and they make sure the needed bits are included to use the .wasm file.
Because WebAssembly is so bare bones, it doesn’t understand things like strings. It understands numbers, and only numbers!
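What “only numbers” means in practice: to get a string out of a module without any glue code, JavaScript has to read raw bytes out of the module’s linear memory itself. A hypothetical sketch (the hello_ptr and hello_len exports and wasmBytes are made up for illustration):

```js
// Without glue code, "returning a string" really means the module hands
// JS a pointer and a length into its linear memory.
const { instance } = await WebAssembly.instantiate(wasmBytes, {});
const { memory, hello_ptr, hello_len } = instance.exports;

const bytes = new Uint8Array(memory.buffer, hello_ptr(), hello_len());
const text = new TextDecoder("utf-8").decode(bytes); // "Hello world"
console.log(text);
```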
As a developer, you can write code in Rust and interact with it from JavaScript. This is made possible by so-called “glue code”.
This glue code is also code you don’t have to write yourself. You call functions from the JavaScript glue, and it makes sure WASM does the right thing.
An example of a Rust program with a function that returns a String is shown below.
The Rust strings have to be turned into numbers that WebAssembly can understand and work with.
That’s done by some glue code on the WASM side.
If you now want to call that function you wrote in Rust, you do it through the JS glue code that was generated.
The JS glue calls into WebAssembly which will return a bunch of numbers.
Those are then translated into a string that JavaScript can understand.
That’s done by some glue code on the JS side.
The end result is you calling a function from JS, and getting a string back. But under the hood, there are lots of numbers happening!
```rust
use wasm_bindgen::prelude::*;

#[wasm_bindgen]
pub fn hello() -> String {
    "Hello world".to_string()
}
```
```js
const { hello } = require("path/to/JS/glue");
console.log(hello());
```
This is visualized in a diagram like the one from the Gentle introduction to WebAssembly talk by Colin Eberhardt.
If this all sounds like something that has happened before, that’s because it did. The JVM is an example of something similar to WASM. The reasons for creating a new thing with WASM instead of going down the JVM road again are very technical (like the security story, how it is compiled, …).
How do I use WASM?
That’s all neat, but I just want to write some Rust code, turn it into WASM, and use that on my blog. How do I make that happen?
That’s why I created a wasm-examples repository.
It has a bunch of examples of writing in another language, using tools, and making a website or server do WASMy things!
For example: writing a program in Rust, compiling it to WASM, and using the output in a GatsbyJS site (like my blog is at the moment).
Demo
The main event, the reason I decided to make a blogpost about WASM. I wanted to put the output from the rust-wasm tutorial on my blog.
It’s Conway’s game of life.
The Rust part is fairly similar, but the JavaScript is not.
You can judge my scrappy JavaScripting chops on GitHub, because my blog is open source.
For simplicity I decided to publish the Game of Life logic written in Rust as an npm package and use that. (Because I didn’t want to redo my blog’s folder structure like the example repo I made.)
It should run a bit faster than the example in the rustwasm book because of some optimizations I made. If you want to hear about those, ping me on Twitter. You can also look at the code, because I share what I’m up to while learning Rust.
The driver logic is implemented in the same thing my blog uses: React.
The calculation logic where the new state of the board is calculated is written in Rust, and compiled to WASM with wasm-pack.
The React part uses the glue code wasm-pack provides to call into the WASM.
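A rough sketch of the shape this takes, with hypothetical names. The package name and the Universe API are stand-ins for whatever the wasm-pack glue actually exports:

```jsx
import React, { useEffect, useRef } from "react";
// The npm package generated by wasm-pack; its JS glue imports like any module.
import { Universe } from "hypothetical-game-of-life";

function GameOfLife() {
  const canvasRef = useRef(null);

  useEffect(() => {
    const universe = Universe.new(); // constructor exposed by the glue code
    let frame;
    const loop = () => {
      universe.tick(); // the Rust/WASM part: compute the next board state
      // ...draw the new state onto canvasRef.current here...
      frame = requestAnimationFrame(loop);
    };
    loop();
    return () => cancelAnimationFrame(frame);
  }, []);

  return <canvas ref={canvasRef} />;
}
```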
Extras
Jamstack and the edge
The WASM model maps incredibly well to the Jamstack model.
To me, the most important aspect of Jamstack is time shifting. In other words: doing work ahead of time.
In traditional Jamstacky static sites, that means doing as much work as possible ahead of time and shipping pre-rendered HTML pages to users. WebAssembly is like that, but for a programming language: compiling partly ahead of time instead of entirely just in time, like JavaScript does.
Do more work ahead of time, so there has to be less work done when the code you wrote is being used.
Serverless is also a big use case for WASM.
Running a .wasm file directly means many languages can be used to create that file, and by extension, that serverless function.
A WASM module can be as lightweight as it has to be, which is good for startup times. Those times have long been a sore point for serverless functions, and they have become shorter and shorter as time goes on.
But for serverless functions written in JavaScript, there’s only so low you can go. The JavaScript runtime has to start up before any JS code you wrote can get executed, and that takes time.
So, what if we break that startup-time floor?
It turns out you can put JS inside WASM and make those startup times go poof. This comes at the cost of maximum execution speed, but since the vast majority of serverless functions have a very short runtime, the startup time is the more significant factor.
Another very interesting talk by Lin Clark (Making JavaScript on WebAssembly fast) explains a bit more about how and why this works.
Serverless functions are cool. But I’m a greedy person and always want more/better.
Right now, I’m using serverless functions through the “big dog” in the industry, AWS. More specifically, a datacenter in Virginia called US-East-1 is executing most of the serverless functions I write.
I want to write code and have it be executed much closer to users, by using an edge network. Edge servers are tiny servers all over the world that typically serve static files like HTML, CSS, and JS to users.
But they can execute code too. They’re not as powerful as full-blown server farms, but are much closer to where data has to go.
Executing serverless functions there makes more sense than executing them in the United States regardless of where that function is being used from.
That’s what Fastly is doing with Terrarium, and what Cloudflare is doing with Cloudflare Workers.
More examples
Figma is a React app that uses WASM to do the computationally intensive logic.
The Facebook picture uploader uses WASM to do some quick transformations to images.
The original DOOM on Cloudflare’s edge network.
Microsoft Flight Simulator has WASM plugins.
Disney+ application development.
Blazor lets C# run on the web through WASM.
…many more I didn’t include here.