We started with a plain SVG object animated with different libraries. As we wanted to have 3 objects on one page, the result was not that satisfying. All the animations were just slow – every path of a single SVG object had to be updated at very short intervals, which made the entire page as slow as a snail. We had to reject the solution with pure SVG inserted into the document. That left us with two other solutions to choose from.
The video element was the second option. We faced two problems:
- a transparent background, which cannot be achieved with the most popular video formats such as .mp4 or .webm,
- responsiveness, which was a real problem because videos are not inherently scalable. We decided to keep this solution on the back burner – "if we don't find anything else, we will pick this one".
The last option was to use WebGL rendering. It was an unusual option because we had to design all the rendering mechanics ourselves. That's because the morphing waves we had were custom ones – that forced us to design a custom solution 😎 And that was the option we wanted to follow and really focus on.
Architecture of the solution
Let's start from scratch. What was the material we had to build these waves from? The idea was that each wave was an SVG file of size 1x1 with specific paths positioned within this area. The animation of this SVG was built from several states of this file. So, the whole animation was represented as a set of files that contained the stages of a moving shape.
Let's take a deeper look at what these states are – each path is essentially an array of specific values in a specific order. Changing the values at specific positions within this array changes the shape in its corresponding parts. We can simplify this with the following example:
state 1: [1, 50, 25, 40, 100]
state 2: [0, 75, -20, 5, 120]
state 3: [5, 0, -100, 80, 90]
So, we can assume that the shape we want to render consists of an array of 5 elements that change with linear easing over specific periods of time. When the animation reaches the last state, it starts over from the first one to give us the impression of an infinite animation.
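As a sketch, the transition between two such states is just linear interpolation over every array position (the `lerpStates` helper below is a hypothetical name for illustration, not part of our actual code):

```javascript
// Linearly interpolate between two state arrays.
// t is the progress of the transition, from 0 (start) to 1 (end).
function lerpStates(from, to, t) {
  return from.map((value, i) => value + (to[i] - value) * t);
}

const state1 = [1, 50, 25, 40, 100];
const state2 = [0, 75, -20, 5, 120];

// Halfway through the transition:
console.log(lerpStates(state1, state2, 0.5));
// → [0.5, 62.5, 2.5, 22.5, 110]
```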
But... wait – what exactly is the array presented above? As I mentioned, it is a path that is responsible for displaying a specific shape. All the magic is contained in the d attribute of SVG's path element. This attribute contains a set of "commands" to draw a shape, and each command takes a set of arguments. The array mentioned above consists of all the values (arguments) attached to these commands.
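To illustrate, here is a minimal sketch of pulling those values out of a d string with a regular expression – assuming a simple path with plainly separated numbers; a production parser would need to handle more of the d syntax's edge cases:

```javascript
// Extract the numeric arguments from a path's "d" attribute.
// These numbers are exactly the values we animate between states.
function extractPathValues(d) {
  return (d.match(/-?\d*\.?\d+/g) || []).map(Number);
}

const d = 'M 1 50 C 25 40 100 0 75 -20';
console.log(extractPathValues(d));
// → [1, 50, 25, 40, 100, 0, 75, -20]
```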
The only difference between these "state files" was the values of specific commands, as the order of the commands stayed the same. So, all the magic was about extracting those values and animating them.
The wizard called Physics
In the paragraph above, I mentioned that the only magic in animating an object is the creation of transitions between all the stages of a shape. The question is – how to do that with canvas?
The function that everyone who has worked with canvas should know is requestAnimationFrame. If you are seeing it for the first time, I sincerely believe you should start by reading about it. The thing about this function we are interested in is its argument – DOMHighResTimeStamp. It looks really terrifying, but in practice it is not so hard to understand: it is a high-resolution timestamp of the time elapsed since the page started rendering.
Ok, but what can we do with this? As the requestAnimationFrame function is called recursively, we can compute the time delta between its calls. And here we go with the science! ⚛️ (ok, maybe not rocket science... yet 😃)
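Here is a rough sketch of such a recursive loop. `createLoop` is a hypothetical helper, and the `requestAnimationFrame` call is left as a comment so the same code can also be exercised with hand-fed timestamps:

```javascript
// Build a frame handler that computes the time delta between calls.
function createLoop(onFrame) {
  let last = null;
  return function tick(timestamp) {
    // On the very first frame there is no previous timestamp yet.
    const delta = last === null ? 0 : timestamp - last;
    last = timestamp;
    onFrame(delta);
    // In the browser, this is where the recursion keeps going:
    // requestAnimationFrame(tick);
  };
}

const deltas = [];
const tick = createLoop((delta) => deltas.push(delta));

// Simulate three frames with millisecond timestamps:
tick(0);
tick(16);
tick(32);
console.log(deltas); // → [0, 16, 16]
```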
Physics teaches us that the delta of a distance is equal to the delta of time multiplied by velocity. In our case, velocity is constant because we want to reach the end point in a specific period of time. So, let's take a look at how we can represent it with the above states:
Let's say that we want to transition between these states in one thousand milliseconds, so the velocity values will be the following:
delta: [-1, 25, -45, -35, 20]
velocity: [-1/1000, 25/1000, -45/1000, -35/1000, 20/1000]
The velocity above tells us: for each millisecond, increase the first value by -1/1000 (and likewise for every other position). And here is the point where we can go back to our requestAnimationFrame and its time delta. The amount we increase a given position by is the time delta multiplied by the velocity at that position. One more thing to handle is clamping the value so it does not overshoot the state it is heading towards.
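Putting the velocity and the clamping together, a single animation step could look roughly like this (the `step` function is a hypothetical sketch, not the library we ended up using):

```javascript
// Advance a transition by `delta` milliseconds, moving each value
// towards its target with constant velocity and clamping at the target.
// `duration` is the total transition time (1000 ms in the example above).
function step(current, from, to, delta, duration) {
  return current.map((value, i) => {
    const velocity = (to[i] - from[i]) / duration;
    const next = value + velocity * delta;
    // Clamp so we never overshoot the target state:
    return velocity >= 0 ? Math.min(next, to[i]) : Math.max(next, to[i]);
  });
}

const from = [1, 50, 25, 40, 100];
const to = [0, 75, -20, 5, 120];

// After 500 ms of a 1000 ms transition we are exactly halfway:
console.log(step(from, from, to, 500, 1000));
// → [0.5, 62.5, 2.5, 22.5, 110]
```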
When the transition ends, we start another one, and so on. It does not seem that hard to implement, but one of the main rules in software development is not to spend time on things that have already been implemented. So – we picked a tiny library that allows us to create these transitions in an effortless way.
That's how we created one animated wave that looks like a living creature.
A few words about cloning shapes
As you can see, The Codest brand waves are not a single animated figure. They consist of many figures with the same shape but different sizes and positions. In this step, we will take a quick look at how to duplicate a figure in such a manner.

The canvas context allows us to scale the drawing area (under the hood, it multiplies all the dimensions passed into drawing methods by a scaling factor "k", set to "1" by default) and to translate it, which is like changing the anchor point of the drawing area. We can also jump between these states with two methods: save and restore. They allow us to save the state of "zero modifications", render a specific number of waves in a loop with a properly scaled and translated canvas, and then go back to the saved state. That's all there is to figure cloning. Much easier than cloning sheep, isn't it?
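A minimal sketch of that loop, assuming a hypothetical `drawWave` callback that draws a single figure:

```javascript
// Draw several copies of the same wave, each scaled and shifted,
// using save()/restore() to reset the transform between copies.
function drawClones(ctx, drawWave, clones) {
  for (const { x, y, scale } of clones) {
    ctx.save();              // remember the "zero modifications" state
    ctx.translate(x, y);     // move the anchor point of the drawing area
    ctx.scale(scale, scale); // multiply all subsequent dimensions by "k"
    drawWave(ctx);
    ctx.restore();           // jump back to the saved state
  }
}

// Usage in the browser (sketch): drawClones(canvas.getContext('2d'),
// drawWave, [{ x: 0, y: 0, scale: 1 }, { x: 120, y: 40, scale: 0.5 }]);
```

Thanks to `restore`, each clone starts drawing from an untouched transform, so the scales and translations never accumulate between iterations.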
Cherry on top
I mentioned that we rejected one of the potential solutions because of performance. The option with canvas is pretty fast, but nobody said it couldn't be optimized even more. Let's start with the fact that we don't really care about transitioning shapes while they are outside the browser viewport. There is another browser API that programmers love – IntersectionObserver. It allows us to observe specific elements of the page and handle events invoked when those elements enter or leave the viewport. Right now we have a pretty easy situation – let's keep a visibility state, change it in the IntersectionObserver event handlers, and simply turn the rendering system on/off for specific shapes. And... boom 💥 the performance has improved a lot! We render only the things that are visible in the viewport.
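A rough sketch of that visibility toggle – the `shapes` map and its `visible` flag are hypothetical names for whatever your rendering state looks like:

```javascript
// Update per-shape visibility flags from IntersectionObserver entries.
// The render loop can then skip any shape whose flag is false.
function handleEntries(entries, shapes) {
  for (const { target, isIntersecting } of entries) {
    const shape = shapes.get(target);
    if (shape) shape.visible = isIntersecting;
  }
}

// Wire it up (browser only): observe every shape's DOM element.
function watchVisibility(shapes) {
  const observer = new IntersectionObserver(
    (entries) => handleEntries(entries, shapes)
  );
  for (const element of shapes.keys()) observer.observe(element);
  return observer;
}
```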
Picking a way to implement things is often a hard choice, especially when the available options seem to have similar advantages and disadvantages. The key to making the right choice is to analyze each of them and exclude those we see as less beneficial. Not everything is clear-cut – one solution may require more time than the others, but turn out to be easier to optimize or more customizable.
Although new JS libraries appear almost every minute, there are things they cannot solve. And that's why every front-end engineer (and not only them) should know the browser APIs, keep up with technical news and sometimes just ask: "what would my implementation of this library look like if I had to build it myself?". With more knowledge about browsers, we can build really fancy things, make good decisions about the tools we use, and become more confident about our code.