Many programmers beginning their careers consider the naming of variables, functions, files, and other components to be unimportant. As a result, their design logic is often correct – the algorithms run quickly and produce the desired effect – yet the code itself can be barely readable. In this article, I will briefly describe what should guide us when naming different code elements and how not to go from one extreme to another.
Let’s assume that you and your team are taking over the code from other programmers. The project you inherit was developed with no love whatsoever – it worked just fine, but its every single element could have been written in a much better way.
When it comes to architecture, inherited code almost always triggers hatred and anger in the programmers who receive it. Sometimes this is due to the use of dying (or extinct) technologies, sometimes to a wrong way of thinking about the application at the start of development, and sometimes simply to a lack of knowledge on the part of the programmer responsible.
In any case, as the project progresses, it is possible to reach a point where programmers are raging mad at the architecture and technologies. After all, every application needs some parts rewritten or changed after a while – it’s natural. But the problem that will turn the programmers’ hair gray is the difficulty of reading and understanding the code they inherited.
Especially in extreme cases – when variables are named with single, meaningless letters and function names are sudden surges of creativity, in no way consistent with the rest of the application – your programmers may go berserk. In such cases, any code analysis that would run quickly and efficiently with correct naming requires additional analysis of, for example, the algorithms responsible for producing a function’s result. And such analysis, though inconspicuous, wastes a huge amount of time.
Implementing new functionality across different parts of the application means going through the nightmare of analyzing it. After some time, you have to go back to the code and analyze it again, because its intentions are not clear, and the time previously spent understanding its operation was wasted: you no longer remember what its purpose was.
And thus, we get sucked into a tornado of disorder that reigns in the application and slowly consumes every participant in its development. Programmers hate the project, project managers hate explaining why its development time keeps increasing, and the client loses trust and gets mad because nothing goes according to plan.
Let’s face it – some things can’t be skipped. If we have chosen certain technologies at the beginning of the project, we have to be aware that over time they will either stop being supported, or fewer and fewer programmers will be fluent in technologies from a few years ago that are slowly becoming obsolete. Some library updates require more or less involved changes in the code, which often entail a vortex of dependencies in which you can get stuck even deeper.
On the other hand, the scenario is not so black, of course – technologies do age, but the factor that definitely slows down the development of projects involving them is, to a large extent, ugly code. And of course, we have to mention here Robert C. Martin’s book Clean Code – a bible for programmers, in which the author presents many good practices and principles that should be followed to create code that strives for perfection.
I think the most accurate way to back up some of the statements above is to show how they work in practice – in this section, I’ll outline some bad code practices that can, more or less, be transformed into good ones. I will point out what disturbs the readability of the code and how to prevent it.
A terrible practice that is unfortunately quite common, even at universities, is naming variables with a single letter. It is hard not to agree that sometimes it is quite a convenient solution – we avoid thinking about how to describe the purpose of a variable and, instead of using several or more characters to name it, we just use one letter – e.g., i, j, k.
Paradoxically, some definitions of these variables are accompanied by a much longer comment explaining what the author had in mind.
A good example here is iterating over a two-dimensional array that contains a value at the intersection of each column and row.
const array = [[0, 1, 2], [3, 4, 5], [6, 7, 8]];
// pretty bad
for (let i = 0; i < array.length; i++) {
for (let j = 0; j < array[i].length; j++) {
// here is the content, but every time i and j are used I have to go back and analyze what they are used for
}
}
// still bad but funny
let i; // row
let j; // column
for (i = 0; i < array.length; i++) {
for (j = 0; j < array[i].length; j++) {
// here is the content, but every time i and j are used I have to go back and check comments what they are used for
}
}
// much better
const rowCount = array.length;
for (let rowIndex = 0; rowIndex < rowCount; rowIndex++) {
const row = array[rowIndex];
const columnCount = row.length;
for (let columnIndex = 0; columnIndex < columnCount; columnIndex++) {
const column = row[columnIndex];
// does anyone have any doubts about what's what?
}
}
One fine day, I came across a highly sophisticated code written by a software engineer. This engineer had figured out that sending user permissions as strings specifying specific actions could be greatly optimized using a few bit-level tricks.
Such a solution would probably be fine if the target were a Commodore 64, but this code was written for a simple web application, in JS. The time has come to unpack this quirk:
Let’s say that a user has only four options in the entire system for modifying content: create, read, update, delete. It’s pretty natural to send these permissions in JSON form, either as an object whose keys map to boolean states or as an array.
However, our clever engineer noticed that four permissions fit neatly into four bits and encoded them as follows:
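The original table graphic is not reproduced here; a minimal sketch of four of its rows, assuming the bit values implied by the checks shown below, might look like this:

```javascript
// Four rows of the permissions table: each action maps to a single
// bit of a 4-bit number (values assumed from the bitwise checks
// in this article, not taken from the original table).
const PERMISSIONS = {
  create: 0b1000, // 8
  read:   0b0100, // 4
  update: 0b0010, // 2
  delete: 0b0001, // 1
};

// Combining permissions means OR-ing the flags together:
const createUpdateDelete =
  PERMISSIONS.create | PERMISSIONS.update | PERMISSIONS.delete; // 8 | 2 | 1 = 11
```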
The whole table of capabilities has 16 rows; I’ve listed only 4 to convey the idea behind these permissions. Reading them goes as follows:
const user = { permissions: 11 };
const canCreate = Boolean(user.permissions & 8); // true
const canRead = Boolean(user.permissions & 4); // false
const canUpdate = Boolean(user.permissions & 2); // true
const canDelete = Boolean(user.permissions & 1); // true
What you see above is not WebAssembly code. I don’t want to be misunderstood here – such optimizations are a normal thing in systems where certain operations need to take very little time or memory (or both). Web applications, however, are definitely not a place where such over-optimizations make sense. I don’t want to generalize, but in the work of front-end developers, operations that reach down to the level of bit manipulation are rarely performed.
It is simply not readable, and a programmer analyzing such code will surely wonder what invisible advantages this solution has and what might break when the development team wants to rewrite it into something more reasonable.
What’s more – I suspect that sending the permissions as an ordinary object would allow a programmer to read the intent in 1–2 seconds, while analyzing this whole construct from scratch will take at least a few minutes. There will be several programmers on the project, and each of them will come across this piece of code – and will have to analyze it several times, because after a while they will forget what magic is going on there. Is it worth saving those few bytes? In my opinion, no.
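For comparison, a sketch of that ordinary-object form (the property names here are my own, not from the original code):

```javascript
// The same four permissions sent as a plain JSON object –
// the intent is readable at a glance, no bit masks required.
const user = {
  permissions: {
    canCreate: true,
    canRead: false,
    canUpdate: true,
    canDelete: true,
  },
};

// Checking a permission is self-documenting:
if (user.permissions.canUpdate) {
  // allow editing the content
}
```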
Web development is growing rapidly, and there is no indication that anything will change in this regard soon. We have to admit that the responsibility of front-end developers has recently increased significantly – they have taken over the part of the logic responsible for presenting data in the user interface.
Sometimes this logic is simple, and the objects provided by the API have a simple and readable structure. Sometimes, however, they require different types of mapping, sorting and other operations to adapt them to different places on the page. And this is the place where we can easily fall into the swamp.
Many times, I’ve caught myself making the data in the operations I was performing virtually unreadable. Despite the correct use of array methods and proper variable naming, the chains of operations at some points nearly lost the context of what I wanted to achieve. Some of these operations also needed to be used elsewhere, and some were global or sophisticated enough to require writing tests.
const devices = [
{ id: '001', version: 1, owner: { location: 'Texas', age: 21 } },
{ id: '002', version: 2, owner: { location: 'Texas', age: 27 } },
{ id: '003', version: 3, owner: { location: 'Arizona', age: 27 } },
{ id: '004', version: 2, owner: { location: 'Chicago', age: 24 } },
{ id: '005', version: 2, owner: { location: 'Arizona', age: 19 } },
{ id: '006', version: 3, owner: { location: 'Texas', age: 42 } },
];
// pretty complicated
(() => {
const locationAgeMap = devices
.map(device => device.owner)
.reduce((map, item) => {
if (!map[item.location]) {
map[item.location] = item.age;
} else {
map[item.location] += item.age;
}
return map;
}, {});
const locationLegalAgeOwnersMap = devices
.map(device => device.owner)
.filter(owner => owner.age >= 21)
.reduce((map, item) => {
if (!map[item.location]) {
map[item.location] = [item];
} else {
map[item.location].push(item);
}
return map;
}, {});
console.log({ locationAgeMap, locationLegalAgeOwnersMap });
})();
// still complicated but more compact
(() => {
const locationOwnersMap = devices
.map(device => device.owner)
.reduce((map, item) => {
if (!map[item.location]) {
map[item.location] = [item];
} else {
map[item.location].push(item);
}
return map;
}, {});
const locationAgeMap = Object.fromEntries(
Object.entries(locationOwnersMap).map(([location, owners]) => {
const ownersAge = owners.map(owner => owner.age).reduce((a, b) => a + b);
return [location, ownersAge];
})
);
const locationLegalAgeOwnersMap = Object.fromEntries(
Object.entries(locationOwnersMap).map(([location, owners]) => {
const filteredOwners = owners.filter(owner => owner.age >= 21);
return [location, filteredOwners];
}).filter(([_location, owners]) => {
return owners.length > 0;
})
);
console.log({ locationAgeMap, locationLegalAgeOwnersMap });
})();
I know, I know – this is not some trivial piece of code that easily illustrates what I want to convey. And I also know that the computational complexities of the two examples differ slightly, though in 99% of cases we don’t need to worry about that. The difference between the two is simple: both prepare a map of locations and device owners.
The first one prepares this map twice, while the second prepares it only once. The simplest example showing that the second algorithm is more maintainable: suppose we need to change the logic of creating this map, e.g., to exclude certain locations or apply other quirks called business logic. In the second algorithm, we modify only the way the map is built, while all the subsequent data transformations remain unchanged. In the first algorithm, we would have to tweak every place where the map is prepared.
And this is just an example – in practice, there are plenty of such cases when we need to transform or refactor a certain data model around the whole application.
The best way to keep up with various business changes is to prepare global tools that allow us to extract the information of interest in a fairly generic way – even at the cost of the 2–3 milliseconds we might lose to de-optimization.
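Such a tool could be a small, generic grouping helper; here is a sketch (the helper’s name and shape are my own, not from the original project):

```javascript
// A generic grouping helper: builds a map from a computed key
// to the list of items sharing that key.
const groupBy = (items, getKey) =>
  items.reduce((map, item) => {
    const key = getKey(item);
    (map[key] = map[key] || []).push(item);
    return map;
  }, {});

// Reused for the owners-by-location map from the earlier example:
const owners = [
  { location: 'Texas', age: 21 },
  { location: 'Arizona', age: 27 },
  { location: 'Texas', age: 42 },
];
const ownersByLocation = groupBy(owners, owner => owner.location);
// ownersByLocation.Texas holds two owners, ownersByLocation.Arizona one
```

If the business logic of building the map changes (say, excluding certain locations), only the key function or a pre-filter changes – every consumer of the map stays untouched.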
Being a programmer is a profession like any other – every day we learn new things, often making plenty of mistakes along the way. The most important thing is to learn from these mistakes, get better at the craft, and not repeat the same errors in the future. You cannot believe in the myth that your work will always be flawless. You can, however, draw on the experience of others to reduce the flaws accordingly.
I hope that reading this article will help you avoid at least some of the bad coding practices that I have encountered in my work. In case of any questions regarding best code practices, you can reach out to The Codest crew to consult your doubts.
Read more:
Data fetching strategies in NextJS
Web app security – XSS vulnerability
Web app security. Target=”_blank” vulnerability
*The title graphic comes from speednet.pl