I have been wanting to write an architecture post for some time, as I have always been interested in Software Architecture. This post is not about any particular architectural style, but rather about the pros and cons of the different styles and the quality attributes found within them. We will start by defining software architecture as:
"The set of structures needed to reason about the system, which comprises software elements, relations among them, and properties of both." - from Documenting Software Architectures by Clements et al.
This is a high-level definition, which shows why it is hard to talk about software architecture - it is a very broad topic! What this post will describe is which qualities you need to build into the elements, relations and properties of your software architecture mentioned in the definition above.
The quality attributes covered in this post are: availability, modifiability, performance and testability.
Why not just prescribe a specific architectural style?
Instead of delving into different architectural styles, we will focus on the features and attributes of good software architecture. Every year there is a new trend in software architecture, and this has accelerated due to the availability and convenience of the cloud and the use of Docker. As a result, we have shifted away from large monolithic applications towards smaller distributed applications, as these are now easier to handle and maintain. These two approaches are more commonly known as monoliths and microservices.
However, the quality attributes of architectures have not changed. Systems still need to be available, modifiable, performant and testable. Therefore, this post will avoid pointing you towards a specific "go-to" architecture and instead give you an idea of what good architecture is.
You will encounter quality attributes in the book "Software Architecture in Practice": these are measurable non-functional requirements that describe intrinsic qualities of software architectures, such as availability, maintainability, performance and reliability.
Wikipedia has an extensive list of software system quality attributes; here are a few subjectively picked attributes that have great significance in systems. We will start with availability.
Availability, or being "highly available", is a term used for systems that have an adequate (or better) uptime. A system that is available is there and ready to carry out tasks when it needs to be. This does not necessarily mean that it has 99.9% uptime. For some systems it is okay to be down for some time, such as the world's stock exchanges, which are often only open 8 hours at a time. They only need to be available during a certain portion of the day and are fine not carrying out their tasks outside opening hours. Internal systems in companies are another example: a time registration system can be down during the weekend.
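To make uptime figures concrete, here is a small sketch of the arithmetic behind an uptime target - the function name is just for illustration:

```python
# Illustration: the yearly downtime budget implied by an uptime target.
def downtime_per_year(uptime_percent: float) -> float:
    """Return the allowed downtime in hours per year for a given uptime target."""
    hours_per_year = 365 * 24  # 8760 hours in a (non-leap) year
    return hours_per_year * (1 - uptime_percent / 100)

print(round(downtime_per_year(99.9), 2))   # "three nines": ~8.76 hours/year
print(round(downtime_per_year(99.99), 2))  # "four nines": ~0.88 hours/year
```

So 99.9% uptime still allows almost nine hours of downtime per year - far more than users of an always-on service would tolerate in one outage, but plenty for a system that is only needed during business hours.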
However, many systems do not have this luxury and need to be reachable at all hours of the day, every day of the year. We expect many systems and services to be available to us around the clock - when was the last time you saw Google search down? Yeah.. it is quite rare!
Being highly available does not necessarily mean that you receive the same level of service all the time. An example could be that you are able to send text messages on Facebook Messenger but not images for a while. Messenger as a whole is still available, but in a degraded state. A system that can still operate even when errors occur can be considered fault tolerant: the whole system may not be operational, but you can still do your main tasks.
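The messenger example above can be sketched in a few lines. This is a minimal, hypothetical illustration of graceful degradation: the image service is simulated as broken, but the message still goes through.

```python
# Hypothetical sketch: if the image service fails, the message is still
# delivered in a degraded state instead of failing entirely.
def fetch_image(url):
    raise ConnectionError("image service is down")  # simulate a failing dependency

def render_message(text, image_url=None):
    attachment = ""
    if image_url:
        try:
            fetch_image(image_url)
            attachment = " [image attached]"
        except ConnectionError:
            attachment = " [image unavailable]"  # degrade instead of failing the whole message
    return text + attachment

print(render_message("hello", "https://example.com/cat.png"))
# hello [image unavailable]
```

The key design choice is that the failure of a secondary feature (images) is caught and absorbed, so the primary feature (text) stays available.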
Availability is often handled through redundancy, where a system has several "nodes" that can carry out requests. If one node fails, the rest can still handle requests and the system remains available. Having several nodes can also be a scalability tactic to improve performance (more on that under performance).
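A minimal sketch of that redundancy idea, with hypothetical node handlers: if one node fails, the request is simply retried on the next replica.

```python
# Hypothetical nodes: node_a is down, node_b is healthy.
def node_a(request):
    raise ConnectionError("node A is down")

def node_b(request):
    return f"handled {request!r} on node B"

def handle_with_failover(request, nodes):
    for node in nodes:
        try:
            return node(request)
        except ConnectionError:
            continue  # this node failed; try the next replica
    raise RuntimeError("all nodes are down - the system is unavailable")

print(handle_with_failover("GET /status", [node_a, node_b]))
# handled 'GET /status' on node B
```

The system only becomes unavailable when every node has failed, which is exactly why redundancy raises availability.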
Other quality attributes have an impact on availability. Performance is one: if a system is not performing, it can be deemed unavailable - if a service took over 20 seconds to reply, you might think it was down. If the system has to be taken down in order to be modified (modifiability), it also becomes unavailable for a period of time; the ability of a system to be modified while still operational touches both availability and modifiability. Reliability and availability are closely tied together, as an unavailable system is an unreliable system. For a system to be deemed reliable it has to have high availability, and a reliable system produces the correct output given its input.
The quality attribute modifiability can be described as an architecture's (or its components') receptiveness to change. It is often measured by how quickly a change can safely be developed and moved into production. It therefore quickly ties into other quality attributes: our system often has to remain available while we make changes, and the change has to be swift and adequately tested.
How fast changes can be made is related to how easy it is to reason about the code and make isolated changes (changes with no side effects). This is also known as clean code: code that is easy to understand, highly cohesive and adequately decoupled. Without these qualities, the code base becomes hard to change.
Maintainability and deployability are quality attributes that are close to modifiability:
Maintainability is "the measure of the ability of an item to be retained in or restored to a specified condition". When most people hear the term maintainability, they think of how quickly or easily a fault can be fixed in a system's production environment. Maintenance can also be how easy the system is to upgrade, or simply how easy it is to restart (this fixes all sorts of issues temporarily).
Deployability is how easily and safely new software can be moved into an environment. In practice this often means moving binaries, or building an image and creating a new container from it. Prior to deployment, automated tests are often run to ensure the quality of what is being deployed. Deployability is closely related to portability, which these days is often handled by using Docker.
Why do we make software? Besides producing highly reliable results, software enables us to do our work faster (and in parallel). For example, making a payment these days is almost as fast as you can swipe right. But what makes a system high performing? Its ability to finish its tasks in an acceptable time. Why within an acceptable time? It is rare that software being too fast becomes a problem; we generally do not mind fast-loading services, webpages or mobile apps. Even if Netflix takes one second to start, we would not mind if it started faster. Performance is therefore often measured, and improved when it is no longer adequate for the system.
One way to achieve high performance is to make your system scalable. A tactic that is often employed is horizontal scaling, where several identical applications share the load of work. This requires a mechanism to distribute the load, often a load balancer. A side effect of this is improved availability: if one node fails, another can take over. The load balancer itself becomes a single point of failure, but it is usually well-tested, standard software.
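The distribution mechanism can be sketched with a simple round-robin strategy - a minimal illustration with made-up node names, not a production load balancer:

```python
import itertools

# A minimal round-robin load balancer sketch; node names are hypothetical.
class LoadBalancer:
    def __init__(self, nodes):
        self._cycle = itertools.cycle(nodes)  # endlessly repeat the node list

    def route(self, request):
        node = next(self._cycle)  # pick the next node in turn
        return f"{node} handles {request!r}"

lb = LoadBalancer(["node-1", "node-2", "node-3"])
for i in range(4):
    print(lb.route(f"request-{i}"))
# node-1 handles 'request-0'
# node-2 handles 'request-1'
# node-3 handles 'request-2'
# node-1 handles 'request-3'
```

Real load balancers add health checks so that failed nodes are taken out of the rotation, which is where the availability benefit comes from.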
A common approach in monolithic applications is "vertical scaling", where a single application is improved upon, or more RAM/CPU/power is added to it. However, this is a rare approach these days, as there are limits to how much RAM and CPU you can add and how much you can improve your application.
Besides scaling, a quality attribute closely related to performance is efficiency. Your software may be performing, but that does not mean it is efficient. Lack of efficiency often boils down to poorly designed software, such as databases lacking indexes or several nested for-loops in an application. Many applications live on without any problems even though they are not efficient, because the performance requirements are low or easily attainable. As mentioned previously, it is rare that a system or application is too fast, and if it is efficient, it is better prepared for the future should the load on it increase.
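The nested-loop problem mentioned above can be shown in a few lines. Both functions below produce the same result, but the first scans one list for every element of the other (quadratic time), while the second builds a set first - the in-memory analogue of adding a database index:

```python
# Finding common elements two ways: O(n*m) nested scan vs O(n+m) with a set.
def common_nested(a, b):
    return [x for x in a if x in b]       # 'in' on a list scans it every time

def common_indexed(a, b):
    lookup = set(b)                        # one pass to build an "index"
    return [x for x in a if x in lookup]   # O(1) membership checks

assert common_nested([1, 2, 3], [2, 3, 4]) == common_indexed([1, 2, 3], [2, 3, 4])
```

Both versions "perform" fine on small inputs, which is exactly why such inefficiencies survive - until the load grows.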
Wikipedia describes testability as:
Software testability is the degree to which a software artifact (i.e. a software system, software module, requirements- or design document) supports testing in a given test context. If the testability of the software artifact is high, then finding faults in the system (if it has any) by means of testing is easier.
It is about our software's ability to demonstrate its faults. When we test, we give our application, class, module or whichever level of abstraction we are testing some input and measure its output. We measure it by asserting that it produces the correct output for a given input.
We can write and execute (automated) tests at different levels of our system. Most developers are used to writing unit tests, as they are easy to write and fast to execute, with no external dependencies required. Some use integration tests, where a subset of the system is tested together. Finally, there are system-wide tests where the whole system is tested as a whole; unlike unit and integration tests, these are often not automated but performed manually. System tests can also be non-functional, testing for example usability, security or efficiency.
Systems that have a high degree of testability have certain characteristics such as:
- Its components can be isolated and their state can be controlled
- Its components have separation of concerns
- Its components are highly observable, making it possible to measure their results
There are many approaches to improving testability, such as mocking, stubbing or using impostors (collectively known as test doubles). Here a part of a system, function or application is run with dependencies meant for testing. These create controlled conditions, such as triggering a flow, providing a specific set of values or responding in a specific way. For example, if you wish to test a scenario where an error occurs, it is easier to test with a dependency that always fails. If you have time constraints - for example, a program that acts differently after work hours - you would want to control time itself and use an implementation of your date/time library that always states it is after work.
Some call these tests "in isolation"; the goal is to test under controlled conditions, and the easier this is, the higher the testability of your system.
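The "control time itself" example can be sketched like this. The `greeting` function and its threshold are hypothetical; the point is that the clock is an injectable dependency, so a test can swap in a fake that always says it is after work:

```python
from datetime import datetime, time

# The function takes its clock as a parameter, defaulting to the real one.
def greeting(now=datetime.now):
    if now().time() >= time(17, 0):  # assumed cutoff: 17:00
        return "after work hours"
    return "during work hours"

# A test double for time: always reports 18:30.
def fake_after_hours():
    return datetime(2024, 1, 1, 18, 30)

print(greeting(now=fake_after_hours))  # after work hours
```

Because the dependency is controlled, this test gives the same result no matter when it runs - the essence of testing in isolation.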
Testability is not to be confused with test coverage. Test coverage shows how well the code or system is covered by tests. With a high degree of testability it is easier to get high coverage; conversely, with a low degree of testability it is harder, or perhaps impossible, to cover the whole system.
That is it
I hope you enjoyed my post on my favorite traits of good software architectures and systems. If you think something is missing, or if you liked this post, please leave a comment down below.
If you want some further reading on software architecture, check out my list of my favorite software architecture books.