Looking to level up your web development and architecture skills? Understanding the balance between simplicity and complexity in system design can help keep complex projects manageable and keep simple projects from becoming overly complicated.
First, it's important to recognize that there's a difference between complex and complicated. While both imply the existence of multiple moving parts, the word "complicated" carries a connotation that the word "complex" does not: a complicated solution is so cumbersome that it becomes part of the problem.
A solution that is complex can have many well-defined and necessary parts that cohesively function, while a complicated solution consists of too many parts that don't interoperate well, to the point of having a detrimental impact on performance, maintainability or business objectives.
Simple and complex are better imagined as ends of a spectrum rather than absolute categories. Any given solution usually lies somewhere on the scale between simple and complex, relative to other possible methods of implementation.
Solutions typically have a "perceived" complexity and an "actual" complexity. Perceived complexity relates to the number of moving pieces that appear on a system diagram. Actual complexity is that apparent level of complexity adjusted up or down for factors that add or remove hidden work.
For instance, most people would consider a single server that hosts a website to be on the simple end of the complexity spectrum. There are hidden costs to running that server, though, including ongoing management, patching, monitoring, and remediation of issues.
So, just because a solution appears to be simple doesn't mean that it actually is, especially at scale. Conversely, a solution may appear complex but be simple in practice. For example, a solution may be composed of multiple services, but those services may require little maintenance.
As a result, the simplicity or complexity of a solution isn't a direct reflection of whether that solution is "good" or "bad". Every solution involves tradeoffs, so whether a solution is appropriate for a given scenario has to be evaluated case by case against business requirements.
Generally, problems arise when "complex" solutions veer into the realm of "complicated" ones. These solutions tend to contribute to poor project outcomes when they fail to meet business requirements and create additional work for the business to maintain them.
My personal approach to avoiding this situation is to ensure that there's a justifiable reason behind additional levels of complexity.
In software development, there's the concept of the "minimum viable product": the smallest software product or service that can be produced while still fulfilling business objectives. In my opinion, the same concept can be applied to system design, such that a minimum viable system is the solution with the least complexity that satisfies business requirements.
When attempting to design a minimum viable system, it's a good idea to start by examining the business requirements that will likely influence the solution, including cost, performance, scalability, code duplication, security and regulatory compliance, business processes and operations, and maintenance.
Obviously, for many projects, a major factor is cost.
In traditional server-based models, additional complexity typically introduces additional cost. For instance, in a three-tier system, a company may deploy web servers, an API server, and a database system, and each additional tier adds to the overall cost.
In some cloud-based systems, additional complexity doesn't necessarily equate to higher overall costs, especially with the proliferation of on-demand cloud services that bill primarily for "active" use. These services typically have a higher per-unit processing cost, but for businesses with the right usage patterns, their total cost may be less than running servers 24/7.
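As a rough, back-of-the-envelope illustration of that tradeoff, the Python sketch below compares an always-on fleet with a pay-per-use service. Every number in it (server count, hourly rate, request volume, per-million-request price) is invented for the example, not a quote from any provider.

```python
# Back-of-the-envelope cost comparison: always-on servers vs. pay-per-use.
# All figures below are hypothetical and exist only to illustrate the tradeoff.

HOURS_PER_MONTH = 730

def always_on_cost(servers: int, hourly_rate: float) -> float:
    """Cost of running a fixed fleet 24/7, regardless of traffic."""
    return servers * hourly_rate * HOURS_PER_MONTH

def pay_per_use_cost(requests: int, cost_per_million: float) -> float:
    """Cost of an on-demand service billed only for 'active' use."""
    return (requests / 1_000_000) * cost_per_million

# A small always-on fleet vs. a bursty, relatively low-volume workload.
print(always_on_cost(servers=3, hourly_rate=0.10))                   # 219.00 per month
print(pay_per_use_cost(requests=5_000_000, cost_per_million=20.0))   # 100.00 per month
```

With these invented numbers the pay-per-use option wins, but the same function shows it losing once request volume grows, which is exactly why the decision depends on the business's usage patterns.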
It should be noted that simplicity doesn't always equate to lower costs. With many technologies, the price that vendors charge for simple solutions may actually be higher than expected due to the convenience that the technology delivers.
Performance is a strong factor in designing a system. If speed and uptime are critical to a business, those factors will likely push the design toward greater resource sizes, more available processing resources, and more redundancy.
If scalability is a concern, it's often far easier to scale up a system that's already built for scaling than to replace one that wasn't built for scaling from the start. The cost of switching from a non-scalable system to a scalable one is itself a hidden cost.
One available compromise for this situation is to design a system that is scalable but minimally scaled, where the infrastructure is in place to scale but the absolute minimum number of resources is deployed.
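One way to express "scalable but minimally scaled" in practice is an autoscaling group whose machinery can grow on demand but whose initial footprint is a single instance. Here's a minimal sketch using boto3 against AWS Auto Scaling; the group name, launch template, and subnet IDs are placeholders, and the launch template is assumed to already exist.

```python
import boto3

# Scalable but minimally scaled: the scaling machinery exists from day one,
# but only the minimum number of instances is actually deployed.
autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-tier",                # placeholder name
    LaunchTemplate={
        "LaunchTemplateName": "web-tier-template",  # assumed to exist already
        "Version": "$Latest",
    },
    MinSize=1,           # absolute minimum footprint today
    MaxSize=10,          # headroom to scale when it's actually needed
    DesiredCapacity=1,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # placeholder subnets
)
```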
Whether a business is concerned about duplicating coding efforts may, in part, dictate the complexity of a system. For instance, a three-tier system can centralize data access, reduce code duplication, and potentially improve security in exchange for introducing more complexity, as sketched below.
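To make "centralizing data access" concrete, here's a small, hypothetical sketch: one module in the API tier owns the query logic, and any other tier calls that function instead of embedding its own copy of the SQL. The module, table, and database names are all invented for the example.

```python
# api_tier.py -- the only layer that knows how customer data is stored (sketch).
import sqlite3  # stand-in for whatever database the real system uses

DB_PATH = "app.db"  # hypothetical database file

def get_customer(customer_id: int) -> dict:
    """The single, shared definition of 'fetch a customer'."""
    with sqlite3.connect(DB_PATH) as conn:
        row = conn.execute(
            "SELECT id, name, email FROM customers WHERE id = ?",
            (customer_id,),
        ).fetchone()
    return {"id": row[0], "name": row[1], "email": row[2]} if row else {}

# The web frontend and a nightly reporting job would both call get_customer()
# rather than each maintaining its own copy of the query.
```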
With the proliferation of privacy regulations, security should be at the forefront of factors influencing system design. Building more secure systems often requires additional resources to enforce separation of concerns between system components, which may increase cost.
Additionally, regulations can have a tremendous impact on the design of a system, including its complexity, due to imposed security requirements. For some industries, regulatory impact can create a high barrier to entry, especially if companies cannot afford to meet regulatory requirements and therefore can't enter the market.
Highly regulated environments may, for instance, benefit from compartmentalizing system components that interface with regulated data, so that different portions of a system can operate at different levels of complexity (where and if such regulations permit).
System design can have a direct impact on processes and operations of a business by either increasing or reducing the amount of manual processes that need to be completed by human personnel. Since people processes (vs machine processes) are typically far less scalable, systems that introduce some added complexity in exchange for reduced human input may provide a net-positive benefit.
Another major cost of a system is the maintenance required to keep it running smoothly. Interestingly, maintainability depends on how a given system is implemented and isn't necessarily related to its complexity.
For instance, a traditional web server will incur the maintenance costs of patching, monitoring, and remediation. A load-balanced cloud solution, on the other hand, while appearing more complex, may actually automate tasks like replacing faulty servers.
The perceived complexity of a system can differ from the actual complexity of a system. A system with many moving parts may seem complex (or even complicated) before accounting for factors that can modify the complexity of that system.
Here are some factors that can have a drastic impact on system complexity:
Automation can make the difference between a system that is complicated and one that is merely complex. Automation can occur at the code, system, or process level and generally focuses on turning manual actions into automated ones to reduce the complexity of a given system component.
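As a simple example, a recurring manual check (hit a health endpoint, restart the service if it's down) can be turned into a script and scheduled. This is a minimal sketch assuming a hypothetical service that exposes /health locally and runs as a systemd unit; the URL and unit name are placeholders.

```python
import subprocess
import urllib.error
import urllib.request

HEALTH_URL = "http://localhost:8080/health"   # hypothetical health endpoint
SERVICE_NAME = "example-app"                  # hypothetical systemd unit

def is_healthy() -> bool:
    """Return True if the service answers its health check."""
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
            return resp.status == 200
    except (urllib.error.URLError, TimeoutError):
        return False

def remediate() -> None:
    """The step an operator would otherwise perform by hand."""
    subprocess.run(["systemctl", "restart", SERVICE_NAME], check=True)

if __name__ == "__main__":
    if not is_healthy():
        remediate()
```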
Managed platforms and services, such as PaaS (platform as a service) or serverless, can drastically reduce the complexity of a system, if implemented in an organized manner, by passing the work of actually maintaining the underlying system along to the service provider.
In essence, while the platform or service might actually be complicated behind the scenes, that complexity is abstracted away and hidden from the organization using the platform or service.
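To make that abstraction concrete: in a function-as-a-service model such as AWS Lambda, a handler like the sketch below is essentially all the code the organization maintains, while provisioning, patching, and scaling of the underlying servers stay with the provider. The handler body is a placeholder.

```python
import json

# A complete serverless function: no server provisioning, OS patching, or
# scaling logic lives in the codebase -- the platform handles those concerns.
def handler(event, context):
    # 'event' carries the request payload; 'context' carries runtime metadata.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```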
It's important to note that certain programming models, such as serverless, can drastically increase complexity and even shift into the complicated zone if haphazardly implemented.
Code-based or configuration-based deployment of infrastructure can drastically reduce the management of complex solutions, enabling them to be deployed faster, repeatedly, and with greater confidence in the deployment (assuming that the various components are properly configured to begin with).
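A minimal sketch of that idea using one such tool, the AWS CDK for Python: the infrastructure is described as code, so the same definition can be deployed repeatedly and reviewed like any other source file. The stack and bucket names here are illustrative, not prescriptive.

```python
from aws_cdk import App, Stack
from aws_cdk import aws_s3 as s3
from constructs import Construct

class StaticAssetsStack(Stack):
    """Declares a versioned S3 bucket; deploying the stack creates it."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        s3.Bucket(self, "AssetsBucket", versioned=True)

app = App()
StaticAssetsStack(app, "StaticAssetsStack")
app.synth()  # emits a CloudFormation template that 'cdk deploy' applies
```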
The goal, then, is to balance the complexity of a solution against the particular problem. With the continuing release of new cloud-based managed services and ever-improving DevOps capabilities, the line between "simple" and "complex" solutions is becoming more and more blurred. What appears simple can actually be complex, and what appears complex can actually be simple.
Looking forward, this diminishing delineation between the two concepts will provide ample opportunity to craft solutions that better fit the individual needs of a project or service, rather than taking a one-size-fits-all approach and trying to force the project to fit the system.
When designing a new system, I find it's best to start small and add what's necessary (while accounting for factors such as performance, scalability, security, and regulations) to reach a minimum viable system.