The great promise of "serverless" computing is that it will remove the headaches of configuring and administering the environments that run websites, web applications, and APIs. Having seen the future, I can tell you that it looks a lot like the past.
Back in the (fairly recent) good ol' days of "shared hosting", deploying a website typically meant connecting to an FTP server, uploading your site's files, and voilà: website served. Shared hosts often did the potentially tricky stuff for you, like pointing your domain name's DNS at the shared hosting setup or installing an SSL certificate. All of this meant that while you might pay a premium for certain things, like an SSL certificate, from the perspective of trying to deploy a website, shared hosting just worked.
So, what happened? Shared hosting had a lot of items in the "cons" column, but there were three major problems that really stuck out:
First, it was typically insecure. The insecurity started with the method of transmission (typically plain old FTP without encryption) and ran all the way through the stack: the websites shared not only the same hardware but also the same instance of the operating system.
Second, it didn't scale. Traditional shared hosting providers just stacked up websites on a machine, often to the point of overload where the service provided to each individual website would degrade.
Third, it wasn't customizable. Because the machine was shared amongst a bunch of customers, you weren't allowed to log in and just install things that you needed. This restriction meant that if you needed something that wasn't provided, you were typically out of luck.
Next came the Virtual Private Server (VPS), which tried to fix the first and third problems. A VPS provided isolation between customers running on the same machine by giving each customer their own little slice of the machine, running its own instance of an operating system. But VPSes were typically expensive, were limited in how they could be set up (according to the preordained architecture configured by the provider), which led to issues with scaling, and they pushed the burden of configuration and ongoing management back onto the person or company trying to host the website.
Now, if you wanted to run a website on a VPS, you often needed to be a sysadmin as well as a web developer. There were, of course, additional services you could pay for to help with these types of issues, but they, again, increased the cost, and they didn't change the fact that the configuration of the VPS was often fixed, making scaling difficult.
If you wanted security, scalability, and customizability all at the same time, you either needed to host your own equipment or choose a provider that offered dedicated hosting, which provided hardware that was dedicated to a single customer and often allowed the customer to specify other hardware requirements, such as load balancing. The first option was a lot of work (and could be expensive), and the second option was expensive.
Enter cloud providers, such as Amazon Web Services. They took the benefits of dedicated hosting (security, scalability, and customizability) and made comparable hosting less expensive and more accessible by using virtual machines to share hardware between tenants (customers) while isolating them, and by automating the process of creating end-to-end hardware solutions based on specifications provided by their customers. (If you really want, you can use "dedicated" machines through AWS.)
The downside? Setting up complicated infrastructure is ... complicated. Network configuration, NAT, load balancers, firewalls, ports, software configuration, and so on: it requires a lot of knowledge and a lot of ongoing maintenance. AWS has created other services, such as Elastic Beanstalk, that can help configure many of these services, but it's still essential to have a good handle on what's happening under the hood, because maintenance of the solution and of the virtual machines is still largely up to the customer (again, barring the purchase of fully managed services).
A technology called "containers" has tried to provide a method for repeatably configuring the software (e.g., operating system) components of solutions, even across cloud providers, but the customer is still often involved in the management of containerized solutions, which makes them less than ideal.
Finally, we arrive at serverless computing.
There's some debate as to what technologies actually constitute serverless computing. For instance, if you search the web, you'll find discussions of PaaS (Platform as a Service) vs. FaaS (Functions as a Service) vs. BaaS (Backend as a Service).
My opinion is that serverless simply means that developers don't have to deal with servers, and that an architecture achieving this level of abstraction can come in many forms. Defining serverless in this manner brings us back to the original discussion of shared hosting: serverless is the future, and it's the past. It brings back the ease of deployment that existed before the hosting industry forced many developers to double as sysadmins, and it mitigates the negatives that accompanied that initial ease by improving security, scalability, and customizability.
Is it the future? I believe so. Of course, your conclusion may differ based on the technologies you utilize, your stack choices, and the serverless platforms that you use.
In this article, I'll discuss serverless on AWS, utilizing the API Gateway and Lambda services and running ASP.NET Core, which is a cross-platform framework. Lambda supports .NET Core as one of its runtimes, which means that Lambda functions can be authored in .NET. You could, therefore, create an API in API Gateway and route each API method to an appropriate Lambda function authored in .NET Core.
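To give a sense of what that function-per-route approach looks like, here is a minimal sketch of a single handler built against the Amazon.Lambda.Core, Amazon.Lambda.APIGatewayEvents, and Amazon.Lambda.Serialization.Json packages. The namespace, class, and method names are hypothetical, and the details will vary with how the API Gateway integration is configured:

```csharp
using System.Collections.Generic;
using Amazon.Lambda.Core;
using Amazon.Lambda.APIGatewayEvents;

// Register the JSON serializer Lambda uses to deserialize the API Gateway event.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace HelloApi // hypothetical project name
{
    public class Functions
    {
        // Handler wired to a single API Gateway method, e.g. GET /hello.
        public APIGatewayProxyResponse GetHello(APIGatewayProxyRequest request, ILambdaContext context)
        {
            context.Logger.LogLine($"Handling {request.HttpMethod} {request.Path}");

            return new APIGatewayProxyResponse
            {
                StatusCode = 200,
                Body = "{\"message\":\"Hello from Lambda\"}",
                Headers = new Dictionary<string, string> { ["Content-Type"] = "application/json" }
            };
        }
    }
}
```

Each route in API Gateway would then be wired to its own handler like this one via a Lambda proxy integration, which is workable but means the routing lives in API Gateway rather than in your application.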
Another, and in my opinion better, option is to utilize an AWS-provided library, Amazon.Lambda.AspNetCoreServer, which lets developers turn any standard .NET Core application (whether an API or a full-fledged web application, HTML and all) into a serverless application with the addition of just a few lines of code.
This library adds a Lambda function entry point to the .NET Core application that translates requests from API Gateway into the ASP.NET Core hosting pipeline (standing in for a traditional web server such as Kestrel) and translates the responses back, so that a Lambda-deployed .NET Core application can take full advantage of the ASP.NET Core pipeline.
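For reference, the "few lines of code" amounts to something like the following entry point class. This is a sketch: it assumes the application already has a conventional Startup class, the namespace is mine, and the exact base class and setup can vary between versions of the package:

```csharp
using Microsoft.AspNetCore.Hosting;

namespace MyWebApp // hypothetical namespace; Startup is the application's existing startup class
{
    // Lambda invokes this class (via its inherited FunctionHandlerAsync method) instead of
    // Program.Main. The base class, from the Amazon.Lambda.AspNetCoreServer package, converts
    // API Gateway proxy requests into ASP.NET Core requests and converts the responses back.
    public class LambdaEntryPoint : Amazon.Lambda.AspNetCoreServer.APIGatewayProxyFunction
    {
        protected override void Init(IWebHostBuilder builder)
        {
            builder.UseStartup<Startup>();
        }
    }
}
```

The Lambda function's handler setting is then pointed at this class rather than at Program.Main, and the rest of the application, including its routing, stays unchanged.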
Based on my experiences utilizing these technologies to deploy a serverless .NET Core application on AWS (using a standard API Gateway setup and non-VPC Lambda), I've reached a number of conclusions.
(Disclaimer: Details described in this article are subject to change; these are my experiences, opinions, and conclusions; serverless (or certain implementations of serverless) may not be appropriate for your use or particular scenario; and your experiences, opinions, and conclusions may vary. You are solely responsible for your own actions or inaction and any consequences that arise.)
Pros:
Cons:
I think that the slowness is, in part, an issue with cold start times related to the .NET Core framework. I have hope that this will improve in future releases. For instance, .NET Core 2.1 (not yet available on Lambda as of the writing of this article) appears to have made many performance improvements, and tiered compilation looks promising.
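For what it's worth, tiered compilation is an opt-in feature in .NET Core 2.1, so once a 2.1-based runtime reaches Lambda, trying it should be a matter of flipping a project-file switch. A sketch based on the 2.1 opt-in flag (it can also be enabled with the COMPlus_TieredCompilation environment variable):

```xml
<!-- Opt in to tiered compilation (off by default in .NET Core 2.1). -->
<PropertyGroup>
  <TieredCompilation>true</TieredCompilation>
</PropertyGroup>
```

Whether this meaningfully shortens Lambda cold starts remains to be seen, but it targets exactly the kind of startup-time JIT cost that cold starts expose.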
Additionally, there doesn't appear to be a way to really influence how and when Lambda decides to spin up a completely new instance of a function to respond to a request. Since a single visit to a full web application can spawn many simultaneous requests, a single page load can apparently trigger cold starts for some of those requests, which in turn can lead to slow response times for certain elements of the page.
In my opinion, the general outlook for serverless .NET Core on the AWS platform is promising, and the potential upside is huge. If some of these wrinkles were worked out, it could truly be spectacular. I'm hopeful that, with time, they will be, and I'm looking forward to a serverless future.