5 Serverless Architecture Patterns You Should Stop Using (And What to Do Instead)
In this opinionated post, I’ll share five Serverless patterns that I no longer use and explain why. You'll also discover the Serverless best practices that have replaced them, helping you build more efficient and scalable Serverless applications.
AWS Serverless Architecture Patterns - So Many Options
The beauty of AWS Architectures and Serverless, in particular, is that, in most cases, multiple solutions exist for the same problem, each with its pros and cons.
Over the years, as I gained more development and design experience, my preferred architecture patterns evolved. I experienced serverless production challenges and realized that we all make mistakes and that changing your opinion is more than okay.
Let's review five patterns that I used in the past but don't use anymore. This doesn't mean you shouldn't use them; just go in with open eyes about their limitations and issues.
Number One: API Gateway Direct Integration
I'm going for the shocker right from the get-go: API Gateway direct integrations.
You can attach an "AWS integration" to an API Gateway path, connecting API Gateway directly to many AWS services.
It lets an API expose AWS service actions. You must configure both the integration request and integration response and set up necessary data mappings from the method request to the integration request, and from the integration response to the method response. - AWS Docs
Many of you know this pattern in another form: the "storage-first" pattern, a good interpretation of this capability where API Gateway saves a request payload to DynamoDB or SQS without a Lambda function in between. The pattern removes the risk of a Lambda function error, extra cost, and a potential cold start.
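For reference, the storage-first wiring to SQS typically relies on a request mapping template on the integration request. This is a sketch of the common SendMessage mapping (your template will vary with your integration setup, and you also need to set the Content-Type header to application/x-www-form-urlencoded):

```
Action=SendMessage&MessageBody=$util.urlEncode($input.body)
```

This single line is exactly where the friction starts: VTL mappings like this can only be verified after deployment, on the API Gateway side.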
It sounds good on paper, but then reality hits. First, defining these mappings isn't easy, and you can't test them in your IDE because they run strictly on the API Gateway side. You must deploy changes and use end-to-end tests (see my Serverless testing guide series here), which slows down the development cycle. Second, you hit the limitations fast.
Let me share an example from work.
I tried to use this pattern today at work (Feb 4th at the time of writing) because I wanted a simple API that quickly writes data to DynamoDB. But then I realized I needed more complicated authorization, so a plain IAM authorizer wasn't going to cut it; I needed a Lambda authorizer. Second, I needed more complex input validation than API Gateway supports. Third, observability is limited; I prefer Lambda's logs. Lastly, retry and failure handling don't exist, or at least I can't control them; API Gateway fails the request, and that's it.
So I used a Lambda function instead; it handled all of the above and better fit my requirements.
However, if your use case fits, you find the UX ok, and you DON'T expect more requirements that will cause you to rewrite the entire thing with a simple Lambda; by all means, go for it!
TL;DR While API Gateway direct integrations can reduce cost and latency in specific cases, they shouldn't be your default choice. A Lambda function usually provides better flexibility, error handling, observability, and security.
TL;DR number two: Step Functions direct integrations are pretty great.
Number Two: The Monolith Lambda
Lambdalith, monolith Lambda—there are so many names for one giant function that handles all your API desires. Think of the FastAPI experience but for Lambda; Powertools for AWS Lambda provides an excellent solution out of the box - but should you do it?
When I started Serverless, this was all I knew. We used Chalice. Chalice handled all the IaC wiring and provided a FastAPI-like development experience. One Lambda function to rule them all. And it worked, until it didn't match our requirements, and we refactored.
Let's cover the pros and cons of the Lambdalith approach.
Pros:
Faster deployment, just one function.
Easy to develop and beginner-friendly, feels similar to FastAPI.
Cons:
Cost and Memory—If one API path requires more memory to complete its flow effectively, all paths get it, too, as they are all the same function - extra cost and inefficient.
Security - in a CRUD API, the single function's role needs read/write/delete permissions; it must be able to do everything, which violates least privilege, an AWS best practice.
Deployment risk - one misconfiguration and your entire API is down instead of one path.
Scale—you can't limit the concurrency of one path or another; it's the same function. High traffic on a low-priority path might throttle your high-priority paths.
What's the alternative? Micro Lambda functions - a Lambda function per API path, each with a smaller purpose and domain.
Pros:
Scale per function - you have more control.
Security - least privilege. Each function has a role with the minimum required permissions.
Less deployment risk.
Optimize memory per the requirements of each function - pay extra only when required.
Cons:
It is more complicated to set up.
More resources to create mean a longer deployment time.
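To make the micro-function approach concrete, here's a minimal sketch of two single-purpose handlers (the paths, payload shapes, and IAM actions in the comments are illustrative, not from a real service):

```python
import json


def create_order_handler(event: dict, context: object) -> dict:
    """Handles POST /orders only; its IAM role would need just dynamodb:PutItem."""
    order = json.loads(event.get("body") or "{}")
    if "id" not in order:
        return {"statusCode": 400, "body": json.dumps({"error": "missing id"})}
    # here you'd call your data access layer to persist the order
    return {"statusCode": 201, "body": json.dumps({"created": order["id"]})}


def get_order_handler(event: dict, context: object) -> dict:
    """Handles GET /orders/{id} only; its role would need just dynamodb:GetItem."""
    order_id = (event.get("pathParameters") or {}).get("id")
    if not order_id:
        return {"statusCode": 400, "body": json.dumps({"error": "missing id"})}
    # here you'd fetch the order from the database
    return {"statusCode": 200, "body": json.dumps({"id": order_id})}
```

Each function gets its own role, memory setting, and concurrency limit, which is exactly the control the monolith Lambda takes away.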
So, which pattern should you choose?
I will risk sounding like a generic AWS solutions architect and answer, "it depends."
If you are new to Serverless or working on a POC or side project, by all means, use a monolith Lambda. However, if you are working on a soon-to-be production service, do yourself a favor and learn the micro-function way. We ended up rewriting the entire service and splitting it into multiple functions due to the cons I specified.
If you don't know where to start, use my Lambda Handler Cookbook Serverless Template project or any project mentioned in the Awesome Serverless Blueprints project. Use other people's experience and do it right from the beginning!
Number Three: Directly Invoking a Lambda Function
Yet another pattern I've followed in the past. I was so sure I had done something useful, but in hindsight, I hadn't.
In this pattern, one Lambda function invokes another by name via the AWS SDK, most likely within the same account.
There are two invocation types: asynchronous and synchronous.
With asynchronous invocation, the sin is increased coupling and the lack of a proper interface between your microservices.
In addition, you get both deploy-time and runtime coupling: the invoked function's name is passed as an environment variable at deploy time, for example, and your function needs that name at runtime to invoke it. If, for some reason, someone renames the function or moves the code to a container, your other service will break - not great!
With synchronous invocation, the sin is even greater; your invoking function waits idly until the other function returns a response. You pay extra but gain nothing.
That's why APIs (private or public) and event-driven architectures (queues, event buses, pub/sub) exist: to abstract and decouple services from one another to some degree and provide a solid interface for error handling, input validation schemas (OpenAPI, for example), and integration with other useful patterns like dead-letter queues and redrive (relevant to queues).
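As a sketch of the event-driven alternative, you can keep the event construction pure and unit-testable, and only touch the AWS SDK at the edge. The source, detail type, and bus name below are hypothetical placeholders:

```python
import json


def build_order_created_event(order: dict) -> dict:
    """Build a PutEvents entry for EventBridge (pure, so it's trivially testable).

    "orders.service", "OrderCreated", and "app-bus" are illustrative names,
    not from the original post.
    """
    return {
        "Source": "orders.service",
        "DetailType": "OrderCreated",
        "Detail": json.dumps(order),
        "EventBusName": "app-bus",
    }


# Publishing needs AWS credentials, so it stays out of the pure builder:
# import boto3
# boto3.client("events").put_events(Entries=[build_order_created_event({"id": "123"})])
```

The consumer subscribes to the bus with a rule, so renaming or replatforming the producer no longer breaks anyone.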
So, why did I do it? I thought it was a good idea to separate the domains between the two functions, so I followed the micro Lambda function rule, which states that it has one purpose or domain. However, both functions belonged to the same code repository, so the distinction was meaningless, and I paid extra for no good reason. I ended up refactoring and removing the invoked Lambda altogether. I moved the code to a shared logic module so the original function would use it as part of its logic layer.
To learn more about Lambda architectural layers, read my post "Learn How to Write AWS Lambda Functions with Three Architecture Layers."
TL;DR: Seriously, don't do it.
Number Four: Write All Code in the Handler
If you wrote a non-Serverless service, would you write all the code in one file or split it into modules?
The answer might surprise you: it depends. I discussed this subject at AWS re:Invent 2023 in my session with Heitor Lessa (see video below) about how we write and test Python serverless services.
The short explanation is that if you write a small function, like a cron job handler, the hexagonal/architectural-layers pattern might be overkill.
But for any other use case, you should write your code in layers:
Handler - initialization of configuration and global variables, and input validation.
Logic - the business logic.
Data access layer (DAL) - the interface for interacting with an external service or database.
[Diagram: hexagonal/architectural layers]
Each architectural layer (not to be confused with Lambda layers) is separate, has a clear responsibility, can be tested in isolation, and can even be replaced. In one of my services, we had to replace DynamoDB with Aurora. Since we wrote the DynamoDB integration behind a DAL layer that abstracted DynamoDB, the swap is relatively simple.
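A minimal sketch of the three layers, with an in-memory DAL standing in for a DynamoDB (or Aurora) implementation (all names here are illustrative; swapping storage means writing one new DAL class and changing nothing else):

```python
from abc import ABC, abstractmethod
from typing import Optional


class OrderDal(ABC):
    """DAL interface: the only code allowed to know about the storage engine."""

    @abstractmethod
    def save(self, order_id: str, payload: dict) -> None: ...

    @abstractmethod
    def get(self, order_id: str) -> Optional[dict]: ...


class InMemoryOrderDal(OrderDal):
    """Stand-in for a DynamoDB or Aurora implementation; handy in unit tests."""

    def __init__(self) -> None:
        self._items: dict = {}

    def save(self, order_id: str, payload: dict) -> None:
        self._items[order_id] = payload

    def get(self, order_id: str) -> Optional[dict]:
        return self._items.get(order_id)


def create_order(dal: OrderDal, order_id: str, payload: dict) -> dict:
    """Logic layer: business rules only, unaware of the storage details."""
    dal.save(order_id, payload)
    return {"id": order_id, **payload}


def handler(event: dict, context: object, dal: OrderDal = InMemoryOrderDal()) -> dict:
    """Handler layer: validates input and delegates to the logic layer."""
    order_id = event.get("order_id")
    if not order_id:
        return {"statusCode": 400}
    return {"statusCode": 201, "body": create_order(dal, order_id, event.get("payload", {}))}
```

Because the logic layer takes the DAL as a parameter, you can test business rules against the in-memory DAL without touching AWS.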
Check out my AWS Handler Cookbook Template repository to see a concrete implementation of these principles.
If you want the longer answer in video form, here's my re:Invent session at the point where I discuss it:
TL;DR: Do it in some cases, but mostly don't.
Number Five: Use the Wrong EventBridge Service for Scheduled Tasks
This one is short and sweet.
If you need a scheduled task, use EventBridge Scheduler. Don't use the older EventBridge rules. AWS didn't deprecate rules with a schedule pattern, which is a true "customer obsession" act, but that doesn't mean you should keep defining them. I've seen many developers copy-paste infrastructure-as-code definitions from a previous service that uses rules instead of using the better, cheaper EventBridge Scheduler.
To learn more about EventBridge Scheduler and why it's awesome, check out my blog post, "Build AWS Serverless Scheduled Tasks with Amazon EventBridge and CDK."
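As a quick sketch, the Scheduler API takes a schedule expression and a target directly; here the request is built as a pure dict (the names and ARNs are placeholders), with the actual SDK call left commented since it needs AWS credentials:

```python
def build_schedule_request(name: str, expression: str, target_arn: str, role_arn: str) -> dict:
    """Build kwargs for EventBridge Scheduler's CreateSchedule call.

    The request shape follows the CreateSchedule API; FlexibleTimeWindow
    is a required field. All argument values are hypothetical.
    """
    return {
        "Name": name,
        "ScheduleExpression": expression,  # e.g. "rate(5 minutes)" or "cron(0 2 * * ? *)"
        "FlexibleTimeWindow": {"Mode": "OFF"},
        "Target": {"Arn": target_arn, "RoleArn": role_arn},
    }


# Actual creation requires AWS credentials:
# import boto3
# boto3.client("scheduler").create_schedule(
#     **build_schedule_request("nightly-cleanup", "cron(0 2 * * ? *)",
#                              lambda_arn, scheduler_role_arn))
```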
And while we are discussing EventBridge, use EventBridge Pipes; they are awesome.
Summary
In this post, I shared lessons learned over five years of developing Serverless services. Don't forget: it's my opinion; use it wisely, or don't!