1.0 Preface
I currently work at a financial technology company that specializes in providing (1) financial trading data and (2) macro asset allocation solutions. The company is developing a “Macro Portfolio” system to support other departments, such as Macroeconomic Analysis, Trading Systems, Risk Management, Financial Deep Learning, and Cybersecurity Engineering. The financial services department will use the new “Macro Portfolio” system to support its financial services business.
The new “Macro Portfolio” system must comply with the “Least Effort Principle”, which includes (1) agile development and change to respond to the volatility of financial markets in the VUCA era, and (2) agile deployment to reduce time wasted communicating with other departments and to automate routine work.
However, the current development engineer (yes, that’s me!) finds it difficult to develop the “Macro Portfolio” system across multiple departments: functional requirements come from the Macroeconomic Analysis department, functional feedback from the Risk Management department, code changes from the Financial Deep Learning department, and code reviews from the Cybersecurity Engineering department.
After two weeks of “DIVE DEEP investigation” and meetings with a “single leader” in each department, the real issues turned out to be that (1) the project took too long to deploy, and (2) automated deployment had not been achieved.
If we release functionality as a gray release first and give the “Updated API Manual” to other departments to try by every Thursday, we can increase overall project development speed by 30%. In other words, customers will be able to experience new “Macro Portfolio” functionality in 10 days instead of 13.
Therefore, we decided to deploy the Macro Portfolio system using the AWS DevOps pipeline to accelerate the entire “Prototype -> Development -> Deployment -> Use -> Feedback -> Modification” project lifecycle.
“Customer Obsession” matters! – Amazon’s 16 Leadership Principles
1.1 Goals of the AWS DevOps Pipeline
In October 2024, I took a week to read Akshay Kapoor’s (AWS Senior Cloud Infrastructure Architect) AWS DevOps Simplified: Build a solid foundation in AWS to deliver enterprise-grade software solutions at scale. Then I understood what Raymond Tsang (AWS Senior Technical Trainer) told me at the Hong Kong re:Invent Recap in February 2024: “There is no absolute right solution, so even if you use only a small portion of AWS services and the result is better, then that’s a good solution.”
Now I totally agree with Raymond Tsang. The “AWS DevOps Pipeline” architecture delivered successful results, and remarkably, only a few AWS services were used. For example, we didn’t use Instance Auto Scaling, Amazon ECS (Elastic Container Service), Elastic Load Balancing (ELB), AWS CloudFormation, etc.
“Invent and Simplify” matters! – Amazon’s 16 Leadership Principles
“Any damn fool can make it complex. It takes a genius to make it simple.” – Ray Dalio, Principles
In my experience, the success is due to the following: (1) Other departments want small features in small increments, not complete solutions. (2) More simplicity means a better understanding of the problem’s root cause. So (3) it’s faster and more efficient to “Deliver Results”, even with just a small portion of AWS services.
Because the financial services industry is specialized, each department is responsible for different goals to help customers get value. Financial DevOps should not be a limitation or obstacle, but rather a way to better serve other departments and customers in different situations.
At the same time, I understood that “Simplify” and “Insist on the Highest Standards” are not in conflict. Although we simplified the whole project, the simplicity is a result of the Highest Standards, because we (1) performed a “DIVE DEEP investigation” and (2) understood the root cause of the problem.
Although you may not believe it, in emergency situations we use Excel to calculate the Black-Scholes model, because Excel is the fastest and easiest way to solve an urgent problem, and it best serves “Customer Obsession” and “Deliver Results”.
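The reason a spreadsheet suffices here is that the Black-Scholes call price is a closed-form formula. As a minimal Python sketch (not part of the production system), the same calculation an Excel sheet would perform:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF expressed via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call.
    S: spot, K: strike, T: years to expiry, r: risk-free rate, sigma: volatility."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# An at-the-money one-year call: S=100, K=100, r=5%, sigma=20%
price = bs_call(100, 100, 1.0, 0.05, 0.2)  # ≈ 10.45
```

Excel implements exactly the same two `NORM.S.DIST` terms; the point is that the fastest tool that delivers the answer is the right tool in an emergency.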
1.2 “AWS DevOps Pipeline” Architecture
The services used in the AWS DevOps pipeline:
- GitHub Actions
- AWS CodeDeploy
- Amazon EC2
- IAM
Walkthrough:
- The development engineer commits the code via GitHub Push.
- GitHub Actions triggers the workflow; the IAMROLE_GITHUB_ARN role authorizes access to AWS resources.
- GitHub Actions triggers AWS CodeDeploy.
- AWS CodeDeploy triggers a deployment to the Amazon EC2 instances.
- AWS CodeDeploy pulls the GitHub resources and deploys them to the Amazon EC2 instances.
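The deploy step above ultimately asks CodeDeploy to deploy one specific GitHub commit. A minimal Python sketch of that request, under assumptions: the application and deployment-group names (`macro-portfolio-app`, `macro-portfolio-dev-group`) and the repository are hypothetical, and the CodeDeploy application must already be connected to GitHub.

```python
def build_deployment_request(app, group, repo, commit_sha):
    """Build the kwargs for boto3's codedeploy.create_deployment(),
    pointing CodeDeploy at a GitHub commit instead of an S3 bundle."""
    return {
        "applicationName": app,
        "deploymentGroupName": group,
        "revision": {
            "revisionType": "GitHub",
            "gitHubLocation": {
                "repository": repo,      # "owner/repo"
                "commitId": commit_sha,  # the SHA GitHub Actions just pushed
            },
        },
    }

# Hypothetical names, for illustration only:
request = build_deployment_request(
    "macro-portfolio-app",
    "macro-portfolio-dev-group",
    "example-org/macro-portfolio",
    "0123456789abcdef0123456789abcdef01234567",
)
# The workflow would then send it with:
# boto3.client("codedeploy").create_deployment(**request)
```

The GitHub Actions step only needs AWS credentials (via the IAM role) and this one API call; CodeDeploy does the pull-and-install work on the EC2 instances.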
1.3 Optimization
This architecture is only suitable for agile delivery and development environments that are “running small steps quickly”. For a production environment, we would also need Instance Auto Scaling, Amazon ECS (Elastic Container Service), Elastic Load Balancing (ELB), etc.
In addition, to make the “AWS DevOps pipeline” mechanism easier to understand, the following tutorials remove Python and Backtrader from the application layer and use only a simple Nginx server with static web pages.
1.4 Applying GenAI Tools - Amazon Q in Financial Services DevOps
Amazon Q is an excellent GenAI chatbot tool for development work.
In September 2024, I found an anomalous charge on my AWS bill, but I didn’t know why EC2 Elastic IPs (EIPs) had become a paid service. So I asked Amazon Q, and in just 10 seconds I had the answer: idle EIPs are charged.
- Open the EC2 console and select Network & Security -> Elastic IPs
- Release the idle EIPs
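The console steps above can also be scripted. A hedged Python sketch that filters idle addresses out of a `describe_addresses()`-shaped response; the sample data below is fabricated for illustration:

```python
def find_idle_eips(addresses):
    """Return allocation IDs of Elastic IPs with no association.
    An EIP with no "AssociationId" is not attached to anything,
    which is exactly the idle state that incurs charges."""
    return [a["AllocationId"] for a in addresses if "AssociationId" not in a]

# Sample shape of ec2.describe_addresses()["Addresses"] (fabricated IDs):
sample = [
    {"AllocationId": "eipalloc-1", "AssociationId": "eipassoc-1"},  # in use
    {"AllocationId": "eipalloc-2"},                                 # idle -> charged
]

idle = find_idle_eips(sample)
# Each idle allocation could then be released with:
# ec2.release_address(AllocationId=alloc_id)
```

This mirrors what the console does: list the Elastic IPs, pick the unassociated ones, and release them.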
In addition, I used Amazon Q to learn about AWS DevOps. The following is my experience with Amazon Q while applying knowledge from the AWS Certified Machine Learning – Specialty certification exam.
Since AWS CodePipeline is an AWS-centric service, integrating it with GitHub Actions does not meet the Least Effort Principle. Therefore, AWS CodePipeline is not the best approach.
I knew that AWS CodePipeline is an AWS-centric orchestrator whose stages are built from services such as CodeBuild and CodeDeploy.
I asked Amazon Q and learned that CodeDeploy alone was the AWS service I needed, and that paired with GitHub Actions it would be the best “Least Effort Principle” solution.
Reference Articles:
After reading a tutorial on the AWS DevOps blog, I found its solution very similar to mine, although it deploys to Amazon EKS.
Through Amazon Q, I quickly understood the differences and similarities between AWS services and applied them more productively in my daily work. Therefore, I highly recommend using AI tools for productivity.
1.5 Summary
I shared the current situation in the financial services industry and then applied Amazon’s 16 Leadership Principles, together with Akshay Kapoor’s and Raymond Tsang’s insights, to solve business and technical pain points through AWS cloud services.
Finally, the key points of this chapter:
1.5.1 Principles
- The new “Macro Portfolio” system must comply with the “Least Effort Principle”, which includes (1) agile development and (2) agile deployment
- The real issues were (1) the project took too long to deploy, and (2) automated deployment was not achieved
- Success is due to the following: (1) Other departments want small features in small increments. (2) More simplicity means more understanding of the problem’s root cause.
1.5.2 Action
- Give the “Updated API Manual” to other departments to try by every Thursday
- Simplicity is a good result of the Highest Standards because we (1) performed a “DIVE DEEP investigation” and (2) understood the root cause of the problem
1.5.3 AWS DevOps
- The development engineer commits the code via GitHub Push.
- GitHub Actions triggers the workflow; the IAMROLE_GITHUB_ARN role authorizes access to AWS resources.
- GitHub Actions triggers AWS CodeDeploy.
- AWS CodeDeploy triggers a deployment to the Amazon EC2 instances.
- AWS CodeDeploy pulls the GitHub resources and deploys them to the Amazon EC2 instances.
In the next chapter, I’ll share (1) how to build the AWS DevOps pipeline, and (2) the estimated cost of the AWS cloud services. I hope we can all grow together in the AWS community.