All hops

Introduction

In this post, we will walk through the infrastructure components we use at Blend to secure incoming requests—a day in the life of a request, if you will. There are a variety of commonly-used mechanisms to secure cloud computing environments, which often involve load balancers and special-purpose proxy servers. As a result, requests from a client to an application server typically make a number of intermediate network hops en route to their final destination.
We recently open-sourced Finprint, a data sharing protocol that aims to empower consumers with the ability to own and securely share their financial data. Finprint’s goal of bringing control and transparency to consumers was a natural fit for the decentralized nature of blockchain platforms. We built our first implementation of the protocol on Ethereum in the Solidity smart contract language. Note: Finprint isn’t under active development, but we think the lessons we learned are still valuable for other developers evaluating blockchain platforms.
Security certifications are table stakes for Blend. Of course, this is also true for other organizations in critical infrastructure spaces like financial services, healthcare, and government contracting. Proof of a comprehensive security compliance program is often necessary to sell your product or services, and the audits that precede certification can be costly in terms of fees, time, and lost opportunities to improve other components of your security program.
At Blend, we make extensive use of Kubernetes on AWS to power our infrastructure. Kubernetes has many moving parts, and most of these components are swappable, allowing us to customize clusters to our needs. An important component of any cluster is the Container Network Interface (CNI), which handles the networking for all pods running on the cluster. Choosing the right CNI for each use case is critically important, and making changes once a cluster is serving production traffic can be painful.
At Blend, we have been pushing for Kubernetes adoption across all services for the last two years. Migrating our monolith from AWS ECS to a self-hosted Kubernetes cluster marked a major milestone. Migrating business-critical applications requires deliberate planning and, in many cases, major updates to deployment pipelines, system monitoring, testing, and infrastructure. This post will explore the migration strategies and lessons learned as we got the monolith up and running with zero downtime across deployments.
At Blend, we deal with highly sensitive consumer financial data. We use several data stores — Postgres, MongoDB, CockroachDB, and etcd — all of which need to be backed up. While MongoDB and Postgres give us prebuilt tools for encrypting backups, etcd and CockroachDB do not. Our standard practice is to encrypt these backups before storing them, which became more challenging as our backups grew.

Encrypting backups in memory

At the beginning, the backups were small, and we were able to use Vault’s transit secrets engine to encrypt them.
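To make the in-memory approach concrete, here is a minimal sketch of encrypting a backup via Vault’s transit secrets engine over its HTTP API. The key name `backup-key` and the local `VAULT_ADDR` are illustrative assumptions, not Blend’s actual configuration; the payload helper is plain base64 handling, since transit expects base64-encoded plaintext.

```python
import base64
import json
import urllib.request

VAULT_ADDR = "http://127.0.0.1:8200"  # assumption: local dev Vault server
KEY_NAME = "backup-key"               # assumption: illustrative transit key name


def build_encrypt_payload(data: bytes) -> dict:
    """Vault's transit engine expects the plaintext as base64."""
    return {"plaintext": base64.b64encode(data).decode("ascii")}


def encrypt_backup(data: bytes, token: str) -> str:
    """POST to transit/encrypt/<key>; returns a 'vault:v1:...' ciphertext string."""
    req = urllib.request.Request(
        f"{VAULT_ADDR}/v1/transit/encrypt/{KEY_NAME}",
        data=json.dumps(build_encrypt_payload(data)).encode(),
        headers={"X-Vault-Token": token},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"]["ciphertext"]
```

Note that the entire backup must be held in memory and base64-encoded before the request is sent, which is exactly why this approach stops scaling as backups grow.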
If you’re not familiar with Blend, think of us as a modern experience for getting a loan. We offer a guided, personalized front-end application that makes it easy for borrowers to connect their account data and more structured and secure for lenders to process it. We process loan applications for over 130 financial institutions, including Wells Fargo and US Bank. To make the borrowing experience seamless, we integrate behind the scenes with our customers’ in-house tech stacks and dozens of third-party vendors.
At Blend, we’re working to bring simplicity and transparency to consumer lending. In the last two years, the Blend engineering team has doubled from 50 engineers to more than 100. Unsurprisingly, our codebase has grown in size and complexity as well. As the team has grown, we’ve embraced the principle of distributed ownership. On the UI team, this means owning our own technical health, and, as of a few months ago, our own release.
At Blend, we offer a white-label consumer lending platform that streamlines the otherwise manual, paper-based, and generally painful borrowing process. One challenge inherent in our business model and industry is serving a diverse set of lenders with a single product — from small independents to the largest banks — each with different levels of comfort in accepting changes to the product. Some lenders want the latest functionality as soon as it’s available, while others prefer to test every user-facing change in our beta environment for a month or more before allowing it to be promoted to production.
Here at Blend, we recently shifted to a multitenant paradigm for our core application. That is to say, we moved from a paradigm in which a single instance of our app served traffic from a single customer to one in which a single instance can serve any number of them. Why didn’t we start that way? If you have a system where customers need to interact with each other, multitenancy is necessary from the start.
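One concrete consequence of the shift: a multitenant instance has to resolve which customer each request belongs to before touching any data. This is a minimal sketch of such a lookup, assuming tenants are identified by subdomain; the tenant registry and the `<tenant>.example.com` hostname scheme are illustrative, not Blend’s actual implementation.

```python
# Illustrative tenant registry: maps a subdomain to per-tenant settings.
TENANTS = {
    "acme-bank": {"db_schema": "acme_bank"},
    "first-cu": {"db_schema": "first_cu"},
}


def resolve_tenant(hostname: str) -> dict:
    """Resolve per-tenant config from a request hostname like 'acme-bank.example.com'."""
    subdomain = hostname.split(".", 1)[0]
    try:
        return TENANTS[subdomain]
    except KeyError:
        raise LookupError(f"unknown tenant: {subdomain!r}")
```

In a single-tenant deployment this lookup is implicit — one process, one customer — whereas multitenancy turns it into an explicit, per-request step that every data access must respect.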