How we improved the performance of our vector maps

Ganesan Senthilvel

There are two primary types of spatial data—namely raster and vector. Raster data is made up of a grid of pixels; vector data is composed of vertices and paths. Those two types of data are reflected in mapping solutions, as shown below.

[Figure: vector and raster map formats]

Our design challenge was to build faster vector maps. In particular, we needed to overcome some stubborn performance problems. Let’s dive into how we did it.


In general, the objective of software performance engineering is to build predictable performance into systems. That is achieved by specifying and analyzing quantitative behavior from the very beginning of a system’s life, through deployment and evolution.

In this case, every vector tile request follows a two-step process:

  1. Validate the request and generate a signed URL.
  2. Respond with a 302 redirect so the client fetches the specified vector tiles from that URL.
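As a rough sketch, the two steps above can be expressed as a request handler. The origin URL, signing key, tile path layout, and function names below are illustrative assumptions, not the production implementation:

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

# Hypothetical values -- the real service has its own key management and origin.
SECRET_KEY = b"tile-service-secret"
TILE_ORIGIN = "https://tiles.example.com"

def make_signed_url(z: int, x: int, y: int, ttl_s: int = 300) -> str:
    """Step 1: validate the request, then generate a short-lived signed URL."""
    if not 0 <= z <= 22:
        raise ValueError("zoom level out of range")
    expires = int(time.time()) + ttl_s
    path = f"/v1/tiles/{z}/{x}/{y}.mvt"
    sig = hmac.new(SECRET_KEY, f"{path}{expires}".encode(), hashlib.sha256).hexdigest()
    return f"{TILE_ORIGIN}{path}?" + urlencode({"expires": expires, "sig": sig})

def handle_tile_request(z: int, x: int, y: int) -> tuple[int, dict]:
    """Step 2: answer with a 302 redirect pointing at the signed URL."""
    return 302, {"Location": make_signed_url(z, x, y)}

status, headers = handle_tile_request(14, 9648, 6263)
```

The key point is that the client makes two round trips per tile: one to the signing endpoint and one (after the 302) to the tile store itself.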

[Figure: vector tile request path flowchart]

When we started, the end-to-end vector map rendering process took roughly 800 to 1,300 milliseconds, which was too slow and needed improvement.
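Establishing a baseline like this means sampling latency over repeated requests. A minimal sketch of such a measurement harness, with a stubbed fetch standing in for the real end-to-end call:

```python
import statistics
import time

def time_request_ms(fn, *args) -> float:
    """Wall-clock latency of a single call, in milliseconds."""
    start = time.perf_counter()
    fn(*args)
    return (time.perf_counter() - start) * 1000.0

def fetch_tile_stub(z, x, y):
    """Stand-in for the real end-to-end tile fetch (network + infra + app)."""
    time.sleep(0.001)

samples = [time_request_ms(fetch_tile_stub, 14, 9648, 6263) for _ in range(20)]
p50 = statistics.median(samples)
p95 = statistics.quantiles(samples, n=20)[18]  # 19 cut points; index 18 = 95th percentile
```

Reporting percentiles (p50/p95) rather than a single average is what gives a range like 800–1,300 ms its meaning.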


By design, the potential latency fix might occur in one of three layers:

  1. Network layer
  2. AWS Infra layer
  3. Application layer

In our context, those layers are represented below:


  1. The Network layer corresponds to the OSI (Open Systems Interconnection) reference model over the system network.
  2. The AWS Infra layer covers latency from, to, and within AWS components.
  3. The Application layer is the developers’ own code that renders the vector maps.


As a first step, we targeted the Application layer, since it is the layer we control most directly. After refining a few caching mechanisms and our logger usage, however, the system showed no major performance gains. These low-hanging technical fixes didn’t move the needle.

While troubleshooting with a few instrumentation measures, we caught the big fish in the infrastructure layer.

[Figure: latency breakdown at the infrastructure layer]

As depicted above, the application layer consistently completed in the low tens of milliseconds across multiple experiment runs and methodologies. The hops between the network and the app containers, however, consumed the bulk of the request time.
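One way to attribute latency to individual segments like this is a simple timing wrapper around each stage of the request path. The segment names and sleep-based stand-ins below are hypothetical, sketching the kind of instrumentation involved rather than our exact tooling:

```python
import time
from collections import defaultdict
from contextlib import contextmanager

timings = defaultdict(list)  # segment name -> list of observed latencies (ms)

@contextmanager
def timed(segment: str):
    """Record the latency of one named segment of the request path."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[segment].append((time.perf_counter() - start) * 1000.0)

def handle_request():
    with timed("app:validate_and_sign"):
        time.sleep(0.002)   # stand-in for application-layer work
    with timed("infra:edge_to_container"):
        time.sleep(0.010)   # stand-in for the network/infra hop

for _ in range(5):
    handle_request()

# Average latency per segment shows where the budget actually goes.
breakdown = {name: sum(v) / len(v) for name, v in timings.items()}
```

With a breakdown like this in hand, it becomes obvious whether to spend effort on application code or on the infrastructure between the edge and the containers.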


After narrowing down the layer we wanted to improve, we decided to experiment with a more modern AWS component, Global Accelerator, as a replacement for Amazon CloudFront in our performance scenarios.

| Influencing factors | CloudFront | Global Accelerator |
| --- | --- | --- |
| Performance coverage | Improves performance for both static and dynamic HTTP(S) content | Improves performance for a wide range of applications over TCP or UDP |
| Static/dynamic IPs | Uses multiple sets of dynamically changing IP addresses | Provides a fixed set of 2 static IP addresses |
| Edge location usage | Uses edge locations to cache content | Uses edge locations to find an optimal path to the nearest endpoint |

AWS Global Accelerator delivered a huge end-to-end performance win within a few experimentation cycles. (You can read more about AWS Global Accelerator in this blog post.)

By design, this result is quite logical, given the three architectural influencing factors compared above.

Thus, our performance problem was resolved with a ~67% gain. Overall, this was a great learning experience that not only improved our vector mapping performance, but also taught us a lot about the value of experimentation.
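For context, the ~67% figure implies the following rough post-fix latency range; the arithmetic below assumes, purely for illustration, that the gain applies uniformly across the original range.

```python
# Implied post-optimization latency, assuming the ~67% reduction applies
# uniformly across the original 800-1300 ms baseline (an illustrative assumption).
baseline_ms = (800, 1300)
gain = 0.67
after_ms = tuple(round(t * (1 - gain)) for t in baseline_ms)
# after_ms -> (264, 429), i.e. roughly 260-430 ms end to end
```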
