How to Improve Server Response Times

Server response time, often measured as time to first byte (TTFB), is the time it takes the web server to respond to a browser’s request, whether that request is for the HTML document, CSS, JavaScript, font files, or any other asset. Reducing this time is a fundamental principle of improving page speed. It improves every facet of page speed, including the Core Web Vitals metrics Largest Contentful Paint (LCP) and First Input Delay (FID), as well as First Contentful Paint (FCP).

This article addresses techniques that improve server response times and help take your page speed to the highest levels.

Quickly jump to a topic:
Page caching
Enabling Keep-Alive
GZIP compression
Move the database
Content distribution network (CDN)

Page Caching

If your site runs on a content management system (like WordPress) or an e-commerce platform (like Magento), it’s dynamically generating pages. The process works like this:

  1. You visit portent.com
  2. The server receives your browser’s request
  3. The server fetches content from a database
  4. The server compiles that content into a template
  5. The server delivers the HTML result to you

Plenty of other kinds of sites generate pages dynamically, too; content management systems and e-commerce platforms are simply the most common.

This five-step process adds up. Even if your site is simple, dynamic page delivery means the server has to hit the database on every page load, which increases response time.

In this case, those extra steps slowed page load time by over one second.

[Image: lack of disk caching slowed the page’s time to first byte]

There are various solutions, but two are most common: page caching technologies and application-specific disk caching. Both follow the same premise: store the compiled HTML of each page so the server doesn’t have to rebuild it, leaving the only delay as the time required to grab the stored copy.

It changes the five-step process to this:

  1. You visit portent.com
  2. The server receives your browser’s request
  3. The server delivers the cached HTML result to you

The database query and template compilation steps disappear; the server simply hands back the page it already built.

Page Caching Technology

Varnish

Varnish is a reverse proxy HTTP accelerator developed for dynamic, content-heavy web sites. Varnish caches pages in virtual memory, leaving the operating system to decide what gets written to disk or stored in RAM.

That’s a triple-win:

  1. Page requests don’t require database queries
  2. Page requests sometimes come from memory
  3. Page requests hit a server optimized to deliver cached content
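
Configuration lives in VCL, Varnish’s own configuration language. Here’s a minimal sketch, assuming your existing web server has been moved to port 8080 on the same machine so Varnish can answer public traffic in front of it; the address and port are illustrative, not required values.

# /etc/varnish/default.vcl (illustrative)
vcl 4.0;

# Point Varnish at the "real" web server it should cache.
# By default, Varnish caches eligible responses for 120 seconds
# unless the backend's caching headers say otherwise.
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}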

FastCGI Cache

FastCGI Cache is an NGINX module for storing dynamic page content, with the ability to bypass the cache based on URL, cookies, request type, and other server variables. It’s an alternative to Varnish and accomplishes the same thing as far as page caching goes: it’s incredibly fast, and its lightweight configuration makes it an attractive solution.
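
Here’s a minimal sketch of what that looks like in an NGINX configuration, assuming a PHP application served through PHP-FPM; the cache path, zone name, socket path, cookie name, and one-hour lifetime are all illustrative.

# define where cached pages live and a shared-memory zone for cache keys
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=PAGECACHE:100m inactive=60m;

server {
    location ~ \.php$ {
        fastcgi_pass unix:/run/php/php-fpm.sock;
        include fastcgi_params;

        # serve repeat requests straight from the cache for an hour
        fastcgi_cache PAGECACHE;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 60m;

        # skip the cache for logged-in users (cookie name is illustrative)
        fastcgi_cache_bypass $cookie_logged_in;
        fastcgi_no_cache $cookie_logged_in;
    }
}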

Memcached

Memcached is an open-source and free distributed caching technology often used to speed up dynamic database-driven websites. Memcached utilizes a server’s RAM by caching data and objects which reduces the number of times an external data source must be queried.
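
The daemon itself takes very little setup; most of the work is pointing your application (or its object-cache plugin) at it. A minimal sketch of starting it, with an illustrative 256 MB memory cap on the default port:

# run memcached as a daemon with 256 MB of RAM on the default port (11211)
memcached -d -m 256 -p 11211 -u memcache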

Redis

Redis is an open-source and free distributed in-memory data structure store. Redis is often used to speed up dynamic database-driven websites and is very powerful.
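
When Redis is used purely as a cache, it’s worth capping its memory and letting it evict old keys rather than grow without bound. A minimal redis.conf sketch (the 256 MB limit is illustrative):

# redis.conf: cap memory and evict the least-recently-used keys first
maxmemory 256mb
maxmemory-policy allkeys-lru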

Application-Specific Disk Caching

All mature CMSes and e-commerce applications can cache dynamically generated pages on the server’s local disk. In this case, “cache” means “store the generated pages somewhere, so that the server doesn’t have to fetch content from the database and recompile the HTML.”

This solution can still require your server to load parts of the web application just to determine whether the request can be served from cache. That’s why we recommend a dedicated page caching technology, but disk caching is still worth using when that isn’t an option.

Enabling Keep-Alive

Keep-Alive is all about reducing server overhead.

The ‘Keep-Alive’ setting tells the server to maintain a connection between your web browser and the site server while you’re browsing, which reduces round trips. The server won’t have to open as many new connections, meaning less processor, memory, and network overhead.

On HTTPS-only sites Keep-Alive is very, very important. TLS connections require multiple ‘handshakes.’ Keep-Alive means fewer new connections and many fewer handshakes.
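
How you turn it on depends on the server. Here’s a minimal sketch for Apache; the request count and timeout are illustrative values, and NGINX enables keep-alive by default with a similar keepalive_timeout directive.

# httpd.conf / apache2.conf
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 5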

GZIP, aka HTTP Compression

Most web servers can compress files before sending them to the browser, which then uncompresses them. That’s called HTTP compression. It reduces the amount of pipe you use. It can be a huge page speed win, and while it does require getting your hands down into the server a bit, it’s simple enough that you can send a quick note to your webmaster and ask them to make the change.

GZIP is a compression utility for any stream of bytes, best suited to text-based data like HTML, CSS, JavaScript, fonts, and XML. Compressing content minimizes page load times, reduces the load on the server, and saves bandwidth. All modern browsers advertise GZIP support by default (via the Accept-Encoding request header), so it is important to make sure your web server has GZIP enabled. While GZIP compression can be handled in multiple ways, it is best done by the web server rather than the programming language.

Here are examples of how to implement GZIP compression for three of the most popular web servers: Apache, NGINX, and IIS.

Apache

For Apache, you’ll need a module called mod_deflate. Most Apache installations already include it. To check, type

apache2ctl -M

at the command line and look for deflate_module in the list of loaded modules. If it isn’t there, a quick web search will turn up plenty of installation tutorials.

Here is a mod_deflate example for GZIP compression:

# gzip compression
AddOutputFilterByType DEFLATE text/html text/plain text/xml application/xml application/xhtml+xml text/javascript text/css application/x-javascript application/javascript
AddOutputFilterByType DEFLATE application/rdf+xml application/rss+xml application/atom+xml application/x-font-ttf application/x-font-otf font/truetype font/opentype

NGINX

For NGINX, here is a similar example:

# gzip compression
gzip on;
gzip_static on;
gzip_types text/css text/javascript text/xml text/plain text/x-component application/javascript application/x-javascript application/json application/xml application/rss+xml font/truetype application/x-font-ttf font/opentype application/vnd.ms-fontobject image/svg+xml;

Here’s a handy tool for checking compression.

That’s it—a few lines of code and you’re compressing a whole laundry list of file types.

IIS

On IIS, it’s even easier. You can enable either ‘static’ or ‘dynamic’ compression in Internet Information Services by checking a box and/or editing a configuration file. Every version of IIS has a slightly different, though equally easy, way to do it; check your favorite search engine for your version.
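
If you’d rather edit the configuration file than click through the UI, a minimal web.config sketch looks something like this (recent IIS versions; dynamic compression may require the Dynamic Content Compression feature to be installed first):

<!-- web.config (illustrative) -->
<configuration>
  <system.webServer>
    <urlCompression doStaticCompression="true" doDynamicCompression="true" />
  </system.webServer>
</configuration>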

Move the Database

Here’s a more advanced upgrade you can make: move your database to a separate server that’s optimized for the database software.

Database access and web page delivery require lots of resources, and they both have to share those resources. A database may suck up CPU cycles that the web server needs during high traffic, while the web server may eat up memory the database needs. Both can eat up disk space.

Putting the database on one server and the web server software on another gives each its own dedicated set of resources. You can also give the database access to a faster CPU or more RAM, which it needs.
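
The application-side change is usually just a connection string. On a WordPress site, for example, the only edit is in wp-config.php (the hostname below is illustrative):

// wp-config.php: point the site at a dedicated database server instead of localhost
define( 'DB_HOST', 'db.internal.example.com' );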

Use a Content Distribution Network (CDN)

A CDN uses a distributed set of servers to deliver specific website files, usually the ‘static’ ones. It reduces file sizes and speeds up delivery, making better use of the pipe.

That speeds up your site in several different ways. A CDN:

  1. Delivers files from the server that’s geographically closest to the person visiting the site
  2. Compresses files using GZIP (see above)
  3. Sends cookieless files, reducing packet size
  4. Reduces requests and load on the origin server
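
Setup varies by provider, but the usual pattern is to serve assets from a hostname that resolves to the CDN’s edge network, with the CDN pulling from your origin server on a cache miss. A sketch of the DNS side, with illustrative hostnames:

; DNS zone file: send static-asset traffic to the CDN's edge network
static.example.com.   3600  IN  CNAME  example-account.cdnprovider.net.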

In Summary

Many of the topics discussed in this article are advanced and require the support of server administrators and/or backend developers. Most professional hosting plans, especially at the enterprise level, already have these best practices in place. Think of fast server response times as foundational: they not only directly affect metrics like LCP, FCP, and FID, they are also the first best practice put to work on every page request your site receives.

Andy Schaff

Development Architect

With more than a decade of experience, Andy is a highly-motivated developer who will take on any technology thrown at him. A proponent of well-formed and documented code, page speed techniques, and high attention to detail, Andy is the full-stack implementation specialist and development architect at Portent.

