
What Actually Makes a WordPress Site Fast: The Performance Stack Behind Every Page Load

Ian O'Reilly · 14 min read
Abstract layered architecture illustration with teal translucent shapes stacked on warm grey background

Most WordPress performance problems have nothing to do with your theme, your plugins, or how many images you uploaded last Tuesday. They start deeper. At the server level, where every page request either gets handled efficiently or queues up behind a dozen unnecessary database calls.

I spend my days monitoring server performance across our hosting infrastructure. The pattern is consistent: a business owner installs a caching plugin, runs a speed test, sees a marginal improvement, and assumes that is as fast as WordPress gets. It is not. Not even close.

The difference between a WordPress site that loads in 4 seconds and one that loads in under 1 second is rarely a plugin. It is the hosting stack underneath. Nginx, PHP-FPM, FastCGI page caching, Redis object caching. These four components, properly configured, handle the heavy lifting that no frontend plugin can replicate. This article explains each layer, what it does, why it matters, and what happens to your site when it is missing.

The Problem With Plugin-Only Performance

WordPress, by default, is not fast. Every page request triggers PHP execution, which queries a MySQL database, assembles HTML, and sends it back to the browser. On a busy WooCommerce site, a single page load might generate 200 or more database queries. Multiply that by 50 concurrent visitors and the database becomes the bottleneck.

The instinct is to install a caching plugin. Fair enough. Plugins like WP Super Cache or W3 Total Cache generate static HTML files and serve those instead of running PHP every time. That helps. But these plugins operate within WordPress itself. They still rely on WordPress loading, initialising, and deciding whether to serve a cached file or generate a fresh one.

That initialisation step alone can add 200 to 400 milliseconds to every request. Server-level caching operates below WordPress entirely. It intercepts the request before PHP even starts. The difference is not between a fast process and a slightly faster process. It is between running the process and skipping it altogether.

Three Layers of Caching (and Why You Need All of Them)

A properly built WordPress hosting stack uses three distinct caching layers. Each handles a different type of request, and each addresses a different bottleneck.

| Layer | Technology | What It Caches | Who Benefits |
|---|---|---|---|
| Page cache | Nginx FastCGI | Full HTML pages | Anonymous visitors, search engine crawlers |
| Object cache | Redis | Database query results | Logged-in users, WooCommerce shoppers, dynamic pages |
| Opcode cache | OPcache | Compiled PHP bytecode | Every request that touches PHP |

Miss one layer and you have a gap. A site with FastCGI page caching but no Redis will serve anonymous visitors quickly but grind to a halt when 30 people log in simultaneously. A site with Redis but no page caching still runs PHP for every anonymous visit, burning CPU cycles unnecessarily.

The three layers are not interchangeable. They solve different problems at different points in the request lifecycle.

FastCGI Page Caching: Bypassing PHP Entirely

Nginx FastCGI caching is the single most impactful performance optimisation available to a WordPress site. When a visitor requests a page, Nginx checks whether it already has a cached copy of that page's HTML. If it does, it serves that HTML directly. PHP never starts. MySQL never gets queried. The response goes from server to browser in single-digit milliseconds.
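
A minimal Nginx FastCGI cache setup looks roughly like the fragment below. This is an illustrative sketch, not a production config: the cache path, zone name `WORDPRESS`, socket path, and timings are placeholder values, and stacks like WordOps ship their own tuned versions.

```nginx
# Define a cache zone: 100 MB of keys in shared memory, cached
# responses stored on disk, entries untouched for 60 min are evicted.
fastcgi_cache_path /var/run/nginx-cache levels=1:2
                   keys_zone=WORDPRESS:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

server {
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php-fpm.sock;  # illustrative socket path

        # Serve from cache when possible; keep 200/301 responses for 1 hour
        fastcgi_cache WORDPRESS;
        fastcgi_cache_valid 200 301 1h;

        # Expose HIT/MISS/BYPASS so you can verify caching in response headers
        add_header X-FastCGI-Cache $upstream_cache_status;
    }
}
```

The `X-FastCGI-Cache` header is the easiest way to confirm the cache is actually working: a second request to the same page should report HIT.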

Independent benchmarking by WPX.si found that FastCGI caching reduced average server response time from roughly 200 milliseconds to around 9 milliseconds [1]. That is a single-environment benchmark and your results will vary with server hardware, plugin load, and traffic patterns, but the order of magnitude is consistent with what we observe across our own infrastructure. The improvement is not incremental. It is the difference between PHP executing on every request and skipping PHP entirely.

For a Waterford manufacturer running a trade catalogue with 500 product pages, the difference is stark. Without FastCGI caching, each of those product pages generates fresh database queries every time a potential buyer visits. With it, Nginx hands over a pre-built HTML page before PHP has time to wake up. The customer sees the product. The server barely notices the traffic.

The alternative reality: without server-level page caching, your WordPress site rebuilds every page from scratch for every visitor. During a busy period, 100 visitors browsing your catalogue simultaneously means 100 separate PHP processes, 100 sets of database queries, all competing for the same server resources. By the time the 80th visitor's page finishes loading, the first 30 have already given up and left.

When FastCGI Cannot Help

FastCGI caching works brilliantly for pages that look the same to every visitor. Product listings, blog posts, service pages, contact information. It cannot cache pages that are personalised: shopping carts, account dashboards, checkout flows. These pages must be generated fresh because they contain user-specific data.
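
Those exclusions are usually expressed as a skip-cache variable in the Nginx config. A simplified sketch follows; the cookie and URL patterns reflect common WordPress and WooCommerce conventions, but any real config should be verified against your own plugin set.

```nginx
set $skip_cache 0;

# Never cache logged-in users or carts that contain items
if ($http_cookie ~* "wordpress_logged_in|woocommerce_items_in_cart") {
    set $skip_cache 1;
}

# Never cache personalised or admin URLs
if ($request_uri ~* "/cart/|/checkout/|/my-account/|/wp-admin/") {
    set $skip_cache 1;
}

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass unix:/run/php/php-fpm.sock;  # illustrative socket path
    fastcgi_cache WORDPRESS;                  # zone name is illustrative
    fastcgi_cache_bypass $skip_cache;  # serve these fresh, not from cache
    fastcgi_no_cache $skip_cache;      # and do not store the response either
}
```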

This is where the next layer becomes critical.

Redis Object Caching: Speed for the Uncacheable

Redis is an in-memory data store. Instead of querying MySQL every time WordPress needs a piece of data, Redis stores frequently requested query results in RAM. Memory access is orders of magnitude faster than disk-based database reads.

For a WordPress site, this means the database queries that power dynamic pages (WooCommerce product lookups, user session data, menu structures, widget content) get served from memory instead of disk. According to testing reported by Pressidium and other managed hosting providers, Redis object caching can reduce database queries by somewhere between 50% and 80% during normal traffic, with the higher end of that range showing up on sites with heavier dynamic content [2].

The real value of Redis shows during traffic spikes. When 200 visitors hit your site simultaneously, Redis acts as a buffer between those visitors and your database. Rather than each visitor triggering fresh queries, Redis serves cached results from memory. The database stays calm. The site stays fast.

The sync reality check: Redis caches query results with a time-to-live value. When you update a product price or publish a new blog post, there is a brief window, typically a few seconds, where Redis may serve the old data. Most setups handle this through intelligent cache invalidation, purging relevant keys when content changes. On rare occasions, a logged-in customer might see a stale product page for a few seconds after an update. A manual cache flush resolves it instantly. Worth knowing, even if it rarely causes real problems.
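
The cache-then-invalidate behaviour described above can be modelled in a few lines of Python. This is a toy sketch, not Redis itself: `TTLCache` stands in for the Redis key space, and `invalidate` for the purge WordPress triggers when content changes.

```python
import time

class TTLCache:
    """Toy object cache: values expire after ttl seconds, and keys
    can be purged explicitly when the underlying data changes."""

    def __init__(self, ttl=60.0):
        self.ttl = ttl
        self._store = {}  # key -> (value, expiry timestamp)

    def get(self, key, loader):
        """Return the cached value, or call loader() on a miss or expiry."""
        entry = self._store.get(key)
        now = time.monotonic()
        if entry is not None and entry[1] > now:
            return entry[0]          # hit: no "database" call
        value = loader()             # miss: hit the slow backend
        self._store[key] = (value, now + self.ttl)
        return value

    def invalidate(self, key):
        """Purge a key immediately, e.g. after a product price update."""
        self._store.pop(key, None)

# Simulated "database" read, counting how often it is actually hit
calls = 0
def load_price():
    global calls
    calls += 1
    return 19.99

cache = TTLCache(ttl=60)
cache.get("price:42", load_price)   # miss: queries the backend
cache.get("price:42", load_price)   # hit: served from memory
print(calls)                        # -> 1
cache.invalidate("price:42")        # content changed: purge the key
cache.get("price:42", load_price)   # miss again after invalidation
print(calls)                        # -> 2
```

The stale-data window exists because, without invalidation, a changed price would keep being served from `_store` until the TTL expired.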

Abstract illustration of data flowing through layered memory caches with teal connecting lines
Three caching layers work together to handle different types of WordPress requests

OPcache: Compiled PHP Bytecode Caching

The third caching layer operates inside PHP itself. Every time PHP executes a WordPress file, it parses the source code, compiles it into bytecode, and then runs that bytecode. On a typical WordPress installation with 30 plugins, that means parsing and compiling thousands of PHP files on every uncached request.

OPcache eliminates the parse-and-compile step by storing the compiled bytecode in shared memory. The first request compiles each file normally. Every subsequent request reads the pre-compiled bytecode directly from RAM. For a WordPress site loading hundreds of PHP files per request, this removes a significant chunk of CPU overhead.

The performance gain is not as dramatic as FastCGI or Redis because OPcache only helps requests that actually reach PHP. If FastCGI serves a cached page, PHP never runs and OPcache is irrelevant for that request. But for every dynamic page load, admin dashboard interaction, WooCommerce checkout, or AJAX call, OPcache shaves milliseconds off each PHP file inclusion. Across hundreds of includes per request, that adds up to meaningful savings. On a properly tuned server, OPcache is enabled by default and requires no WordPress plugin. It runs silently at the PHP level, benefiting every PHP application on the server.
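
OPcache is configured in php.ini rather than in WordPress. The values below are common starting points for a WordPress server, not universal recommendations; the right numbers depend on available RAM and how many PHP files your plugin set loads.

```ini
[opcache]
opcache.enable=1
opcache.memory_consumption=192      ; MB of shared memory for bytecode
opcache.interned_strings_buffer=16  ; MB for deduplicated strings
opcache.max_accelerated_files=10000 ; WordPress plus 30 plugins needs headroom
opcache.validate_timestamps=1       ; re-check files for changes on disk
opcache.revalidate_freq=60          ; ...but at most once every 60 seconds
```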

The Server Foundation: Nginx and PHP-FPM

Caching layers do the heavy lifting, but they sit on top of a web server and PHP processor. The choice of server software matters more than most people realise.

Nginx is an event-driven web server. It handles thousands of concurrent connections using a small, fixed number of worker processes. Each worker manages hundreds of connections simultaneously without spawning new processes. This architecture makes Nginx exceptionally efficient under load.

Compare that with Apache's traditional model, where each connection gets its own process or thread. Under heavy traffic, Apache spawns dozens or hundreds of processes, each consuming memory. A WordPress site on Apache with 200 concurrent visitors might use 2 to 4 GB of RAM just for web server processes. The same traffic on Nginx uses a fraction of that.

PHP-FPM (FastCGI Process Manager) sits between Nginx and WordPress. It manages a pool of PHP worker processes, ready to handle requests. When Nginx needs to execute PHP because the page is not in the FastCGI cache, it hands the request to PHP-FPM, which assigns it to an available worker.

The key advantage of PHP-FPM is process management. It can dynamically scale the number of PHP workers based on demand, start new ones when traffic increases, and terminate idle ones when traffic drops. This prevents the server from running out of memory during spikes while keeping resources available during quiet periods.
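
That scaling behaviour is set in the PHP-FPM pool configuration. The values below are illustrative; in practice `pm.max_children` is sized from available RAM divided by the average memory footprint of one PHP worker.

```ini
[www]
pm = dynamic              ; scale the worker pool with demand
pm.max_children = 20      ; hard ceiling: protects against memory exhaustion
pm.start_servers = 4      ; workers launched at startup
pm.min_spare_servers = 2  ; keep at least this many idle, ready for a spike
pm.max_spare_servers = 6  ; kill idle workers above this in quiet periods
pm.max_requests = 500     ; recycle workers periodically to contain leaks
```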

Together, Nginx and PHP-FPM form the foundation that makes FastCGI and Redis caching possible. Without them, you are running a caching layer on top of an inefficient base, optimising the wrong end of the problem.

Core Web Vitals: What Your Hosting Stack Controls

Google's Core Web Vitals measure three aspects of page experience: Largest Contentful Paint (LCP), Interaction to Next Paint (INP), and Cumulative Layout Shift (CLS). As Google's own documentation confirms, these metrics remain a ranking signal, and pages with good scores have an advantage when content quality is otherwise equal [3].

Here is what your hosting stack directly controls.

LCP (target: under 2.5 seconds) measures how quickly the largest visible element loads. Server response time, your TTFB, is the starting point for LCP. Every 100 milliseconds of TTFB improvement translates to roughly 100 milliseconds of LCP improvement. A hosting stack that delivers 50ms TTFB gives your page a 450ms head start over one delivering 500ms TTFB.

INP (target: under 200 milliseconds) measures responsiveness to user interactions. While largely influenced by frontend JavaScript, a slow server compounds the problem. If initial page resources load slowly, the browser is still parsing and executing scripts when the user tries to interact.

CLS (target: under 0.1) measures visual stability. This is primarily a frontend concern, but server-side rendering speed affects it indirectly. Faster server responses mean assets load in a more predictable order, reducing the layout shifts that frustrate users.

According to the Chrome User Experience Report (CrUX) data published via HTTP Archive, only around 50% of mobile origins pass all three Core Web Vitals thresholds [4]. More than half the web is failing on mobile. For businesses that depend on mobile traffic, a poor hosting stack is a competitive disadvantage before you even consider content quality. You can see the full picture in our breakdown of Core Web Vitals failure rates for Irish WordPress sites.

Abstract gauge shapes with teal accents suggesting performance measurement on warm grey background
Core Web Vitals scores start with server response time, not frontend optimisation

What This Looks Like at Web60

I will be honest about a mistake I made early in my operations career. I focused almost exclusively on uptime monitoring and neglected granular TTFB tracking across our infrastructure. Sites were staying online, passing basic health checks, but some were responding slowly because object caching had silently failed after a PHP update. The sites were not down. They were sluggish. It took customer complaints about slow WooCommerce searches to surface the issue. We now monitor TTFB per site alongside uptime, and we alert on response time degradation before it becomes visible to users.

That experience reinforced what the data consistently shows: performance is not a single metric. It is a stack of components, each with its own failure modes, each requiring its own monitoring.

Web60's Irish-hosted infrastructure stack runs every WordPress site on the full WordOps configuration: Nginx for request handling, PHP-FPM for process management, FastCGI for page caching, and Redis for object caching. Every site gets these by default at EUR60 per year. There is no performance tier, no add-on pricing, no checkbox to enable caching. The stack is the same whether you are running a five-page brochure site or a WooCommerce catalogue with thousands of products.

This is what a complete WordPress performance approach looks like at the infrastructure level. The caching layers are not optional extras. They are the foundation.

Where Managed Hosting Is Not the Answer

If you are running large-scale WordPress deployments with a dedicated operations team, custom Nginx configurations, and the expertise to tune PHP-FPM pool sizes, OPcache settings, and Redis eviction policies yourself, managed hosting may feel restrictive. At that scale, a bare cloud VPS on infrastructure you fully control genuinely gives you more flexibility. But that level of operational capability requires ongoing investment in staff, monitoring, and incident response that most businesses cannot justify. For the vast majority of independent retailers, agencies, and local firms, the performance stack should be someone else's operational responsibility.

Plugin Caching vs Stack Caching: A Direct Comparison

| Aspect | Plugin-only caching | Server-stack caching |
|---|---|---|
| Page cache | WordPress generates, then saves static file | Nginx serves cached HTML before PHP starts |
| Object cache | Limited or none (depends on plugin) | Redis serves queries from memory |
| TTFB improvement | Moderate (200-500ms typical) | Significant (under 50ms for cached pages) |
| Traffic spike handling | Struggles above 100 concurrent users | Handles hundreds of concurrent users efficiently |
| Configuration | User responsibility, plugin settings | Pre-configured at server level |

The difference is not incremental. It is architectural. A plugin-based approach optimises within WordPress. A stack-based approach optimises around WordPress, reducing what WordPress has to do in the first place.

Conclusion

WordPress performance is a stack problem, not a plugin problem. FastCGI page caching eliminates PHP execution for anonymous visitors. Redis object caching speeds up dynamic pages for logged-in users. Nginx and PHP-FPM provide the efficient foundation underneath both.

Remove any layer and you create a bottleneck that no amount of frontend optimisation can fully compensate for. The sites that consistently pass Core Web Vitals and handle traffic spikes without degradation are the ones running the full stack. The question worth asking is whether that stack is already built into your hosting, or whether you are trying to bolt it on after the fact.

Frequently Asked Questions

Does Redis object caching help if my site does not have many logged-in users?

Yes, but the benefit is smaller. Redis caches database queries that WordPress makes on every page load, including menu structures, widget data, and option values. Even for sites with mostly anonymous traffic, Redis reduces database load. The biggest gains show on sites with WooCommerce, membership areas, or forums where dynamic content is the norm.

Can FastCGI page caching cause problems with WooCommerce?

Properly configured, no. FastCGI caching should exclude dynamic pages like carts, checkouts, and account areas. These pages are served fresh through PHP and Redis. The risk comes from misconfiguration, caching a page that should not be cached. On a managed stack, these exclusions are configured at the server level so you do not have to manage them yourself.

Will a caching plugin conflict with server-level caching?

It can. Running a plugin-based page cache alongside Nginx FastCGI caching creates redundant layers that may serve stale content or interfere with cache invalidation. If your hosting stack includes FastCGI and Redis, you typically do not need a separate caching plugin. Some hosts recommend a lightweight companion plugin for cache purging, but the heavy lifting belongs at the server level.

How do I know if my hosting stack includes Redis and FastCGI?

Check with your hosting provider. Many shared hosting plans do not include Redis at all. FastCGI caching requires Nginx, which rules out Apache-only hosts. If your provider cannot tell you which caching layers are active on your site, that is a signal worth paying attention to.

Does the performance stack affect my site's security?

Indirectly, yes. A well-configured Nginx setup includes security headers, rate limiting, and protection against common attack patterns. PHP-FPM process isolation means one compromised site cannot easily affect others on the same server. The performance stack and the security posture of your hosting are closely linked at the infrastructure level.

Sources

Ian O'Reilly, Operations Director, Web60

Ian oversees Web60's hosting infrastructure and operations. Responsible for the uptime, security, and performance of every site on the platform, he writes about the operational reality of keeping Irish business websites fast, secure, and online around the clock.

