Tips for Hosting WordPress on Pantheon

Pantheon has long hosted Drupal sites, and their entry into the WordPress hosting marketplace is quite welcome. For the most part, hosting WordPress sites on Pantheon is a dream for developers. Their command line tools, Git-based deployments, and automatic dev, test, and live environments (with the ability to have multiple dev environments on some tiers) are powerful. If you can justify the expense (and they’re not cheap), I would encourage you to check them out.

First, the good stuff:

Git-powered dev deployments

This is great. Just add their Git repo as a remote (you can still host your code on GitHub or Bitbucket or anywhere else you like), and deploying to dev is as simple as:

git push pantheon-dev master

Command-line deployment to test and live

Pantheon has a CLI tool called Terminus that can be used to issue commands to Pantheon (including giving you access to remote WP-CLI usage).

You can do stuff like deploy from dev to test:

terminus site deploy --site=YOURSITE --env=test --from=dev --cc

Or from test to live:

terminus site deploy --site=YOURSITE --env=live --from=test

Clear out Redis:

terminus site redis clear --site=YOURSITE --env=YOURENV

Clear out Varnish:

terminus site clear-caches --site=YOURSITE --env=YOURENV

Run WP-CLI commands:

terminus wp option get blogname --site=YOURSITE --env=YOURENV

Keep dev and test databases & uploads fresh

When you’re developing in dev or testing code in test before it goes to live, you’ll want to make sure things work with the latest live data. On Pantheon, you can just go to Workflow > Clone, and easily clone the database and uploads (called “files” on Pantheon) from live to test or dev, complete with rewriting of URLs as appropriate in the database.

No caching plugins

You can get rid of Batcache, W3 Total Cache, or WP Super Cache. You don’t need them. Pantheon caches pages outside of WordPress using Varnish. It just works (including invalidating URLs when you publish new content). But what if you want some control? Well, that’s easy. Just issue standard HTTP cache control headers, and Varnish will obey.


function my_pantheon_varnish_caching() {
	if ( is_user_logged_in() ) {
		return; // Don't send caching headers for logged-in users
	}

	$age = false;

	// Home page: 30 minutes
	if ( is_home() && get_query_var( 'paged' ) < 2 ) {
		$age = 30;
	// Product pages: two hours
	} elseif ( function_exists( 'is_product' ) && is_product() ) {
		$age = 120;
	}

	if ( $age !== false ) {
		pantheon_varnish_max_age( $age );
	}
}

function pantheon_varnish_max_age( $minutes ) {
	$seconds = absint( $minutes ) * 60;
	header( 'Cache-Control: public, max-age=' . $seconds );
}

add_action( 'template_redirect', 'my_pantheon_varnish_caching' );

And now, some unclear stuff:

Special wp-config.php setup

Some things just aren’t very clear in Pantheon’s documentation, and using Redis for object caching is one of them. You’ll have to do a bit of work to set this up. First, you’ll want to download the wp-redis plugin and put its object-cache.php file into /wp-content/.

Update: apparently this next step is not needed!

Next, modify your wp-config.php with this:

// Redis
if ( isset( $_ENV['CACHE_HOST'] ) ) {
	$GLOBALS['redis_server'] = array(
		'host' => $_ENV['CACHE_HOST'],
		'port' => $_ENV['CACHE_PORT'],
		'auth' => $_ENV['CACHE_PASSWORD'],
	);
}
Boom. Redis is now automatically configured on all your environments!

Setting home and siteurl based on the HTTP Host header is also a nice trick for getting all your environments to play along, but beware yes-www and no-www issues. To avoid breaking WordPress’ redirection between those variants, massage the Host value so it isn’t locked in as the variant you don’t want:

// For non-www domains, remove leading www
$site_server = preg_replace( '#^www\.#', '', $_SERVER['HTTP_HOST'] );

// You're on your own for the yes-www version :-)

// Set URLs
define( 'WP_HOME', 'http://'. $site_server );
define( 'WP_SITEURL', 'http://'. $site_server );

So, those environment variables are pretty cool, huh? There are more:

// Database
define( 'DB_NAME', $_ENV['DB_NAME'] );
define( 'DB_USER', $_ENV['DB_USER'] );
define( 'DB_PASSWORD', $_ENV['DB_PASSWORD'] );
define( 'DB_HOST', $_ENV['DB_HOST'] . ':' . $_ENV['DB_PORT'] );

// Keys
define( 'AUTH_KEY', $_ENV['AUTH_KEY'] );
define( 'LOGGED_IN_KEY', $_ENV['LOGGED_IN_KEY'] );
define( 'NONCE_KEY', $_ENV['NONCE_KEY'] );

// Salts
define( 'AUTH_SALT', $_ENV['AUTH_SALT'] );
define( 'NONCE_SALT', $_ENV['NONCE_SALT'] );

That’s right — you don’t need to hardcode those values into your wp-config. Let Pantheon fill them in (appropriate for each environment) for you!

And now, some gotchas:

Lots of uploads = lots of problems

Pantheon has a distributed filesystem. This makes it trivial for them to scale your site up by adding more Linux containers. But their filesystem does not like directories with a lot of files. So, let’s consider the WordPress uploads folder. Usually this is partitioned by month. On Pantheon, if you start approaching 10,000 files in a directory, you’re going to have problems. Keep in mind that crops count towards this limit. So one upload with 9 crops is 10 files. 1000 uploads like that in a month and you’re in trouble. I would recommend splitting uploads by day instead, so the Pantheon filesystem isn’t strained. A plugin like this can help you do that.
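If you'd rather handle the partitioning yourself, here's a minimal sketch using WordPress's `upload_dir` filter (the function names below are hypothetical, not from any particular plugin):

```php
// Sketch: partition uploads by /YYYY/MM/DD instead of the default /YYYY/MM,
// so no single directory accumulates anywhere near 10,000 files.

function myprefix_daily_subdir( $timestamp ) {
	// Build a day-based subdirectory, e.g. "/2014/05/17".
	return gmdate( '/Y/m/d', $timestamp );
}

function myprefix_daily_upload_dir( $uploads ) {
	$subdir = myprefix_daily_subdir( time() );
	// Swap the default month-based subdir for the daily one in all three keys.
	$uploads['path']   = str_replace( $uploads['subdir'], $subdir, $uploads['path'] );
	$uploads['url']    = str_replace( $uploads['subdir'], $subdir, $uploads['url'] );
	$uploads['subdir'] = $subdir;
	return $uploads;
}

if ( function_exists( 'add_filter' ) ) {
	add_filter( 'upload_dir', 'myprefix_daily_upload_dir' );
}
```

Note that changing this on an existing site only affects new uploads; old month-based directories stay where they are.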

Sometimes notices cause segfaults

I honestly don’t know what is going on here, but I’ve seen E_NOTICE errors cause PHP segfaults. Being segfaults, they produce no useful information in logs, and I’ve had to spend hours tracking down the code causing the issue. This happens reliably for given code paths, but I don’t have a reproducible example. It’s just weird. I have a ticket open with Pantheon about this. It’s something in their custom error handling. Until they get this fixed, I suggest doing something like this, in the first line of wp-config.php:

// Disable Pantheon's error handler, which causes segfaults
function disable_pantheon_error_handler() {
	// Intentionally does nothing
}

if ( isset( $_ENV['PANTHEON_ENVIRONMENT'] ) ) {
	set_error_handler( 'disable_pantheon_error_handler' );
}
This just sets a low-level error handler that stops errors from reaching Pantheon's custom handler, where the trouble likely lies. You can still use something like Debug Bar to show errors, or you could modify that blank error handler to write out to an error log file.
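If you want the log-file variant, a sketch might look like this (the function name is hypothetical):

```php
// Variant sketch: log errors to PHP's error log instead of discarding
// them. Returning true stops PHP's default handler from also running.
function myprefix_logging_error_handler( $errno, $errstr, $errfile, $errline ) {
	error_log( sprintf( 'PHP error [%d]: %s in %s on line %d', $errno, $errstr, $errfile, $errline ) );
	return true;
}

if ( isset( $_ENV['PANTHEON_ENVIRONMENT'] ) ) {
	set_error_handler( 'myprefix_logging_error_handler' );
}
```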

Have your own tips?

Do you have any tips for hosting WordPress on Pantheon? Let me know in the comments!

Introducing Cache Buddy: a companion for your WordPress page caching solution

WordPress is, by default, completely dynamic. On every page load, a bunch of “work” happens. Cookies are read. A database is queried. Content is transformed. All of this makes WordPress very powerful and flexible. But for sites that get a lot of traffic and mostly just need to crank out the same pages for everyone, this dynamic nature can become a challenge.

The common solution to this is to layer a page cache on top of WordPress. Batcache, W3 Total Cache, and WP Super Cache are examples of page caches built as WordPress plugins. Varnish, Nginx fastcgi caching, and CDNs like Akamai or Cloudflare are examples of page caching that happens outside of the WordPress layer. They take the HTML that WordPress generates for a given URL and store it for later, so that when people request that URL in the future, they can just get the cached version, for little or no work on WordPress’ part.

But these page caching solutions all share the same downside: they can’t cache pages for logged-in WordPress users or users with WordPress comment cookies. Why not? Well, because WordPress looks at these cookies and alters the page based on them. A logged in user will see the WordPress toolbar at the top, which is customized to them. Users with more privileges might see “edit” links next to content that they can edit. And returning commenters will see their name, e-mail, and URL helpfully filled in to comment forms. All these things change the output of the page, such that it wouldn’t be worth it for a page cache to hold on to that page — it would only be of use to the individual visitor who triggered it. So all of these page caching solutions have rules that make them “skip” the page cache if a user has a WordPress comment cookie, or a WordPress user account cookie (and also a post password cookie, though this is an infrequently used feature). If a site has an active commenting community or has open registration (or required registration), this means that a much smaller percentage of page views can be cache hits. Instead, they are the dreaded cache miss, and they fall back to having WordPress generate a dynamic page.

The difference between a cache miss and a cache hit is not small. A cache hit takes minimal effort for the server, and can be delivered to the user much faster. It can be the difference between 1 second and 0.002 seconds. Five hundred times slower. Dynamic views keep the server connection open for longer, and take up CPU cycles. This can snowball under heavy load. Pages start taking longer, and because they start taking longer, less CPU is available. Eventually they can time out, or the server can run out of connections. Not good. You want cache hits, during a situation like this, but if the traffic isn’t anonymous (non-comment-cookie, non-logged-in-cookie), the available caching solutions just give up.

I’ve been solving this issue for years with custom caching solutions that strip the customizations from the page, so that the cache can be configured to serve one static page to everyone. Now, I’ve moved these techniques into a plugin, and I’m calling it Cache Buddy.

Cache Buddy works by doing the following:

  1. Changes what paths logged-in cookies are set for (so they work in the WordPress backend, but don’t exist on the front of the site).
  2. Sets custom cookies with relevant information about the logged-in user, on the front of the site, making these cookies JavaScript-readable.
  3. Sets custom cookies for commenters (again, JavaScript-readable), and doesn’t set the normal WordPress comment cookies.
  4. Uses the information from these JavaScript cookies, plus some comment form magic, to recreate the comment form experience users would get from a dynamic page.

This means that you can log in to WordPress, and then go view a post’s comment form, and see “You are logged in as Mark. Log out?”. Or you can be a non-account-having commenter who has commented, and your information will be filled in. Or maybe the site requires registration, and you’re not signed in. You’ll see the normal prompt to sign in. But here’s the kicker: all of these pages are the same page, and will be cached by page caching solutions. The customizations are all done in JavaScript, using the custom (and unknown to WordPress-optimized page caches) cookies that Cache Buddy sets.

What about the toolbar?

Well, by default, Subscriber and Contributor users won’t see it, but it honestly isn’t very useful to them anyway. Authors, Editors, and Administrators (who should be a very small percentage of viewers) will still get dynamic page views like they do now, and they’ll see the toolbar.

What about BuddyPress?

Good luck. Some plugins customize the page so much that all views really do need to be dynamic. Object caching is your friend in these cases.

Is this for every site?

No. If you have a BuddyPress site or an e-commerce site, you may honestly need WordPress logged-in cookies available on the front of your site. But if you’re just running a blog/CMS site with a significant number of commenters and logged-in Subscribers, this plugin could massively speed up your site, because requests that had to always be dynamic before, can now be served from a page cache.

What about the “Meta” widget?

Not currently supported, but I’m hoping to add support for it.

What about other logged-in site customizations?

The user will appear to be an anonymous visitor. But you could recreate the customizations in JavaScript by reading the cookies that Cache Buddy sets.

Ask Mark Anything

People ask me a lot of questions. About WordPress and web development for sure, but also about other topics. I’ve decided to try a little experiment: a public way to ask me questions. Zach Holman from GitHub had the idea to use a GitHub issue tracker for this very purpose, and I think it’s a splendid one. A public issue tracker:

  • Allows for more in-depth discussions than Twitter (but you can still talk to me on Twitter for quick questions).
  • Is public (as opposed to e-mail).
  • Forces me to deal with questions.

Now, note that this doesn’t mean I want you to treat me like your personal Google-searcher or WordPress code grepper! But if you think there is a WordPress (or other) topic that I am uniquely qualified to address, just ask.

Don’t use template_redirect to load an alternative template file

template_redirect is a popular WordPress hook, for good reason. When it runs, WordPress has made its main query. All objects have been instantiated, but no output has been sent to the browser. It is your last stop to hook in and redirect the user somewhere else, and the best place to do so if you need full knowledge of the queried objects. But what it is not good for is loading an alternative template.

I see code like this a lot:

add_action( 'template_redirect', 'my_callback' );

function my_callback() {
  if ( some_condition() ) {
    include( SOME_PATH . '/some-custom-file.php' );
    exit;
  }
}
The problem with this code is that anything hooked in to template_redirect after this code isn’t going to run! This can break sites and lead to very odd bugs. If you want to load an alternative template, there’s a filter hook for that: template_include.

add_filter( 'template_include', 'my_callback' );

function my_callback( $original_template ) {
  if ( some_condition() ) {
    return SOME_PATH . '/some-custom-file.php';
  } else {
    return $original_template;
  }
}
Same effect, but doesn’t interfere with other plugin or theme code! This distinction should be easy to remember:

  • template_redirect is for redirects.
  • template_include is for includes.

Six Apart Suspends Movable Type Open Source Project

Six Apart announced that they are suspending the free and open source version of Movable Type. Here’s what I had to say about them releasing the free and open source version of Movable Type, back in 2007.

Note that this also allows Six Apart at any time in the future to say “As of today, we are no longer releasing a GPL version of Movable Type.” And that would require that someone fork the code in order to proceed with development. WordPress can’t easily do that, as it is not owned by a single legal entity.

What a GPL’d Movable Type means for WordPress

When I wrote that, I honestly didn’t think it was going to happen. I was just saying that it was an option that was open to them. But here they’ve gone and done it. What a bizarre saga this has turned out to be. “Life is funny”, remarks Anil Dash, former Six Apart Chief Evangelist.

Will this start a conversation about copyright assignment on open source projects, or is this a non-event? If a project has a strong open source development community, an attempt to close it down should result in a fork of the project. As it happens, Movable Type Open Source was already forked. Several prominent Movable Type people created Open Melody back in 2009. Don’t bother clicking that link. As of this writing the site is non-operational. The Open Melody Twitter account just recorded its first activity in over two years. A retweet of this:

I guess we’ll find out!

Fragment Caching in WordPress

Fragment caching is useful for caching HTML snippets that are expensive to generate and exist in multiple places on your site. It’s like full page HTML caching, but more granular, and it speeds up dynamic views.

I’ve been using this fragment caching class for a few years now. I optimized it around ease of implementation. I wanted, as much as possible, to be able to identify a slow HTML-outputting block of code, and just wrap this code around it without having to refactor anything about the code inside.

Implementation is pretty easy, and you can reference the comment at the start of the code for that. The only thing to consider is that any variables that alter the output need to be built into the cache key. It should also be noted that this code assumes you have a persistent object cache backend.
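The general pattern looks like this, sketched with a hypothetical helper (not the actual class API): buffer the expensive output, and store it in the persistent object cache under a key that includes everything that affects the output.

```php
// Sketch of the fragment-caching pattern (helper name is hypothetical).
// Any variable that alters the output must be part of $key.
function myprefix_fragment_cache( $key, $ttl, $callback ) {
	$cached = function_exists( 'wp_cache_get' ) ? wp_cache_get( $key, 'fragments' ) : false;
	if ( false !== $cached ) {
		return $cached; // Cache hit: skip the expensive work entirely
	}
	// Cache miss: capture the generated HTML via output buffering
	ob_start();
	call_user_func( $callback );
	$html = ob_get_clean();
	if ( function_exists( 'wp_cache_set' ) ) {
		wp_cache_set( $key, $html, 'fragments', $ttl );
	}
	return $html;
}
```

The appeal of this shape is that you can wrap an existing echo-ing block of code in a closure without refactoring its internals.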

WordPress 3.6: shortcode_atts_{$shortcode} filter

Since WordPress 3.6 is in beta, I thought I’d use this nearly-abandoned blog (hey, been busy working on WordPress!) to talk about some of the cool stuff for developers. For the first installment, check out the new shortcode_atts_{$shortcode} filter. The shortcode_atts() function now accepts a third parameter — the name of the shortcode — which enables the running of this filter. All of the core shortcodes now pass this parameter.

This filter passes three things:

  1. $out — the output array of shortcode attributes
  2. $pairs — the array of accepted parameters and their defaults
  3. $atts — the input array of shortcode attributes

Let’s look at what we can do with this. For one, you can dynamically set or override shortcode values. You can also define new attributes and transpose them into accepted ones. Take the “gallery” shortcode, for example. What if there were a gallery of images that you reuse often? Instead of picking the images each time, you could have a plugin that gives that set of attachment IDs a shortcut name. Then you could write [gallery foo=my-gallery-name], which the plugin would convert to a list of IDs. Or you could enforce a certain number of gallery columns on the fly, letting someone set it theme-wide without having to go back and change their shortcodes.
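As a sketch of the named-gallery idea (the attribute name and attachment IDs below are made up), the filter makes this just a few lines:

```php
// Hypothetical example: let [gallery foo=my-gallery-name] expand to a
// predefined list of attachment IDs via the shortcode_atts_gallery filter.
function myprefix_named_galleries( $out, $pairs, $atts ) {
	$named = array(
		'my-gallery-name' => '11,22,33', // made-up attachment IDs
	);
	// If the input attributes name a known gallery, inject its IDs.
	if ( isset( $atts['foo'], $named[ $atts['foo'] ] ) ) {
		$out['ids'] = $named[ $atts['foo'] ];
	}
	return $out;
}

if ( function_exists( 'add_filter' ) ) {
	add_filter( 'shortcode_atts_gallery', 'myprefix_named_galleries', 10, 3 );
}
```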

What other uses can you think of?

Now, if you’re a plugin or theme author who provides their own shortcodes, you should immediately start providing this third parameter to your shortcode_atts() calls (since it is an extra parameter, you can do this without a WordPress version check). Maybe it’ll reduce the number of times people need to fork your code just to add an option to your shortcode!