Quick List: Things WordPress Plugins Developers Can Do to Help Their Plugins Scale

1) For heavy backend operations like generating reports, don’t build the report on every screen load via the init hook. Schedule a cron job and generate it periodically … if the administrator needs it right away, give them a refresh button.
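A minimal sketch of the idea, using WP-Cron (the hook name, callback, and option name here are hypothetical, not from any particular plugin):

```php
// Sketch: generate the report on a schedule instead of on init.
// 'myplugin_generate_report' and myplugin_build_report() are
// illustrative names.
register_activation_hook( __FILE__, function () {
	if ( ! wp_next_scheduled( 'myplugin_generate_report' ) ) {
		wp_schedule_event( time(), 'hourly', 'myplugin_generate_report' );
	}
} );

add_action( 'myplugin_generate_report', function () {
	$report = myplugin_build_report(); // the expensive part
	update_option( 'myplugin_cached_report', $report, false ); // no autoload
} );
```

The admin screen then just reads the stored option, and a “refresh” button can fire the same hook on demand. Remember to clear the event with wp_clear_scheduled_hook() on deactivation.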

2) Don’t autoload options unless they are really needed on every page load. And if you do autoload an option, make sure it isn’t too big; 10KB should be plenty. Keep in mind that if you create an option for the first time with the update_option() function, it will be autoloaded by default. So instead, initialize the option with add_option( 'myoption', 'myvalue', '', 'no' ); to disable autoloading (the fourth parameter controls autoload; the third is deprecated and left empty).
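A quick sketch of both ways to keep an option out of the autoload set (option names here are illustrative):

```php
// Create the option non-autoloaded from the start. The fourth
// parameter is autoload; the third is deprecated and left empty.
add_option( 'myplugin_report_cache', $data, '', 'no' );

// Since WordPress 4.2, update_option() can also set autoload
// explicitly via its third parameter.
update_option( 'myplugin_report_cache', $data, false );
```

Note that passing the autoload flag to update_option() only reliably takes effect when the option doesn’t already exist, so add_option() at install time is the safer habit.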

3) Don’t use wp_postmeta fields for numeric calculations. WordPress strongly encourages developers to work within the custom post type API, which is good, but that doesn’t mean the Metadata API is equally good for all data types. For instance, with a post type like “order”, where each record has fields like ‘subtotal’, ‘tax’, ‘discount’, and ‘grand_total’ that will be summed, counted, multiplied, and so on, go ahead and use a custom table with a foreign key on post_id.
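As a sketch of that approach (table and column names here are illustrative), you can create the table with dbDelta() on activation and then aggregate with a single query:

```php
// Sketch: a custom totals table keyed to the post, created on
// activation. Note dbDelta()'s quirk of requiring two spaces
// after "PRIMARY KEY".
register_activation_hook( __FILE__, function () {
	global $wpdb;
	require_once ABSPATH . 'wp-admin/includes/upgrade.php';
	dbDelta( "CREATE TABLE {$wpdb->prefix}order_totals (
		post_id BIGINT UNSIGNED NOT NULL,
		subtotal DECIMAL(10,2) NOT NULL DEFAULT 0,
		tax DECIMAL(10,2) NOT NULL DEFAULT 0,
		discount DECIMAL(10,2) NOT NULL DEFAULT 0,
		grand_total DECIMAL(10,2) NOT NULL DEFAULT 0,
		PRIMARY KEY  (post_id)
	) {$wpdb->get_charset_collate()};" );
} );

// Later, one indexed query replaces pulling and summing
// thousands of serialized postmeta rows in PHP:
global $wpdb;
$revenue = $wpdb->get_var( "SELECT SUM(grand_total) FROM {$wpdb->prefix}order_totals" );
```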

Use the WordPress Heartbeat API

A new WordPress performance bottleneck is looming: admin-ajax.php.

More and more plugins rely on admin-ajax.php and the wp_ajax_ hook to deliver their content and functionality, making WordPress more dynamic than ever. The problem is that if you have multiple plugins each making their own wp_ajax_ calls without concern for what the others do, you end up with four or five server requests being required to render one page. This is a nightmare for server admins, especially since wp_ajax_ is often implemented as a POST request, which allows it to bypass caching from Cloudflare or Varnish or what have you.

The solution is not to abandon wp_ajax_ but to come up with a better way to manage it. I really like what the Heartbeat API is doing. Instead of each client plugin managing its own request, you can tie into an existing “heartbeat” to get the info you need. I would recommend all plugin and theme developers use the Heartbeat API approach if at all possible, at least until wp_ajax_ gets streamlined.
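On the server side, tying into the heartbeat is just a filter. A minimal sketch (the 'my_plugin_status' key and the payload are hypothetical; the client enqueues the key on the JavaScript 'heartbeat-send' event and reads the response on 'heartbeat-tick'):

```php
// Sketch: piggyback on the shared heartbeat request instead of
// registering a separate wp_ajax_ endpoint.
add_filter( 'heartbeat_received', function ( $response, $data ) {
	if ( ! empty( $data['my_plugin_status'] ) ) {
		// Attach this plugin's payload to the shared heartbeat
		// response; other plugins can add their own keys too.
		$response['my_plugin_status'] = array( 'unread' => 3 );
	}
	return $response;
}, 10, 2 );
```

One heartbeat round trip now serves every plugin that hooks in, instead of each one firing its own POST to admin-ajax.php.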

Should I Use WordPress Multisite? Why Not?

WordPress Multisite (a.k.a. WPMU) has been alluring ever since it was introduced. Sweet! Now I can start my own blog network like WordPress.com; next stop, global domination!

Ah, were it that easy I would be an enormous fan. But global domination doesn’t come without hard work, unfortunately, and if you’re not ready for that, WordPress Multisite can be a curse rather than a blessing.

First let’s establish what is awesome about WordPress Multisite and when you should use it. As a developer you always get talked into building sites for friends and family. To save myself time I have a multisite install I use to host these charity sites. It makes it super convenient for me. Less maintenance. Less setup time. Add the WordPress Domain Mapping plugin and I’m rockin’. WordPress MU is great for this.

It’s also great if you really do need a “network” of blogs. In some cases this is exactly your goal and WP Multisite is the only tool that’ll get you there. Great! Use it!

But it’s the improper use that irks me. Too often a development agency sees it as a shortcut to hosting their client sites or, worse, someone who is not a server admin thinks they will make a quick buck by throwing up a multisite on someone else’s servers and selling hosting.

So before using multisite there are a few things to know about it:

  1. It can be a bitch to scale. Each blog instance on your multisite creates its own set of 11 tables, which is no big deal for 50 sites. But once you hit 100–200 sites you’re going to start getting some headaches on shared hosting, and just 250–500 sites is enough to crater an entire VPS.
  2. Maintenance Costs Grow Exponentially. The cost to maintain your first 50 WPMU sites is relatively cheap; you definitely make some money up front. But the bloating database starts getting expensive to maintain when it needs its own database server, and even more when it has to be split across several. Moreover, development slows down since you spend more time on performance issues; that cross-site feed that worked fine for 50 sites is really dragging at 250. Also, try creating a dev environment for a network; it’s not as easy as you think.
  3. Bugs Get More Dangerous. The cost of a mistake also grows. Now you have all your customers connected to one WordPress Multisite code base and database. But your growing customer base requires you to bring on some new developers. And guess what? One of them isn’t that great and commits some bad PHP to the code base. Now all of your customers are pissed, instead of just the one he/she was working on.

WordPress MU is appealing and sexy. It’s like magic. No more updating 100 installs manually, one and done, Woot! But it’s really just deferring costs until later. If you’re prepared for them, maybe it’s still worth the trade off, but in most cases it’s not. There are plenty of other services that will manage your code upgrades for you without having to use WordPress MU and that’s really the main benefit.


Obfuscate Your Code at Your Own Risk

Looks like ZippyKid no longer supports WishList Member. I think it raises a good point about obfuscating code. Hiding your code behind some sort of encoding like Base64 may prevent your code from being ripped off, but it also prevents it from being improved and supported. If WLM made its code transparent, it could share the burden of support and innovation with other services and developers: ZippyKid, WP Engine, WPMU, WP.com, and on and on. But the obfuscation instead guarantees this vast community of resources can be of no benefit to WLM at all. I hope it’s worth it.

List your MySQL Tables According to Size or Row Count

If you’ve ever debugged a large Multisite installation that’s having MySQL memory issues, you know how maddening it can be. Sometimes you can turn on slow query logging and find the source of the issue. But sometimes it’s not just about slow queries; it’s about the sheer size of the tables and the number of queries being run against them. One trick is to generate a list of all the tables on the site and sort them by size. This will help you narrow down trouble spots.

SELECT table_name, table_rows, data_length FROM information_schema.TABLES WHERE table_schema = 'DBNAME' AND table_rows > 0 ORDER BY data_length DESC LIMIT 0,50;

This will produce a list of tables sorted by size on disk. You can adjust the sort order and limit based on your needs. This query isn’t all that helpful on small installs, but in an installation with 500+ tables it can help you narrow down the issue quickly. WARNING: this query can take a long time to run depending on the number of databases on your server and the number of tables in them.


Function: get_template_part_cached()

In my WordCamp Chicago presentation I had a broken version of this function. Here’s a better version, though we could probably still improve even this:

function get_template_part_cached( $slug, $name = null, $key = '', $group = 'posts', $ttl = 3600 ) {
	if ( false === ( $output = wp_cache_get( $key, $group ) ) ) {
		// Cache miss: render the template part and capture its output.
		ob_start();
		get_template_part( $slug, $name );
		$output = ob_get_clean();
		wp_cache_set( $key, $output, $group, $ttl );
	}
	echo $output;
}
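A theme’s loop might then call it like this (the cache key naming convention here is just a suggestion):

```php
// Usage sketch: cache the rendered 'content-post' template part
// per post for an hour, keyed by post ID.
get_template_part_cached( 'content', 'post', 'content-post-' . get_the_ID(), 'posts', 3600 );
```

Keying by post ID means each post’s markup is cached independently; just remember to flush the key (or the whole group) when the post is updated.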

Thesis Cache(r) Beta – 2012-07-31

Through my work at WPEngine I’ve found that using the Thesis Theme can occasionally lead to some performance issues on higher-traffic sites. Nothing spectacular: there are a ton of themes out there that have some issues scaling, and Thesis is not special.

But what occurred to me about Thesis is that it really should be one of the more performant themes out there. The logic of the theme is organized hierarchically, and Thesis passes all HTML output through thesis_html_framework(), which makes it very easy to grab that output and cache it, adding an additional layer of caching using the WordPress Object Cache.
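To illustrate the idea only (this is not the actual Thesis Cacher code, and the cache group, key scheme, and TTL are all assumptions), wrapping that single entry point looks roughly like this:

```php
// Hypothetical sketch: buffer the page that thesis_html_framework()
// renders and store it in the object cache, keyed by request URL.
function my_cached_thesis_framework() {
	$key = 'thesis_page_' . md5( $_SERVER['REQUEST_URI'] );
	if ( false === ( $html = wp_cache_get( $key, 'thesis_pages' ) ) ) {
		ob_start();
		thesis_html_framework(); // Thesis renders the whole page here.
		$html = ob_get_clean();
		wp_cache_set( $key, $html, 'thesis_pages', 300 ); // 5-minute TTL
	}
	echo $html;
}
```

Because all output funnels through one function, a single wrapper like this covers the entire theme; themes that echo from dozens of places can’t be cached this cleanly.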

[download link=”/downloads/thesis-cacher.zip”]

Why would you need this if you are already using W3TC or a hosting company like WPEngine? Both give you various tools for page caching … why use another?

Well, first this plugin is not about replacing your existing caching, but instead making sure that you have adequate caching at the theme level. Even with system level caching there are going to be times where the cache is not served … either because a user is logged in, or because you are getting a lot of traffic and the cache is just missing, or because the cache was purposefully cleared.

ThesisCache(r) just gives you another layer to avoid consuming MySQL/PHP memory when you don’t have to. It gives you page-by-page, post-by-post control over whether to cache a post (or any post type). You can choose NOT to cache a page … or you can even choose NOT to cache the sidebars even though you want to cache the rest.

If you are not currently using ANY object caching, no worries: ThesisCache(r) will set up a file-based object cache for you. If you are using an object cache, it will simply leverage what you already use.

ThesisCache(r) isn’t in the Plugin Repo yet so here’s the download link. Please report any issues you have in the comments below. I’ll try to deal with them when I can.

[download link=”/downloads/thesis-cacher.zip”]