Making some progress here, I think, though in the process I've discovered an error in the httpd logs that seems to occur when paging through images (e.g. 'next'):

[Fri Mar 07 13:59:09 2014] [info] [client] (32)Broken pipe: core_output_filter: writing data to the network
[Fri Mar 07 13:59:09 2014] [error] [client] mod_wsgi (pid=30389): Exception occurred processing WSGI script '/opt/chronam/conf/chronam.wsgi'.
[Fri Mar 07 13:59:09 2014] [error] [client] IOError: failed to write data

I haven't been able to track down where it is coming from or what it is trying to write to. There is no end-user indication of the problem. Any idea what this is from?

On Mar 7, 2014, at 10:32 AM, Brunton, David wrote:

There are a few scenarios where this might happen. If your database isn't configured (in my.cnf for MySQL) with enough memory allocated to load the whole index into memory, that will be the bottleneck. You'll know because your slow query log will fill up fast :) If you're not using a slow query log, that is a good place to start.
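For what it's worth, a minimal my.cnf sketch of the two things mentioned above (the buffer size and the slow query log). The values are placeholders to size against your own hardware, not recommendations from this thread, and the slow-log directive names assume MySQL 5.1+:

```ini
# my.cnf -- illustrative values only, tune to your RAM and dataset
[mysqld]
# Let InnoDB keep the working set (indexes + hot rows) in memory
innodb_buffer_pool_size = 8G

# Log anything slower than 2 seconds so bottlenecks show up
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time     = 2
```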
This can also happen if Solr doesn't have enough memory. That configuration is a little bit unusual, because it's in the Jetty startup file, on Jetty launch:
Specifically, the JAVA_OPTIONS line. -Xms and -Xmx are both set to 2g by default, I think, but if you either don't have that much memory or set it substantially lower than that, Solr can start paging memory.
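For reference, the line in question usually looks something like the following in the Jetty startup/defaults file (the exact path varies by install, and the 2g figures are just the defaults mentioned above):

```shell
# Jetty startup configuration -- illustrative, path and values vary by install.
# -Xms/-Xmx pin the JVM heap so Solr's caches stay in RAM instead of paging.
JAVA_OPTIONS="-Xms2g -Xmx2g -Djava.awt.headless=true"
```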

Last, but not least, rendering the tiles on the fly is a big memory and CPU hog. The big thing we’ve done on our side to help with that is to put a big, beefy cache in front of the images that keeps the rendered images around for as long as possible. I am aware of different configurations people have used for this, using Varnish, NGINX, and Apache’s mod_cache. Any of them should help take the load off of your web server a bit.
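As one illustration of the caching approach, here is a sketch using Apache's disk cache (mod_disk_cache on Apache 2.2, mod_cache_disk on 2.4). The URL prefix, cache path, and TTL are placeholders, not values from this thread; Varnish or NGINX equivalents would follow the same idea of keeping rendered tiles around:

```apache
# Sketch: cache rendered image/tile responses on disk.
# /images/ and the TTL are placeholders -- adjust to your URL layout.
<IfModule mod_disk_cache.c>
    CacheEnable disk /images/
    CacheRoot /var/cache/apache2/disk_cache
    # Keep a rendered tile for up to a day once generated
    CacheDefaultExpire 86400
    CacheIgnoreNoLastMod On
</IfModule>
```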
Do any of these ring a bell?
From: Data, API, website, and code of the Chronicling America website [mailto:[log in to unmask]] On Behalf Of Sara Amato
Sent: Friday, March 07, 2014 12:44 PM
To: [log in to unmask]
Subject: Re: High CPU usage
I'm seeing the same thing, about 85% CPU usage, and images (jp2, ~2 MB each) are very slow to load and resolve (e.g. 5-6 seconds on the page view). We did up the memory, which was admittedly low, but that seemed to do little to improve the image display time.
I'd love to see some suggestions on how to improve this, and also some minimum hardware requirements.


On Tue, Feb 25, 2014 at 6:28 AM, Chuck Henry <[log in to unmask]> wrote:
Hi everyone,

I'm running Chronam at  So far so good. However, we're seeing a
pretty heavy load on the system. We're running Chronam on an Ubuntu 12.04 LTS VM with 4 CPUs and
16 GB of RAM. Now that the site is in production we're seeing 85% CPU utilization all the time. The load
average runs around 20 during the work day. Google Analytics says we have around 4,000 visitors a
day with about 35,000 page views. We have about 2.5 million pages at the moment.

Looking at 'top' we have a bunch (10 or more) of apache2 processes running anywhere from 100M to
800M each, using 15 to 20% CPU. Occasionally I've seen them peak at 1.2g with 32% CPU. On the
system side I've already unloaded unused apache2 mods, switched from mpm_prefork to
mpm_worker, and tweaked the associated settings. It's helped a bit.
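For comparison, a worker MPM block on Apache 2.2 (which Ubuntu 12.04 ships) looks roughly like this. These numbers are illustrative starting points, not the settings actually used here:

```apache
# Apache 2.2 mpm_worker sketch -- placeholder values, not the poster's config.
<IfModule mpm_worker_module>
    StartServers          4
    MinSpareThreads      25
    MaxSpareThreads      75
    ThreadsPerChild      25
    # MaxClients caps total concurrent request threads
    MaxClients          200
    # Recycle workers periodically to bound per-process memory growth
    MaxRequestsPerChild 1000
</IfModule>
```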

So at this point I assume that Chronam is utilizing that memory and CPU. My questions are these:

Is this common for Chronam? Is this what others are seeing?

If not, is there something I can start looking at that might help reduce the CPU usage?

I do have the option of increasing CPU cores available to the VM but that's a last resort. I'd like to
make the application as slim as possible before going that route.

Thanks in advance for any assistance you can provide!

-Chuck Henry