The "other" template (e.g. the one we use in production at the Library of Congress) is here: https://github.com/LibraryOfCongress/chronam/tree/master/loc And the tabs, in particular, are in this file: https://github.com/LibraryOfCongress/chronam/blob/master/loc/templates/includes/tabs.html Be forewarned that we haven't spent much effort separating these out from the way we use them on the public website, so your mileage may vary :) Hope that helps! David. -----Original Message----- From: Data, API, website, and code of the Chronicling America website [mailto:[log in to unmask]] On Behalf Of Sara Amato Sent: Friday, March 07, 2014 12:58 PM To: [log in to unmask] Subject: Templates - search pages / advanced search tabs Is there a template available that includes the Search Pages / Advanced Search tabs? I'm not finding one in the default distribution, but do see it implemented on a few sites using chronam (e.g. http://1.usa.gov/1ea5FIh ). Before I try to reinvent the wheel I thought I should ask if this template is around somewhere! Thanks. ========================================================================Date: Fri, 7 Mar 2014 13:32:15 -0500 Reply-To: "Data, API, website, and code of the Chronicling America website" <[log in to unmask]> Sender: "Data, API, website, and code of the Chronicling America website" <[log in to unmask]> From: "Brunton, David" <[log in to unmask]> Subject: Re: High CPU usage In-Reply-To: <[log in to unmask]> Content-Type: multipart/alternative; boundary="_000_7CC4AB09C979C242B5E468C64182DD44D531F3FDLCXCLMB01LCDSLO_" MIME-Version: 1.0 --_000_7CC4AB09C979C242B5E468C64182DD44D531F3FDLCXCLMB01LCDSLO_ Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: quoted-printable There are a few scenarios where this might happen. If your database isn't configured (in my.cnf for MySQL) to have enough memory allocated that it can load the whole index into memory, that will be the bottleneck. 
You'll know because your slow query log will get filled up fast :) If you're not using a slow query log, that is a good place to start. This can also happen if Solr doesn't have enough memory. That configuration is a little bit weird, because it's in the Jetty startup file, on Jetty launch: https://github.com/LibraryOfCongress/chronam/blob/master/conf/jetty-redhat Specifically, the JAVA_OPTIONS line. -Xms and -Xmx are both set to 2g by default, I think, but if you either don't have that much memory or set it substantially lower than that, Solr can start paging memory. Last, but not least, rendering the tiles on the fly is a big memory and CPU hog. The big thing we've done on our side to help with that is to put a big, beefy cache in front of the images that keeps the rendered images around for as long as possible. I am aware of different configurations people have used for this, using Varnish, NGINX, and Apache's mod_cache. Any of them should help take the load off of your web server a bit. Do any of these ring a bell? From: Data, API, website, and code of the Chronicling America website [mailto:[log in to unmask]] On Behalf Of Sara Amato Sent: Friday, March 07, 2014 12:44 PM To: [log in to unmask] Subject: Re: High CPU usage I'm seeing the same thing, about 85% CPU usage, and images (jp2 , ~2 M each) are very slow to load and resolve (e.g. 5 - 6 seconds on the page view.) We did up the memory, which was admittedly low, but that seemed to do little to improve the image display time. I'd love to see some suggestions on how to improve this, and also some minimum hardware requirements. On Tue, Feb 25, 2014 at 6:28 AM, Chuck Henry <[log in to unmask]> wrote: Hi everyone, I'm running Chronam at http://nyshistoricnewspaper.org. So far so good. However, we're seeing a pretty heavy load on the system. We're running Chronam on an Ubuntu 12.04 LTS VM with 4 CPUs and 16 GB of ram. Now that the site is production we're seeing 85% CPU utilization all the time. 
The load average runs around 20 during the work day. Google Analytics says we have around 4,000 visitors a day with about 35,000 page views. We have about 2.5 million pages at the moment. Looking at 'top' we have a bunch (10 or more) of apache2 processes running anywhere from 100M to 800M each using 15 to 20% cpu. Occasionally I've seen them peak at 1.2g with 32% cpu. On the system side I've already unloaded unused apache2 mods, switched from mpm_prefork to mpm_worker and tweaked those associated settings. It's helped a bit. So at this point I assume that Chronam is utilizing that memory and cpu. My questions are this: Is this common for Chronam? Is this what others are seeing? If not, is there something I can start looking at that might help reduce the CPU usage? I do have the option of increasing CPU cores available to the VM but that's a last resort. I'd like to make the application as slim as possible before going that route. Thanks in advance for any assistance you can provide! -Chuck Henry --_000_7CC4AB09C979C242B5E468C64182DD44D531F3FDLCXCLMB01LCDSLO_ Content-Type: text/html; charset="us-ascii" Content-Transfer-Encoding: quoted-printable
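For reference, the heap settings David describes live on the JAVA_OPTIONS line of that Jetty startup script. A minimal sketch of the relevant fragment (the exact contents and surrounding options in your copy of conf/jetty-redhat may differ):

```
# Fragment of the Jetty startup configuration (sketch; see conf/jetty-redhat
# in the chronam repository for the real file).
# -Xms/-Xmx must fit comfortably in physical RAM alongside MySQL and Apache,
# or Solr will start paging.
JAVA_OPTIONS="-Xms2g -Xmx2g"
```

Setting -Xms equal to -Xmx avoids heap-resizing pauses; lowering both below the working set of the index is what triggers the paging behaviour described above.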

========================================================================
Date: Fri, 7 Mar 2014 11:44:06 -0800
From: Sara Amato <[log in to unmask]>
Subject: Re: High CPU usage

Thanks for the speedy response (and the template info). I'm poking around at this now - hopefully it will yield some good results (my suspicion is that the cache will be the answer ….) I wasn't aware of the slow query log, so that's a big help to know about.
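Since the slow query log came up: it is enabled in my.cnf. A sketch of the relevant settings (the option names are standard MySQL server options; the file path, threshold, and buffer size below are illustrative values, not ones taken from this thread):

```
# my.cnf sketch (illustrative values; tune for your hardware)
[mysqld]
# Log any statement slower than 1 second so memory/index problems surface:
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time     = 1

# Per David's note: size the InnoDB buffer pool so the working indexes
# fit in memory (the right value depends on your RAM and data size):
innodb_buffer_pool_size = 4G
```

If pages routinely show up in the slow log, the buffer pool is the first thing to grow.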


========================================================================
Date: Fri, 7 Mar 2014 14:08:57 -0800
From: Sara Amato <[log in to unmask]>
Subject: Re: High CPU usage

Making some progress here, I think, though in the process I've discovered an error in the httpd logs which seems to occur when paging through images (e.g. 'next'):

[Fri Mar 07 13:59:09 2014] [info] [client 158.104.5.48] (32)Broken pipe: core_output_filter: writing data to the network
[Fri Mar 07 13:59:09 2014] [error] [client 158.104.5.48] mod_wsgi (pid=30389): Exception occurred processing WSGI script '/opt/chronam/conf/chronam.wsgi'.
[Fri Mar 07 13:59:09 2014] [error] [client 158.104.5.48] IOError: failed to write data

I haven't been able to track down where it is coming from and what it is trying to write to. There is no end-user indication of this problem. Any idea what this is from?


========================================================================
Date: Wed, 19 Mar 2014 13:46:12 -0400
From: Chuck Henry <[log in to unmask]>
Subject: Re: High CPU usage

Sara,

We're having the same error showing up in the apache error logs.

[Sun Mar 16 08:46:51 2014] [error] [client 216.71.234.242] mod_wsgi (pid=18266): Exception occurred processing WSGI script '/opt/chronam/conf/chronam.wsgi'.
[Sun Mar 16 08:46:51 2014] [error] [client 216.71.234.242] IOError: failed to write data
[Sun Mar 16 08:46:52 2014] [error] [client 216.71.234.242] mod_wsgi (pid=18262): Exception occurred processing WSGI script '/opt/chronam/conf/chronam.wsgi'.

It occurs around every 2 minutes or so. The IP address changes, so I assume it's the IP of the visitor.

========================================================================
Date: Wed, 19 Mar 2014 15:45:00 -0400
From: Chris Adams <[log in to unmask]>
Subject: Re: High CPU usage

This particular error almost always means that the remote HTTP client closed the connection before the server delivered the entire response. This is fairly common on the public web with unreliable network connections, buggy browsers, etc., and usually doesn't indicate an error.

I usually recommend running a service like ChronAm behind a caching proxy like Varnish (http://varnish-cache.org/) or Traffic Server (http://trafficserver.apache.org/), which will soak up a large percentage of dodgy client behaviour and usually also reduces the total server load. In this scenario, the response is delivered entirely to the cache even if the client disconnects, and should they reconnect, the response can be delivered directly from the cache without the backend processing another request.

Chris
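To make the caching-proxy suggestion concrete, here is a minimal sketch of what a Varnish configuration (VCL, 3.x-era syntax) for this might look like. The backend port and the URL pattern for page images are assumptions you would adjust to your own deployment:

```
# default.vcl sketch (Varnish 3 syntax; port and URL pattern are assumptions)
backend chronam {
    .host = "127.0.0.1";
    .port = "8080";    # wherever Apache/mod_wsgi is actually listening
}

sub vcl_fetch {
    # Keep rendered newspaper-page images around as long as possible,
    # so tiles are not re-rendered for every visitor.
    if (req.url ~ "\.(jpg|png|gif)$") {
        set beresp.ttl = 7d;
        return (deliver);
    }
}
```

Varnish also completes the fetch from the backend even when the client disconnects early, which is the behaviour Chris describes above.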
========================================================================
Date: Sat, 29 Mar 2014 10:38:18 -0400
From: Paolo Mattiangeli <[log in to unmask]>
Subject: Installation problem on Ubuntu

Hello,

I followed the installation instructions to test the Chronam environment on Ubuntu. When I try to run

django-admin.py chronam_sync --skip-essays

I get the error message: [Errno 111] Connection refused

Is something wrong with the instructions, or am I missing something? Thank you
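[Errno 111] generally just means that nothing is accepting TCP connections on a port the command tries to reach — typically one of the backing services (the database or Solr) isn't running yet. A quick, hypothetical way to check from Python; the host/port pairs below are conventional defaults, not values taken from this thread:

```python
import socket

def is_listening(host, port, timeout=2.0):
    """Return True if something accepts TCP connections on host:port.

    A refused connection raises ConnectionRefusedError (errno 111),
    which we treat the same as any other failure to connect.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Conventional default ports (assumptions; adjust to your settings):
    for name, port in [("MySQL", 3306), ("Solr/Jetty", 8983)]:
        state = "listening" if is_listening("127.0.0.1", port) else "NOT listening"
        print("%s on port %d: %s" % (name, port, state))
```

If one of these reports NOT listening, start that service (and confirm the host/port in your chronam settings match) before re-running chronam_sync.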