I'm having trouble downloading a sample batch to use in the software for testing. It hangs up with the jp2 files.

wget --recursive --no-host-directories --cut-dirs 1 --include-directories /data/batches/batch_vi_affirmed_ver01 http://chroniclingamerica.loc.gov/data/batches/batch_vi_affirmed_ver01

This command downloads all the other files but stops and hangs at the jp2s. I also can't download them when I try in a web browser. Is there a reason for this? Is access being limited?

Thanks,
Mike

Mike Beccaria
Systems Librarian
Head of Digital Initiative
Paul Smith's College
518.327.6376
[log in to unmask]
Become a friend of Paul Smith's Library on Facebook today!

========================================================================
Date: Wed, 8 May 2013 16:16:07 -0400
From: "Brunton, David" <[log in to unmask]>
Subject: Re: jp2's not downloading?

Hi Mike,

We don't currently have any intentional access restrictions in place. I was able to download this file:

http://chroniclingamerica.loc.gov/data/batches/batch_vi_affirmed_ver01/data/sn96096625/00280762611/0001.jp2

from off-campus. What happens when you try (e.g., what error message or behavior do you see from the browser or script)?

Best,
David.

========================================================================
Date: Wed, 8 May 2013 16:08:45 -0400
From: Zachary Spalding <[log in to unmask]>
Subject: Re: jp2's not downloading?

I am experiencing the same problem myself; even a manual download from Firefox will not work.

Zachary Spalding
Systems Manager | Southeastern NY Library Resources Council
(p)845-883-9065x11 | (f)845-883-9483
[log in to unmask] | http://www.senylrc.org
Twitter: http://twitter.com/senylrc | Facebook: http://www.facebook.com/senylrc
Blog: http://www.senylrc.org/blog

========================================================================
Date: Wed, 8 May 2013 20:26:09 +0000
From: Michael Beccaria <[log in to unmask]>
Subject: Re: jp2's not downloading?

I get a perpetual "waiting for...[website]..." in my browser. I tried downloading on an Ubuntu Amazon EC2 server using wget and it hangs on the jp2 files. Everything else downloads very fast, then it sits. Here are the last few files; it is sitting at the jp2:

    --2013-05-08 20:21:37-- http://chroniclingamerica.loc.gov/data/batches/batch_vi_affirmed_ver01/data/sn96096625/00280762611/?C=M;O=A
    Reusing existing connection to chroniclingamerica.loc.gov:80.
    HTTP request sent, awaiting response... 200 OK
    Length: unspecified [text/html]
    Saving to: `batches/batch_vi_affirmed_ver01/data/sn96096625/00280762611/index.html?C=M;O=A'
    [ <=> ] 14,438 --.-K/s in 0.001s
    2013-05-08 20:21:37 (26.8 MB/s) - `batches/batch_vi_affirmed_ver01/data/sn96096625/00280762611/index.html?C=M;O=A' saved [14438]

    --2013-05-08 20:21:37-- http://chroniclingamerica.loc.gov/data/batches/batch_vi_affirmed_ver01/data/sn96096625/00280762611/?C=S;O=A
    Reusing existing connection to chroniclingamerica.loc.gov:80.
    HTTP request sent, awaiting response... 200 OK
    Length: unspecified [text/html]
    Saving to: `batches/batch_vi_affirmed_ver01/data/sn96096625/00280762611/index.html?C=S;O=A'
    [ <=> ] 14,438 --.-K/s in 0s
    2013-05-08 20:21:37 (40.1 MB/s) - `batches/batch_vi_affirmed_ver01/data/sn96096625/00280762611/index.html?C=S;O=A' saved [14438]

    --2013-05-08 20:21:37-- http://chroniclingamerica.loc.gov/data/batches/batch_vi_affirmed_ver01/data/sn96096625/00280762611/?C=D;O=A
    Reusing existing connection to chroniclingamerica.loc.gov:80.
    HTTP request sent, awaiting response... 200 OK
    Length: unspecified [text/html]
    Saving to: `batches/batch_vi_affirmed_ver01/data/sn96096625/00280762611/index.html?C=D;O=A'
    [ <=> ] 14,438 --.-K/s in 0s
    2013-05-08 20:21:37 (291 MB/s) - `batches/batch_vi_affirmed_ver01/data/sn96096625/00280762611/index.html?C=D;O=A' saved [14438]

    --2013-05-08 20:21:37-- http://chroniclingamerica.loc.gov/data/batches/batch_vi_affirmed_ver01/data/sn96096625/00280762611/0001.jp2
    Reusing existing connection to chroniclingamerica.loc.gov:80.
    HTTP request sent, awaiting response...

Mike Beccaria
Systems Librarian
Head of Digital Initiative
Paul Smith's College
518.327.6376
[log in to unmask]
Become a friend of Paul Smith's Library on Facebook today!

========================================================================
Date: Wed, 8 May 2013 20:33:13 +0000
From: "Trott Reeves, Louisa Jayne" <[log in to unmask]>
Subject: Re: jp2's not downloading?

I had problems downloading the jp2s from a browser a while ago. I had to change the default reader (for jp2s) to IrfanView. I don't know if this will help for the batch download, but it worked for the 'manual' download via Firefox.

Louisa
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Coordinator, Tennessee Newspaper Digitization Project
University of Tennessee, Hodges Library, Room 652
1015 Volunteer Blvd., Knoxville, TN 37996
865-974-6913
http://chroniclingamerica.loc.gov/
TN project info and updates available here
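An aside on the wget flags used in the command earlier in this thread: --no-host-directories drops the hostname directory and --cut-dirs 1 drops the first remote path component, which is why files land under batches/... locally. A rough Python sketch of that mapping (an approximation of wget's behavior, not its exact algorithm):

    from urllib.parse import urlparse

    def local_path(url, cut_dirs=1, no_host=True):
        """Approximate where `wget -nH --cut-dirs N` would save a URL."""
        parsed = urlparse(url)
        parts = [p for p in parsed.path.split("/") if p]
        trimmed = parts[cut_dirs:]              # drop the first N directory components
        prefix = [] if no_host else [parsed.netloc]  # -nH omits the hostname dir
        return "/".join(prefix + trimmed)

For the jp2 URL in the log above, this yields batches/batch_vi_affirmed_ver01/data/sn96096625/00280762611/0001.jp2, matching the save paths wget reports.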
========================================================================
Date: Wed, 8 May 2013 17:02:46 -0400
From: Chris Adams <[log in to unmask]>
Subject: Re: jp2's not downloading?

I can confirm the slowdown. HTTP HEAD requests are quite fast:

    curl --head http://chroniclingamerica.loc.gov/data/batches/batch_vi_affirmed_ver01/data/sn960980762611/0002.jp2

HTTP GET requests hang before the start of the response headers:

    $ nc chroniclingamerica.loc.gov 80
    GET /data/batches/batch_vi_affirmed_ver01/data/sn96096625/00280762611/0001.jp2 HTTP/1.1
    User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_3) AppleWebKit/537.31 (KHTML, like Gecko) Chrome/26.0.1410.65 Safari/537.31
    Host: chroniclingamerica.loc.gov
    Accept: */*

    (disconnect after many minutes)

Chris
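The nc test above can be scripted. A minimal Python sketch (timeout value arbitrary) that sends one raw HTTP request and reports whether any response byte arrives before a deadline, distinguishing a stalled response from a merely slow one:

    import socket

    def first_byte_within(host, path, method="GET", timeout=10.0, port=80):
        """Send a minimal HTTP/1.1 request and report whether any byte of
        the response arrives before the timeout expires."""
        request = (
            "%s %s HTTP/1.1\r\n"
            "Host: %s\r\n"
            "Connection: close\r\n\r\n" % (method, path, host)
        ).encode("ascii")
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(request)
            sock.settimeout(timeout)
            try:
                return bool(sock.recv(1))  # True as soon as headers start arriving
            except socket.timeout:
                return False               # stalled: no bytes before the deadline

Comparing first_byte_within(host, path, "HEAD") against "GET" on the same jp2 URL would reproduce the HEAD-fast/GET-hanging asymmetry described above.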
========================================================================
Date: Fri, 10 May 2013 08:59:35 -0400
From: "Brunton, David" <[log in to unmask]>
Subject: Re: jp2's not downloading?

Hi all,

I am also able to replicate the problem from an EC2 instance. When I downloaded successfully before on my LC handheld, I was apparently behind the right firewall or something :-\

We're working with our networking staff here to figure out what the problem is, and hope to have a resolution soon.

Best,
David.

========================================================================
Date: Fri, 10 May 2013 15:54:09 +0000
From: Michael Beccaria <[log in to unmask]>
Subject: Bug in language section

When loading a batch using the most recent software from Git on Ubuntu, I kept getting an error about "Language matching query does not exist" (see the paste below). When I added a row for "eng" in the MySQL database, I then got a Solr error saying the required field wasn't found.
So I think this is because language is a required field in Solr's schema.xml AND because there is no default "eng"/English row in the MySQL language table when you initially install the software. I got it working by adding an "eng" row in MySQL and making the language field optional in schema.xml. I don't know if this will cause problems down the road, but I thought I would mention that it is an issue. Hope that helps.

batch_loader.py, starting at line 428:

    for lang, text in lang_text.iteritems():
        try:
            language = models.Language.objects.get(Q(code=lang) | Q(lingvoj__iendswith=lang))
        except models.Language.DoesNotExist:
            # default to english as per requirement
            language = models.Language.objects.get(code='eng')
        ocr.language_texts.create(language=language, text=text)
    page.ocr = ocr

Here's the output from my batch load before I changed the Solr field to optional and added the "eng" row to MySQL:

    (ENV)ubuntu@ip-10-119-97-242:/opt/chronam/data$ django-admin.py load_batch /opt/chronam/data/batches/batch_vi_affirmed_ver01
    INFO:root:loading batch at /opt/chronam/data/batches/batch_vi_affirmed_ver01
    INFO:chronam.core.batch_loader:loading batch: batch_vi_affirmed_ver01
    INFO:rdflib:version: 3.4.0
    INFO:chronam.core.views.image:NativeImage backend '%s' not available.
    INFO:chronam.core.views.image:NativeImage backend '%s' not available.
    INFO:chronam.core.views.image:Using NativeImage backend 'graphicsmagick'
    INFO:chronam.core.batch_loader:Assigned page sequence: 1
    INFO:chronam.core.batch_loader:Saving page. issue date: 1886-07-17 00:00:00, page sequence: 1
    ERROR:chronam.core.batch_loader:unable to load batch: Language matching query does not exist.
    ERROR:chronam.core.batch_loader:Language matching query does not exist.
    Traceback (most recent call last):
      File "/opt/chronam/core/batch_loader.py", line 166, in load_batch
        issue = self._load_issue(mets_url)
      File "/opt/chronam/core/batch_loader.py", line 283, in _load_issue
        page = self._load_page(doc, page_div, issue)
      File "/opt/chronam/core/batch_loader.py", line 405, in _load_page
        self.process_ocr(page)
      File "/opt/chronam/core/batch_loader.py", line 433, in process_ocr
        language = models.Language.objects.get(code='eng')
      File "/opt/chronam/ENV/local/lib/python2.7/site-packages/django/db/models/manager.py", line 131, in get
        return self.get_query_set().get(*args, **kwargs)
      File "/opt/chronam/ENV/local/lib/python2.7/site-packages/django/db/models/query.py", line 366, in get
        % self.model._meta.object_name)
    DoesNotExist: Language matching query does not exist.
    WARNING:root:no OcrDump to delete for batch_vi_affirmed_ver01 (Library of Virginia; Richmond, VA)
    ERROR:chronam.core.management.commands.load_batch:unable to load batch: Language matching query does not exist.
    Traceback (most recent call last):
      File "/opt/chronam/core/management/commands/load_batch.py", line 39, in handle
        batch = loader.load_batch(batch_name)
      File "/opt/chronam/core/batch_loader.py", line 195, in load_batch
        raise BatchLoaderException(msg)
    BatchLoaderException: unable to load batch: Language matching query does not exist.
    Error: unable to load batch. check the load_batch log for clues

Mike Beccaria
Systems Librarian
Head of Digital Initiative
Paul Smith's College
518.327.6376
[log in to unmask]
Become a friend of Paul Smith's Library on Facebook today!
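The failure mode above can be reduced to a few lines of plain Python (the dict stands in for the Language table; the real code uses Django's ORM): the loader falls back to 'eng' when a language code isn't found, so on a database that was never seeded with an 'eng' row the fallback lookup itself raises, which is exactly the DoesNotExist in the traceback.

    class DoesNotExist(KeyError):
        """Stand-in for Django's Model.DoesNotExist."""

    def resolve_language(code, languages):
        """Mimic batch_loader.process_ocr: exact lookup first, then fall
        back to 'eng' -- which only works if an 'eng' row exists."""
        if code in languages:
            return languages[code]
        try:
            # default to english as per requirement (same as the loader)
            return languages["eng"]
        except KeyError:
            raise DoesNotExist("Language matching query does not exist.")

Seeding the equivalent of an "eng" row (however language data is loaded on a given install) makes the fallback safe; relaxing schema.xml also works but, as noted above, may have downstream effects.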
========================================================================
Date: Tue, 14 May 2013 13:23:45 +0000
From: Michael Beccaria <[log in to unmask]>
Subject: Templates and Static Files Questions

Can someone provide some insight on best practices for making or implementing an existing template/static files setup for NDNP? I'm pretty new to the staticfiles commands but have read the docs to get an idea of how it works. With that in mind, is it best to replace the /core/static files and run the collectstatic command, or should I add a STATICFILES_DIRS variable to the settings files? How is this best accomplished? As an example, is there an easy way to implement the loc template that exists in the download files? What is the process to do that?

Thanks,
Mike

Mike Beccaria
Systems Librarian
Head of Digital Initiative
Paul Smith's College
518.327.6376
[log in to unmask]

========================================================================
Date: Fri, 17 May 2013 10:23:16 -0400
From: "Brunton, David" <[log in to unmask]>
Subject: Re: Templates and Static Files Questions

Hi Michael,

I just pushed a change to the github repository that creates an example application called "example" at the same level as "core". It's a silly example that makes my face the logo of the application and changes a couple of the menu items.
It should give you an idea of how this is done. I also created a wiki page to document some of this work here:

https://github.com/LibraryOfCongress/chronam/wiki/Extending-ChronAm

I had a short conversation with some of the other developers about how to do this, but we'd really like to flesh it out in a way that works for you, so once you've tried it, please give us a shout about any obstacles you encounter. Also, if you find it doesn't work, you might comment on ticket #44:

https://github.com/LibraryOfCongress/chronam/issues/44

Hope it works for you.

Best,
David.
========================================================================
Date: Thu, 23 May 2013 12:56:28 +0000
From: Michael Beccaria <[log in to unmask]>
Subject: Re: Templates and Static Files Questions

David,

Your approach worked very well and was understandable. Thank you!

I am, however, running into some hurdles farther down the line. I am upgrading a site built on the old version of NDNP (http://nyshistoricnewspapers.org/) to the new version and, as you know, a lot has changed in terms of the way the software works. You all made some nice changes and simplified things (templates especially...thank you!). As you can tell, I built a new template based off the loc template that shipped with the old version of the software, but the new core template has a very different look. I'm running into some hurdles making sure that urls.py matches the correct views, which match the correct templates, etc. For example, the old template had the advanced search forms available on the home page, but the new template does not. I am in the process of going through all of the views and processors, trying to knit it back together again.

With that being said, I noticed that there was a loc app in the files, but that app references an "lc" app that is commented out in the settings file and doesn't exist in the repository, so I couldn't get it to work. I assume this is the loc template made functional on the new version of the software? It would make my life MUCH easier if I could modify that loc template again rather than piece it all together from the core template. Can the loc templates be made available, or are they proprietary to the loc?

Thanks,
Mike

Mike Beccaria
Systems Librarian
Head of Digital Initiative
Paul Smith's College
518.327.6376
[log in to unmask]
Become a friend of Paul Smith's Library on Facebook today!
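On the earlier staticfiles question in this thread: one common Django pattern (directory names here are hypothetical, not taken from the chronam repository) is to leave core/static untouched and list an override directory in STATICFILES_DIRS. Because Django's filesystem finder runs before the app-directories finder, files placed there shadow same-named files in core/static when collectstatic runs, so nothing under core/ needs editing. A settings sketch under those assumptions:

    # settings sketch -- adjust paths to the local checkout
    import os

    PROJECT_DIR = os.path.dirname(os.path.abspath(__file__))

    # Searched by the staticfiles filesystem finder *before* each app's
    # static/ directory, so a file here wins over core/static/<same name>.
    STATICFILES_DIRS = [
        os.path.join(PROJECT_DIR, "mysite", "static"),  # hypothetical override dir
    ]

collectstatic then copies the winning copy of each file into STATIC_ROOT.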