From: Bibliographic Framework Transition Initiative Forum [mailto:[log in to unmask]] On Behalf Of Gina Solares
Sent: Monday, January 14, 2019 12:30 PM
To: [log in to unmask]
Subject: [BIBFRAME] Program reminder: TSWEIG at ALA Midwinter 2019 - See you in Seattle!


***Please excuse cross-posting***


Please join the ALCTS Technical Services Workflow Efficiency Interest Group (TSWEIG) at the 2019 ALA Midwinter in Seattle, WA.


Date and time: January 28, 2019 (Monday), 1:00-2:00 PM

Location: Washington State Convention Center, Room 2A

Adapting the Training Within Industry (TWI) model for technical services staff cross-training

By Sofia Slutskaya, Metadata Strategist, Georgia Tech Library

As part of a multi-year Library Next project, the Georgia Tech Library has transformed its technical services department to include all “behind the scenes” functions, from patron management and archival collections digitization to cataloging, acquisitions, and e-resource management. This transformation involved defining basic and advanced tasks and the skills required for each area, mapping workflows and identifying efficiencies, creating standard work documentation, and engaging in intensive classroom training coupled with on-the-job practice under the direction of a subject matter expert. The Georgia Tech Library was inspired by the Training Within Industry (TWI) model for organizing training and cross-training technical services staff to perform new functions. TWI improves productivity by creating job instructions, involving skilled workers in delivering training, and emphasizing “learning by doing.” This presentation will discuss the successes and shortcomings of this project. Georgia Tech’s experience will be useful to any library that is rethinking its technical services department, considering cross-training staff for multiple positions, or seeking models for change outside traditional library methodologies.

Reclaim Your Reclamation: A DIY Approach to Holdings Synchronization in WorldCat

By Erica Findley & Paul Lightcap, Multnomah County Library

This session describes and demonstrates how to complete a no-cost reclamation with OCLC using WorldShare Collection Manager, MarcEdit, and OpenRefine, a method that scales well enough to run monthly with only an hour or two of staff time. It empowers individual organizations to complete what were once costly and time-intensive projects with minimal staff time and no financial cost beyond a library's OCLC subscription. Whether used as a one-time project, such as before an ILS migration, or as ongoing holdings synchronization within a larger cataloging strategy that includes holdings maintenance, record merges, and record updates, this DIY approach applies the logic and straightforward technical solutions used for tasks like title list comparisons to the challenge of catalog and holdings maintenance.
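At its core, the title-list-comparison logic the abstract mentions is a set comparison between identifiers held locally and those registered in WorldCat. The sketch below illustrates that idea only; the identifiers are invented, and the actual export steps (WorldShare Collection Manager reports, MarcEdit, OpenRefine cleanup) are not shown.

```python
# Illustrative sketch of the set-comparison step in a DIY reclamation:
# compare OCLC numbers exported from the local ILS with those reported
# as held in WorldCat, to find holdings to set or delete.
# All values here are hypothetical examples.

local_oclc = {"100001", "100002", "100003"}      # from a local ILS export
worldcat_oclc = {"100002", "100003", "100004"}   # from a WorldShare holdings report

to_set = sorted(local_oclc - worldcat_oclc)      # held locally, not registered in WorldCat
to_delete = sorted(worldcat_oclc - local_oclc)   # registered in WorldCat, no longer held

print("Set holdings for:", to_set)
print("Delete holdings for:", to_delete)
```

In practice, the resulting lists would be fed back to OCLC as batch holdings updates; the comparison itself is the cheap, repeatable part that makes a monthly cadence feasible.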

Linked Open Data Production and Publishing Workflow at the University of Washington Libraries

By Theodore Gerontakos, Crystal Clements, Ben Riesenberg, University of Washington Libraries

Repurposing existing digital collections metadata as static linked open data can be performed as a staff collaboration, but staff members are often unprepared. If available, a person knowledgeable in linked data can recruit a team, and team members can learn on the fly as needed. Face-to-face meetings can be an efficient way to select data models, make implementation decisions, and forge a successful workflow.

At the project’s outset, a central focus is creating a map from the original data model to the target data model. Data is then exported and cleaned to optimize processability, after which scripts are written to convert the data as modeled. Applying a data model is an iterative process and is often inaccurate at first, especially if the team is new to linked data. After the model is sufficiently complete and the scripts are run, multiple RDF serializations are produced. Before the datasets are published, staff need to clean the new data (for example, performing identity management), and links to external datasets should be produced. Producing those links can be challenging, but these enrichments are essential for any linked data endeavor, and several tools can be used.
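The map-and-convert step described above can be sketched in miniature: a source record is mapped to target predicates and emitted as RDF triples. This is a standard-library-only illustration; the record, the subject URI pattern, and the field-to-predicate mapping are hypothetical, and a real project would use an RDF library (such as rdflib) and its own data model.

```python
# Minimal sketch of converting a mapped source record to N-Triples.
# The record, URIs, and mapping below are illustrative assumptions,
# not the University of Washington Libraries' actual model.

record = {"id": "item123", "title": "Example Photograph", "creator": "Jane Doe"}

# Hypothetical mapping from source field names to target predicates.
MAPPING = {
    "title": "http://purl.org/dc/terms/title",
    "creator": "http://purl.org/dc/terms/creator",
}

def to_ntriples(rec):
    """Emit one N-Triples line per mapped field present in the record."""
    subject = f"<https://example.org/items/{rec['id']}>"
    lines = []
    for field, predicate in MAPPING.items():
        if field in rec:
            lines.append(f'{subject} <{predicate}> "{rec[field]}" .')
    return "\n".join(lines)

print(to_ntriples(record))
```

Because the graph is built programmatically, producing the multiple serializations mentioned above (Turtle, RDF/XML, JSON-LD) becomes a matter of re-serializing the same triples with different writers.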

Data can then be posted on a web server. When publishing static linked data, the central serialization can be HTML+RDFa, serving as a landing page for all versions of the data. This landing page can be assigned a persistent identifier and further processed and analyzed to optimize its visibility on the web. Additional tests should be performed following publishing, which can prompt additional changes. These changes can be incorporated into a finalized version of a local workflow intended for reuse.



Sent on behalf of TJ and Gina, TSWEIG Co-Chairs

TJ Kao
Continuing Resources Metadata Professional | Interim Resource Description Coordinator
Resource Description Group | GW Libraries & Academic Innovation | Gelman Library 104G
[log in to unmask] | 202-994-1328 | ORCID ID: http://orcid.org/0000-0003-2958-4399

Gina Solares
Head of Cataloging and Metadata Management
Gleeson Library | Geschke Center, University of San Francisco
(415) 422-5361 | [log in to unmask]