WikiD might be of interest here. Essentially, it is an open-source J2EE
webapp that provides a wiki-style interface to structure data (e.g.
METS, MARC, etc.). Until now, the focus has been on metadata, but in
theory it can accommodate any kind of digital content.
You can see the demo at http://alcme.oclc.org/wikid/. The project page
is at http://www.oclc.org/research/projects/wikid/default.htm. If it
looks interesting, I can provide more details and perhaps even produce a
simple prototype relative to your needs.
> -----Original Message-----
> From: Metadata Encoding and Transmission Standard
[mailto:[log in to unmask]] On
> Behalf Of Jody DeRidder
> Sent: Tuesday, November 29, 2005 2:18 PM
> To: [log in to unmask]
> Subject: [METS] METS display & search/retrieval
> Hi, Brian (and others!)
> Thank you for your leads, and your considered thoughts. We are exploring
> options for METS viewers and search/retrieval of mixed formats with
> descriptive metadata. If possible, we would like to use (or build on) open
> source software, so any suggestions you may have for what's available would
> be welcome.
> In addition, we would love to see how your search/retrieval/display
> interface works, so we can better judge what we want as an end product for
> our own collections. Could you send a link or two as examples of what your
> current interface has to offer? (I notice CDL has several
> collections/services, and would like to know where to look to see what you
> consider to be the best examples.)
> Melanie here is indeed developing METS with child metadata (and with
> OCR'd text for search & retrieval of textual images), and we struggle to
> work out how best to deliver. :-) The software we are currently using
> may not be a workable solution.
> Thanks for any and all leads and suggestions!
> University of Tennessee
> > What we have done at CDL in our access repository based on METS is
> > sort of like your option 3... The system we are using was designed for
> > full text searching. If an object has a TEI or an EAD that is available,
> > then the METS is mostly ignored at indexing. If the object is something
> > like a photo scrapbook or a simple image with no TEI or EAD, then the
> > METS is indexed. The top level DMD (actually we cheat and assume the
> > first DMD with xmlData is for the parent object) is turned into the
> > Dublin Core metadata for the METS document as a whole; and where there
> > is other DMD -- that is all indexed as if it's the text of the document.
> > This way, a metadata+text keyword search will find hits on words in the
> > metadata and we will see "snippets" in the search results because the
> > search results think they are hits in the text of the textless
> > document. I may be able to get it so that clicking on the faux-text
> > in the snippet will open up the page viewer to the page where the hit
> > occurs in the metadata.
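The indexing rule Brian describes above can be sketched roughly as follows. This is only a minimal sketch in Python, not CDL's actual code: the sample METS record is invented for illustration, and real dmdSecs would carry MODS, DC, or similar payloads rather than bare elements. The element names follow the METS schema; everything else is an assumption.

```python
# Sketch of the CDL-style rule: the first dmdSec with xmlData becomes the
# object-level descriptive record; every other dmdSec is flattened into
# the full-text ("text") index. Sample record invented for illustration.
import xml.etree.ElementTree as ET

METS_NS = "http://www.loc.gov/METS/"

SAMPLE = """<mets xmlns="http://www.loc.gov/METS/">
  <dmdSec ID="dmd1"><mdWrap><xmlData>
      <title>Photo scrapbook, 1905-1920</title>
  </xmlData></mdWrap></dmdSec>
  <dmdSec ID="dmd2"><mdWrap><xmlData>
      <title>Portrait of Theodore Roosevelt</title>
  </xmlData></mdWrap></dmdSec>
</mets>"""

def index_mets(mets_xml):
    root = ET.fromstring(mets_xml)
    parent_record, text_index = None, []
    for dmd in root.findall(f"{{{METS_NS}}}dmdSec"):
        xml_data = dmd.find(f"{{{METS_NS}}}mdWrap/{{{METS_NS}}}xmlData")
        if xml_data is None:
            continue  # dmdSec with no inline xmlData: skip
        # Flatten all descriptive text inside this dmdSec.
        words = " ".join(t.strip() for t in xml_data.itertext() if t.strip())
        if parent_record is None:
            parent_record = words       # first DMD: object-level metadata
        else:
            text_index.append(words)    # later DMDs: indexed as "text"
    return parent_record, text_index

parent, text = index_mets(SAMPLE)
```

With this flattening, a keyword search for "Roosevelt" hits the text index even though the word only occurs in a child DMD, which is exactly the blurring-of-content-and-metadata effect discussed below in the thread.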
> > Some people have objected that this approach blurs the line between
> > content and metadata because metadata is in the "text" index. I think
> > this is only a temporary solution; ideally you could search across
> > different levels of discovery granularity, and limit to specific fields
> > at any level.
> > Also, most of our METS are at the level of the individual item /
> > album with one descriptive record for the whole entity. One reason we
> > went to METS was because people were not satisfied with searching the
> > finding aids and having to dig down the description of subordinate
> > components (dsc) to find the digitized item. They wanted an item-level
> > search of all the digital objects. Now that we have started to receive
> > submissions of more complex METS, we are seeing more and more cases
> > where significant portions of the dsc hierarchy have been moved to the
> > METS structMap -- and we have the same issue of people wanting to search
> > / retrieve on different levels of the hierarchy, because most of the
> > item-level cataloging is buried in deep metadata.
> > -- Brian
> > On Mon, 2005-11-21 at 06:46, mfeltner wrote:
> >> I am working on a small cultural heritage collection that features
> >> scrapbooks and photo albums from the early to mid-1900s. This
> >> collection is the first at our institution to utilize METS for complex
> >> objects. Given how new we are to METS, we're still feeling out how to
> >> make best use of it -- as well as cope with the limitations of our
> >> digital content management software.
> >> For each scrapbook/album, I am creating METS records featuring two
> >> levels of descriptive metadata: (1) a parent DMD for the object as a
> >> whole; (2) child DMDs for the many individual photos/drawings on the
> >> pages. This project is particularly fortunate to have a historian on
> >> board, which has allowed us to create rich descriptive records for
> >> most individual photos in albums and scrapbooks. Perhaps the most
> >> important feature of these records is the identification of people in
> >> photos. These names are obviously best captured in the child DMD for
> >> each photo, rather than the parent DMD.
> >> I am curious how others working with similar materials are handling
> >> many descriptive metadata records within a single METS file. I would
> >> like to see these records exploited to their fullest capacity for
> >> search and retrieval, but am unsure what would be the best scenario to
> >> make that happen. Our system breaks METS objects into their many
> >> component objects. What this means for resource discovery is that
> >> child objects as well as parent METS objects can be searched and
> >> retrieved. So a search that matches a child DMD will retrieve that
> >> component image file and child DMD, as well as the entire METS object
> >> and parent DMD. For those of you dealing with complex, image-based
> >> objects like albums and scrapbooks, how are you allowing your many
> >> DMDs to be searched and retrieved?
> >> Given our specific software, it looks as if our collection has at
> >> least three options:
> >> 1. Allow only parent DMDs to be searched/retrieved through resource
> >> discovery, but allow child DMDs to be viewed as the user pages through
> >> the object as a whole.
> >> This kind of functionality might be possible if we can deactivate the
> >> search/retrieval of child DMDs in our software. Under this option,
> >> the child DMDs would *not* function as *access* points, but could
> >> offer additional information if a user finds a particular
> >> photo/drawing about which he/she would like more detail. One
> >> particular problem this raises is the inability/difficulty of finding
> >> photos of specific people buried in albums/scrapbooks through the
> >> search interface. For example, if a user searches for Roosevelt and a
> >> scrapbook contains a picture of Roosevelt, but that name is only
> >> captured in a child DMD, resource discovery will not retrieve the
> >> image or scrapbook.
> >> 2. Allow both parent and child DMDs (and corresponding objects) to be
> >> searched and retrieved.
> >> This is the current functionality supported by our software. Using the
> >> previous example of searching Roosevelt, this would result in both the
> >> specific image of Roosevelt being retrieved (with this record noting
> >> that this child object is part of a particular scrapbook), as well as
> >> the scrapbook as a whole. Even if the relationship to the parent is
> >> specified in the child DMD, do you think this could be confusing for
> >> users?
> >> 3. Allow both parent and child DMDs to be searched, but retrieve only
> >> the parent METS object.
> >> Actually, I'm not even sure if this is possible in our software, but
> >> we can always ask for enhancements, right?
> >> Using the Roosevelt example again, this would result in the full
> >> scrapbook being retrieved for this query. The parent DMD for the
> >> scrapbook, however, mentions nothing of Roosevelt, so this might
> >> result in confusion for the user. They might interpret this as a false
> >> hit or grow tired of paging through the scrapbook looking for a needle
> >> in a haystack, as it were.
> >> Unfortunately, our software does not include functionality that
> >> allows the scrapbook to be retrieved but opened to the particular page
> >> on which Roosevelt is pictured. This, to me, would be the best option,
> >> as access to the individual item would be preserved, but the item
> >> would not be viewed outside its original context within the scrapbook.
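If the software ever grows that enhancement, the lookup behind option 3 could work roughly like this. A hedged sketch, not a description of any existing product: it assumes each page <div> in the structMap carries a DMDID attribute pointing at the child dmdSec, which is one common METS pattern but may not match your records. Given a search hit in a child DMD, the code walks the structMap to find the page that references it, so a viewer could open the parent object at that page.

```python
# Sketch: map a hit in a child DMD back to (parent object, page) via the
# structMap, assuming page <div>s carry DMDID links to child dmdSecs.
# The sample structMap is invented for illustration.
import xml.etree.ElementTree as ET

M = "{http://www.loc.gov/METS/}"  # METS namespace, Clark notation

SAMPLE = """<mets xmlns="http://www.loc.gov/METS/" OBJID="scrapbook1">
  <structMap>
    <div TYPE="scrapbook" DMDID="dmd1">
      <div TYPE="page" ORDER="1"/>
      <div TYPE="page" ORDER="2" DMDID="dmd2"/>
    </div>
  </structMap>
</mets>"""

def locate_hit(mets_xml, dmd_id):
    """Return (parent OBJID, page ORDER) for a hit in the given DMD.

    ORDER is None when the DMD belongs to a div with no page number
    (e.g. the top-level scrapbook div itself).
    """
    root = ET.fromstring(mets_xml)
    for div in root.iter(f"{M}div"):
        # DMDID is IDREFS (space-separated), so split before comparing.
        if dmd_id in (div.get("DMDID") or "").split():
            return root.get("OBJID"), div.get("ORDER")
    return root.get("OBJID"), None

obj, page = locate_hit(SAMPLE, "dmd2")  # hit on the photo's child DMD
```

So a hit on "Roosevelt" in dmd2 would resolve to the scrapbook object plus page 2, letting the interface retrieve the whole scrapbook yet open it where the match actually lives.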
> >> Any comments/feedback on these options would be greatly appreciated.
> >> Do any of these three sound better/worse than the others? Can anyone
> >> suggest alternative scenarios that would better utilize our metadata
> >> and provide access to important pieces of a whole?
> >> Many thanks,
> >> Melanie
> >> --------------
> >> Melanie Feltner-Reichert
> >> Digital Coordinator
> >> IMLS Funded Digital Collection:
> >> "From Pi Beta Phi to Arrowmont"
> >> John C. Hodges Library
> >> University of Tennessee
> >> Knoxville, Tennessee
> >> Email: [log in to unmask]