Comments on: How Much Data Modeling Is Enough?
http://newengland2010.thatcamp.org/11/10/how-much-data-modeling-is-enough/
The Humanities and Technology Camp

By: Peter Van Garderen
Fri, 12 Nov 2010 19:46:54 +0000
http://newengland2010.thatcamp.org/11/10/how-much-data-modeling-is-enough/#comment-69

Wish I could make this session, Mark. Great topic.

If I were attending, I’d ask whether the archival community needs its own equivalent of the FRBR entity model. Almost all of the archival standards come with an *implicit* assumption of an underlying entity model that has never been formalized. Would this be an extension of FRBR or a brand-new ‘Archival Resource Entity Model’?
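For illustration only, here is one toy guess (in Python) at the kind of hierarchy such an ‘Archival Resource Entity Model’ might formalize, borrowing the ISAD(G) levels of description (fonds, series, file, item) the way FRBR pins down Work/Expression/Manifestation/Item. Nothing in this sketch is a published model; all names are invented.

```python
# Hypothetical sketch: what formalizing the implicit archival entity model
# might look like. The levels follow ISAD(G); the class itself is invented.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class DescriptionUnit:
    """A unit of archival description at any level of the hierarchy."""
    identifier: str
    title: str
    level: str  # e.g. "fonds", "series", "file", "item"
    parent: Optional["DescriptionUnit"] = None
    children: List["DescriptionUnit"] = field(default_factory=list)

    def add_child(self, child: "DescriptionUnit") -> None:
        child.parent = self
        self.children.append(child)


# Usage: a fonds containing a series containing an item.
fonds = DescriptionUnit("F1", "Smith family papers", "fonds")
series = DescriptionUnit("F1-S1", "Correspondence", "series")
item = DescriptionUnit("F1-S1-I1", "Letter, 12 Nov 1910", "item")
fonds.add_child(series)
series.add_child(item)
```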

However, this may also be just another dead end on the path to finding the “One True Model to Rule Them All” (yfrog.com/nejs8ubj). Perhaps using RDF triples, facets, key:value pairs, and/or NoSQL technologies removes the need to represent our data models as entity models altogether?
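As a toy illustration of that alternative, the same archival statement can live as bare triples or as a schema-less key:value document, with no formal entity model behind it. A minimal sketch using Python’s rdflib, assuming it is installed; the ex: namespace and every resource name below are invented.

```python
# The same description as plain RDF triples (no formal entity model).
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/archives/")

g = Graph()
g.bind("ex", EX)
g.add((EX.letter42, RDF.type, EX.ArchivalResource))  # typing is optional
g.add((EX.letter42, EX.title, Literal("Letter, 12 Nov 1910")))
g.add((EX.letter42, EX.partOf, EX.smithFonds))

print(g.serialize(format="turtle"))

# And the same information as key:value pairs, NoSQL-style:
letter42 = {
    "title": "Letter, 12 Nov 1910",
    "partOf": "smithFonds",
}
```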

At any rate, I like your idea of establishing a framework or some common rules, even just agreement on syntax, for how we document & then iterate/version our archival resource data models. The standards bodies and processes (bogged down by their bureaucracy, volunteerism, and international scope) are currently too slow to react to the technologies that are defining new requirements and capabilities for getting archival resources online (or even just to catch up with years-old common XML practices). That said, I don’t want to dismiss the strength and legitimacy of community standards; we just need a better way to sync them with (and have them informed by) any number of constantly evolving implementation models. Hopefully this THATCamp session can contribute to the discussion.

By: Christopher Gutteridge
Thu, 11 Nov 2010 16:19:26 +0000
http://newengland2010.thatcamp.org/11/10/how-much-data-modeling-is-enough/#comment-68

One thing I’m coming to hate is a schema that nearly does what I want but was made overly specific.

We’ve made that mistake ourselves, for example by minting a class specifically for members of our school.
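One common mitigation for that kind of over-specificity (my suggestion, not necessarily what was done here) is to anchor the narrow class to a widely understood one with rdfs:subClassOf, so generic consumers aren’t locked out. A minimal rdflib sketch; ex:SchoolMember and ex:alice are invented names.

```python
# If you must mint a narrow class, declare it a subclass of a broad,
# well-known one so an RDFS-aware FOAF consumer can still use the data.
from rdflib import Graph, Namespace
from rdflib.namespace import FOAF, RDF, RDFS

EX = Namespace("http://example.org/school/")

g = Graph()
g.bind("foaf", FOAF)
g.bind("ex", EX)

# The narrow class stays available for local use...
g.add((EX.SchoolMember, RDFS.subClassOf, FOAF.Person))
# ...while a reasoning consumer can infer that every ex:SchoolMember
# is also a foaf:Person.
g.add((EX.alice, RDF.type, EX.SchoolMember))

print(g.serialize(format="turtle"))
```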

I’m currently working on a scheme for usefully describing places related to an organisation: specifically, that an event is in a room, the room is in a building, the building is at this lat/long, and the nearest public car parks are x, y, and z with the following opening hours…

Rather than start from scratch, we’re probably going to mint very few new predicates or classes. Mostly we’ll just use GoodRelations, FOAF, and the like, with guidelines on how to use them so they’re useful for a consumer.
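A rough rdflib sketch of that reuse-first approach, covering the room/building/car-park example above. GoodRelations (gr:) and the WGS84 geo terms are real vocabularies; the ex: terms (ex:venueRoom, ex:withinBuilding, ex:nearestCarPark) are the “very few” minted bits, and all resource names and coordinates are invented for illustration.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

GR = Namespace("http://purl.org/goodrelations/v1#")
GEO = Namespace("http://www.w3.org/2003/01/geo/wgs84_pos#")
EX = Namespace("http://example.org/campus/")

g = Graph()
for prefix, ns in [("gr", GR), ("geo", GEO), ("ex", EX)]:
    g.bind(prefix, ns)

# The building, with reused WGS84 coordinates (example values).
g.add((EX.building32, RDF.type, GR.Location))
g.add((EX.building32, GEO.lat, Literal("50.9352", datatype=XSD.decimal)))
g.add((EX.building32, GEO.long, Literal("-1.3958", datatype=XSD.decimal)))

# The event's room, linked with minted predicates.
g.add((EX.seminar1, EX.venueRoom, EX.room3077))
g.add((EX.room3077, EX.withinBuilding, EX.building32))

# Nearest car park, with reused GoodRelations opening hours.
g.add((EX.building32, EX.nearestCarPark, EX.carParkX))
g.add((EX.carParkX, GR.hasOpeningHoursSpecification, EX.carParkXHours))
g.add((EX.carParkXHours, RDF.type, GR.OpeningHoursSpecification))
g.add((EX.carParkXHours, GR.opens, Literal("07:00:00", datatype=XSD.time)))
g.add((EX.carParkXHours, GR.closes, Literal("19:00:00", datatype=XSD.time)))

print(g.serialize(format="turtle"))
```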

Ideally I’ll produce a validator so people can check that their data is discoverable, parseable, and saying what they meant it to say.
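A minimal sketch of what such a validator might check, assuming the data is published as Turtle. The required-predicate list here is invented; a real validator would encode the usage guidelines mentioned above.

```python
from rdflib import Graph, Namespace

GEO = Namespace("http://www.w3.org/2003/01/geo/wgs84_pos#")
GR = Namespace("http://purl.org/goodrelations/v1#")

# Example only: the predicates a consumer will be looking for.
REQUIRED_PREDICATES = [GEO.lat, GEO.long, GR.hasOpeningHoursSpecification]


def validate(turtle_text: str) -> list:
    """Return a list of problems; an empty list means the data parsed
    and covers the expected predicates."""
    problems = []
    g = Graph()
    try:
        g.parse(data=turtle_text, format="turtle")  # is it parseable?
    except Exception as exc:
        return [f"unparseable: {exc}"]
    for pred in REQUIRED_PREDICATES:  # is it saying what was meant?
        if not any(g.triples((None, pred, None))):
            problems.append(f"no {pred} statements found")
    return problems
```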

By: Aaron Rubinstein
Wed, 10 Nov 2010 14:24:58 +0000
http://newengland2010.thatcamp.org/11/10/how-much-data-modeling-is-enough/#comment-67

I completely agree that, given the power of OWL and RDFS, the essential work for each modeling effort is finding that sweet spot between complexity and simplicity. Perhaps we’re looking for a test framework and a set of best practices.
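One guess at what such a test framework could look like: pytest-style checks run against a model’s RDF graph. The rule below (every minted OWL class must subclass a term from a well-known vocabulary) is just an example of a best practice a community might agree on; the model.ttl filename is assumed.

```python
from rdflib import Graph
from rdflib.namespace import OWL, RDF, RDFS

# Vocabularies the community has agreed count as "well known" (example list).
WELL_KNOWN = (
    "http://xmlns.com/foaf/0.1/",
    "http://purl.org/goodrelations/v1#",
)


def test_minted_classes_extend_known_vocabularies():
    model = Graph()
    model.parse("model.ttl", format="turtle")  # assumed model file
    for cls in model.subjects(RDF.type, OWL.Class):
        supers = set(model.objects(cls, RDFS.subClassOf))
        assert any(str(s).startswith(WELL_KNOWN) for s in supers), (
            f"{cls} is not anchored to a well-known vocabulary"
        )
```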

I would definitely love to talk about this more…
