Resource description, discovery, and metadata for Open Educational Resources
1. Resource description, discovery, and metadata for Open Educational Resources R. John Robertson, Phil Barker & Lorna Campbell OER 10, Cambridge, 22nd-24th March 2010 This work is licensed under a Creative Commons Attribution 2.5 UK: Scotland License.
2. Overview UKOER and JISC CETIS Stakeholders 6 tensions in description and metadata Where next?
3. Purpose To begin to provide an overview of how the UKOER projects have approached describing educational resources To highlight issues relating to description that should be considered when sharing learning resources
27. Description for your use vs. description for sharing (4/4) Do standards help or hinder this decision? Mostly irrelevant Exist in underlying systems Export in a given standard can be mapped Tools hide standards However, perceptions about standards do play a role Jorum uses ‘X’ so we’ll use it; ‘X’ has a space to describe this feature
28. Metadata standards vs other forms of description Most projects are creating metadata. For some projects license information is only in the metadata, but others are not using any formal descriptive standard. Does full text indexing eliminate the need for keywords? Materials also exist as audio, video, image, and Flash, so keywords and tags remain very useful for aggregators. Do we need metadata if we have a cover page (or vice versa)? The use of cover pages is not yet fully known, but it appears not to be a major feature.
29. SEO vs. description for specialized discovery tools (1/3) Specialized discovery tools include: format-based tools like Vimeo, YouTube, Slideshare and Scribd aggregators like DiscoverEd and OERCommons subject or domain repositories (such as Jorum)
30. SEO vs. description for specialized discovery tools (2/3) Specialised tools often require domain-specific terminology, and their search indexing can reward comprehensive description – e.g. use of MeSH. Specialised tools may restrict the fields of descriptive information that can be supplied or that will be used. There is therefore a temptation to put everything into the fields which are available.
31. SEO vs. description for specialized discovery tools (3/3) SEO is more of an arcane art; the mmtv project found that too many high-value terms (teacher-training, online, education) in a description diluted the page's ranking. It's better to be highly ranked for a few terms. Perhaps not so much a tension as a balance between comprehensiveness and selectivity: OER producers need to be good at both.
32. Rich metadata vs. thin metadata (1/2) How much metadata do you need to create? How much of it is actually used? No answer to this yet programme was deliberately not prescriptive Jorum’s deposit tool expands on this
33. Rich metadata vs. thin metadata (2/2) Different projects have taken different approaches to description. OpenStaffs: LOM, XCRI ADOME: DC Most projects using metadata seem to have taken a light approach. No clear answers yet; a Medev OOER project survey about the use of description for learning materials is due out soon. Longer term, the balance will be informed by: efforts to track usage and discovery of UKOERs the usability of this material when aggregated in Jorum
34. Specialist vs. generic standards: description Dublin Core: 15 projects LOM: 9 projects QTI: 9 projects In most cases the choice seems to relate to the metadata options which the chosen software provides. Longer term: comparative volume of use (number of OERs); which elements used
35. Specialist vs. generic standards: packaging Content Packaging: 10 projects, with only 3 projects deliberately choosing to use it. Zip: 2 projects, but this figure doesn't reflect actual use – it was taken as too obvious to record. Default support by tools and project team background seem to be the key factors. Perceptions of the available content package creation tools also play a role.
36. RSS/Atom based dissemination vs. OAI-PMH based dissemination What tools, services, and communities can take advantage of each dissemination approach? Most aggregators of learning resources are based exclusively around RSS/Atom or support both RSS/Atom and OAI-PMH; existing OAI-PMH harvesters are firmly focused on the Scholarly Communications community. Are there any inherent difficulties in either approach? Both have problems. Projects were steered to use RSS/Atom, and many are using technologies that don't support OAI-PMH.
37. Summary thoughts The UKOER programme so far: Many diverse choices Thus far no one clear right answer Next steps Ongoing synthesis Tracking work Jorum usage statistics
Please note: logos may be under different licences – their respective owners' policies should be consulted before use.
UKOER: "Between April 2009 and April 2010, JISC and the Academy are supporting pilot projects and activities that support the open release of learning resources; for free use and repurposing worldwide." http://www.jisc.ac.uk/oer
JISC CETIS: "JISC CETIS is an Innovation Support Centre for UK Higher and Post-16 Education sectors funded by the Joint Information Systems Committee (JISC), and managed by the University of Bolton. The Centre provides strategic advice to JISC, supports its development programmes, represents the sector on international standardisation bodies and works with the educational community to facilitate the use of standards-based e-learning." http://jisc.cetis.ac.uk//about
Aggregators: Jorum; others
Independent learners: on related course elsewhere; truly independent
Enrolled students: on original course; on other courses
Employers and the marketplace: training benefits?
Describing resources requires both time and money, and as a result many projects need to choose what descriptive information they create. This choice often needs to balance the needs of the immediate users of their system against the requirements of taking part in wider networks with unknown users. For example, using local course codes within your own system but a generic vocabulary (such as JACS) for sharing.
How do OER initiatives decide what descriptive information they need?
The programme told them: http://blogs.cetis.ac.uk/lmc/2009/03/30/metadata-guidelines-for-the-oer-programme/
Then what happened?
What are the key influences on descriptive choices?
The previous experience of the project team (and support/programme)
The technology used at the institution
Jorum's (the national repository of teaching and learning resources) requirements (or perceptions of them)
Do standards help or hinder this decision?
In themselves, mostly irrelevant at this point:
they exist in underlying systems
export in a given standard can be mapped from stored descriptive information
the deposit tool (rightly) hides standards
However, perceptions about their use do play a role: "Jorum uses 'X' so we'll use it"; "'X' has a space to describe this feature of a resource, that would be useful, so we'll do that".
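The idea that "export in a given standard can be mapped from stored descriptive information" can be sketched in a few lines. This is an illustrative mock-up only: the internal field names and the mapping are invented, and real repository software hides this step behind its deposit tools.

```python
# An internal record as a project's system might store it (hypothetical fields)
internal_record = {
    "course_code": "AB1234",          # local course-code vocabulary
    "resource_title": "Intro lecture slides",
    "creator_name": "A. Lecturer",
    "licence": "CC BY 2.5 Scotland",
}

# Mapping from internal field names to Dublin Core element names.
# 'course_code' is deliberately unmapped: local codes mean little to
# unknown users, so a generic subject vocabulary would be shared instead.
FIELD_TO_DC = {
    "resource_title": "dc:title",
    "creator_name": "dc:creator",
    "licence": "dc:rights",
}

def export_to_dc(record):
    """Map an internal record to a flat Dublin Core dictionary."""
    return {dc: record[field] for field, dc in FIELD_TO_DC.items() if field in record}

print(export_to_dc(internal_record))
```

The point is that the stored description, not the export standard, is what the project actually maintains; the standard is a view generated on the way out.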
Most projects are creating metadata; for some projects license information is only in the metadata, but others are not using any formal descriptive standard.
Does full text indexing eliminate the need for keywords? Materials are also in audio, video, image, and Flash formats, so full text is at best a partial solution. Keywords and tags are very useful for aggregator services.
Do we need metadata if we have a cover page (or vice versa)? The use of cover pages is not yet fully known, but it appears not to be a major feature; it is more prevalent in institutional strand projects.
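Why full text is only a partial solution can be shown with a toy search: a resource with no extractable text (a video, say) is invisible to full-text matching unless its tags are indexed too. The records and fields below are invented for illustration.

```python
# Two hypothetical resources: a text document and a video with no
# extractable full text, only assigned tags.
records = [
    {"id": "r1", "format": "text", "fulltext": "anatomy of the human heart", "tags": []},
    {"id": "r2", "format": "video", "fulltext": "", "tags": ["anatomy", "heart"]},
]

def search(term, use_tags=False):
    """Naive search: match against full text, optionally also tags."""
    hits = []
    for r in records:
        if term in r["fulltext"] or (use_tags and term in r["tags"]):
            hits.append(r["id"])
    return hits

print(search("anatomy"))                 # full text only: misses the video
print(search("anatomy", use_tags=True))  # tags included: finds both
```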
Specialized discovery tools include:
format-based tools like Vimeo, YouTube, Slideshare and Scribd
aggregators like DiscoverEd and OERCommons
subject or domain repositories (such as Jorum)
Specialised tools often require domain-specific terminology, and their search indexing can reward comprehensive description – e.g. use of MeSH. Specialised tools may restrict the fields of descriptive information that can be supplied or that will be used; there is therefore a temptation to put everything into the fields which are available.
SEO is more of an arcane art; the mmtv project found that too many high-value terms (teacher-training, online, education) in a description diluted the page's ranking. It's better to be highly ranked for a few terms.
This is perhaps not so much a tension as a balance between comprehensiveness and selectivity: OER producers need to be good at both.
This tension relates to questions around: How much metadata do you need to create? How much of it is actually used?
There is no answer to this yet. As outlined earlier, the programme was deliberately not prescriptive about the descriptive requirements for OERs (beyond a basic set of information); Jorum's deposit tool expands on this somewhat, but has fairly minimal requirements compared to many early e-learning initiatives and the descriptive possibilities included in many standards for describing learning resources.
Different projects have taken different approaches to description. For example:
OpenStaffs: LOM, XCRI
ADOME: DC
There are no clear answers yet, but the Medev OOER project carried out a survey about the use of description for learning materials; the results are due out soon and will provide some useful input from a user's perspective. Our impression is that most projects using metadata have taken a light approach.
In the longer term, the right balance for this tension will be informed by:
efforts to track usage and discovery of UKOERs
the usability of this material when aggregated in Jorum
There is widespread use of both specialist and generic standards:
Dublin Core: 15 projects
LOM: 9 projects
QTI: 9 projects
In most cases the choice seems to relate to the metadata options which the chosen software provides. However, after the programme, the comparative volume of use (number of OERs) with each standard, and which elements from each standard have been used, will offer a clearer picture.
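For projects exporting Dublin Core, a record can be as small as a handful of elements. A minimal sketch of a simple DC record serialised as XML is below; the values are invented, and the exact profile (which elements, which encoding) varied between projects.

```python
import xml.etree.ElementTree as ET

# Dublin Core element set namespace
DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)

# Build a minimal record with four simple DC elements (hypothetical values)
record = ET.Element("record")
for name, value in [
    ("title", "Introduction to metadata for OERs"),
    ("creator", "Example Project"),
    ("subject", "resource description"),
    ("rights", "CC BY 2.5 UK: Scotland"),
]:
    el = ET.SubElement(record, f"{{{DC}}}{name}")
    el.text = value

xml_str = ET.tostring(record, encoding="unicode")
print(xml_str)
```

A LOM record for the same resource would carry the same core information inside a far larger element hierarchy, which is part of why the "light approach" tended toward DC.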
Content Packaging: 10 projects, but with little deliberate use – often just the export function of the software; only 3 projects chose to use it.
Zip: 2 projects, but this figure doesn't reflect actual use – it was taken as too obvious to record.
Again, default support by tools and project team background seem to be the key factors in the choice of standards. However, perceptions of the available content packaging creation tools also play a role.
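The "too obvious to record" zip approach can be sketched in a few lines: bundle the resource files with a simple listing of the contents. The file names here are invented, and the plain-text manifest is a stand-in – IMS Content Packaging would instead require an imsmanifest.xml conforming to a defined schema.

```python
import io
import zipfile

# Hypothetical resource files for one OER
files = {
    "slides.pdf": b"%PDF-...",   # placeholder content
    "notes.txt": b"Lecture notes",
}

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    for name, data in files.items():
        zf.writestr(name, data)
    # a simple plain-text manifest, not a formal packaging standard
    zf.writestr("manifest.txt", "\n".join(files))

# Reopen the package and list its contents
with zipfile.ZipFile(buf) as zf:
    print(sorted(zf.namelist()))
```

The trade-off is interoperability: a zip is universally openable, but a tool consuming it learns nothing about the structure or role of the files without a standard manifest.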
We originally considered there to be two possible questions:
What tools, services, and communities can take advantage of each dissemination approach?
Are there any inherent difficulties in either approach?
Projects were given a strong steer to use RSS/Atom, and many projects are not using technologies that support OAI-PMH. However, it became clear during the programme that most aggregators of learning resources are based exclusively around RSS/Atom or support both RSS/Atom and OAI-PMH. Furthermore, the existing OAI-PMH harvesters are firmly focused on the Scholarly Communications community.
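The RSS side of this choice is deliberately lightweight: a feed of items with titles and links is enough for an aggregator to pick up new releases. A minimal sketch of an RSS 2.0 feed describing two OERs is below; the titles and URLs are invented for illustration.

```python
import xml.etree.ElementTree as ET

# Build a minimal RSS 2.0 feed (hypothetical channel and items)
rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Example OER feed"
ET.SubElement(channel, "link").text = "http://example.org/oer"
ET.SubElement(channel, "description").text = "Newly released learning resources"

for title, link in [
    ("Lecture 1 slides", "http://example.org/oer/1"),
    ("Lab worksheet", "http://example.org/oer/2"),
]:
    item = ET.SubElement(channel, "item")
    ET.SubElement(item, "title").text = title
    ET.SubElement(item, "link").text = link

feed = ET.tostring(rss, encoding="unicode")
print(feed)
```

OAI-PMH, by contrast, is a request/response protocol (ListRecords, GetRecord, and so on) with selective harvesting by date and set, which is more machinery than most blog and wiki platforms used by projects provide out of the box.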
http://wiki.cetis.ac.uk/Educational_Content_OER
http://jisc.cetis.ac.uk//topic/oer
Contact details:
robert.robertson at strath.ac.uk
lmc at strath.ac.uk
philb at icbl.hw.ac.uk