1. Evolving and emerging scholarly communication services in libraries: public access compliance and research impact
Claire Stewart
Associate University Librarian for Research & Learning
University of Minnesota
Guest lecture, Dominican University LIS772
February 20, 2016
2. What do libraries do?
Libraries preserve knowledge, provide
access to knowledge, and support
creation of new knowledge
Libraries develop solutions to
information problems
19. Public Access Compliance Monitor
[Screenshots: Public Access Compliance Monitor, Expanded Access, Health Sciences Libraries, My NCBI Awards View, NIH Manuscript Submission System]
20. Submission Methods
• Method A: Journal, by contract with NIH, deposits the published version of all NIH-funded articles in PMC.
• Method B: Author arranges with publisher to deposit the published version of a specific NIH-funded article in PMC; author confirms the article is deposited in PMC.
• Method C: Author or delegate submits the final peer-reviewed manuscript to the NIHMS; NIHMS sends the author an email asking them to approve the submitted materials for processing; author reviews and approves the PMC-formatted manuscript, and a PMCID is assigned.
• Method D: Journal publisher submits the final peer-reviewed manuscript to the NIHMS; NIHMS sends the author an email asking them to approve the submitted materials for processing; author reviews and approves the PMC-formatted manuscript, and a PMCID is assigned.
Credit (and previous slide): Katherine Chew, UMN Health Sciences Libraries
21. Extending support
• Baseline from Libraries: extend and continue
advisory/education role for articles and data,
curation and repository services
• Charged a new team to work with Sponsored
Projects Administration (SPA)
– Monitor agency plans as they are finalized
– Test systems and processes
– Develop a new joint education program and staffing recommendations
22. Complex compliance picture
• NIH's gap between voluntary policy (2005), mandatory policy (2008), and funding impacts (2013). Will other agencies follow this pattern?
• Will Sponsored Projects Administration (SPA)
have to work with more than 75 different
systems?
• Administrative burden is a significant concern: how can libraries, research administration, and scholars work together to address it?
• How will journals/publishers respond?
26. Interest in impact and metrics
• In hiring, in support of promotion & tenure
• Funders and publishers, evaluating proposals
• Institutional productivity
• Impact on our communities
27. What do we want to know when we
talk about impact?
• How has this [researcher’s] work advanced
knowledge?
• Has this research been evaluated and by whom?
• What is field-shaping research?
• Who are the researchers shaping my field?
• What is going on in my field that’s important, or
in a field that could benefit my work?
• What is the broader societal benefit of this work?
(value of higher education, research investments)
30. JIF: Journal Impact Factor
Source: The Thomson Reuters Impact Factor
Significant variance across
disciplines:
• Top-ranked journal overall: JIF = 144.800
• Top-ranked journal in history: JIF = 2.615
Not based on any single
author/article
Often criticized (the DORA, Leiden Manifesto, and HEFCE statements)
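For reference, the conventional two-year calculation behind the JIF (not spelled out on the slide):

    \mathrm{JIF}_Y = \frac{C_Y(Y-1) + C_Y(Y-2)}{N_{Y-1} + N_{Y-2}}

where C_Y(X) is the number of citations received in year Y by items the journal published in year X, and N_X is the number of citable items it published in year X. Nothing in the formula refers to any individual author or article, which is why a per-journal value says so little about a single paper.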
31. Eigenfactor
Based on the same citation source as the Impact Factor (Thomson Reuters' Journal Citation Reports)
Weights journals by importance based on citation frequency, similar to Google PageRank
Also calculates an Article Influence score, covering an article's first five years
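To make the PageRank comparison concrete, here is a minimal power-iteration sketch over a toy journal citation matrix. It illustrates the general idea only, not the actual Eigenfactor algorithm (which, among other refinements, excludes journal self-citations and normalizes by article counts); the matrix values, damping factor, and journal layout are hypothetical.

    # PageRank-style weighting over a toy journal citation matrix.
    # Illustrative sketch only; the real Eigenfactor calculation differs
    # in detail (e.g., it drops journal self-citations).
    def citation_rank(cites, damping=0.85, iterations=100):
        """cites[i][j] = citations from journal i to journal j."""
        n = len(cites)
        rank = [1.0 / n] * n
        for _ in range(iterations):
            new_rank = []
            for j in range(n):
                # Journal i passes rank to j in proportion to the share
                # of i's outgoing citations that point at j.
                inflow = sum(
                    rank[i] * cites[i][j] / sum(cites[i])
                    for i in range(n) if sum(cites[i]) > 0
                )
                new_rank.append((1 - damping) / n + damping * inflow)
            rank = new_rank
        return rank

    # Hypothetical three-journal example: journals A and C cite B heavily.
    toy = [
        [0, 4, 1],
        [2, 0, 1],
        [1, 3, 0],
    ]
    print(citation_rank(toy))

The effect is that a citation from a highly weighted journal raises the cited journal's score more than a citation from an obscure one.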
32. h index
Scholar-specific:
“A scientist has index h if h of his or her Np papers have
at least h citations each and the other (Np – h) papers
have ≤h citations each.”
Dependent on citation index source (Google Scholar and Scopus might have different values)
Doesn’t really account for different citation/usage
patterns between fields
Source: Hirsch, J. E. “An Index to Quantify an Individual’s Scientific Research Output.” Proceedings of the
National Academy of Sciences of the United States of America 102, no. 46 (November 15, 2005): 16569–72.
doi:10.1073/pnas.0507655102.
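Hirsch's definition translates directly into a few lines of code; a minimal sketch (the example citation counts are made up):

    def h_index(citations):
        """Largest h such that h papers have at least h citations each."""
        h = 0
        for i, c in enumerate(sorted(citations, reverse=True), start=1):
            if c >= i:
                h = i
            else:
                break
        return h

    # Hypothetical citation counts for one scholar's papers.
    print(h_index([25, 8, 5, 3, 3, 1]))  # -> 3: three papers with >= 3 citations

Running this over Google Scholar counts versus Scopus counts for the same scholar will generally return different values, which is the source dependence noted above.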
33. Altmetrics
• Shares and mentions in non-traditional places, on social media, etc. (Twitter, Facebook, Mendeley, blogs, Wikipedia, etc.)
• Often dependent on identifiers (DOIs, PubMed IDs, arXiv IDs, etc.), which can have lower penetration in arts & humanities fields
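Because altmetrics aggregation keys on identifiers, a first practical step is often just pulling DOIs out of free text. A minimal sketch using a common heuristic DOI pattern; the pattern and the sample posts are illustrative assumptions:

    import re

    # Heuristic pattern for modern DOIs; matches most but is not a validator.
    DOI_PATTERN = re.compile(r"10\.\d{4,9}/[-._;()/:a-zA-Z0-9]+")

    def extract_dois(text):
        return DOI_PATTERN.findall(text)

    # Hypothetical mentions pulled from social media or blog posts.
    posts = [
        "Great read: doi:10.1073/pnas.0507655102 on the h index",
        "New paper https://doi.org/10.1371/journal.pone.0106086",
        "Loved this monograph on medieval manuscripts, no DOI in sight",
    ]
    for p in posts:
        print(extract_dois(p))

The third post illustrates the caveat above: without an identifier, a mention cannot easily be tied back to a specific output, which hits fields with lower DOI penetration hardest.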
36. Recent expressions of concern about a strictly quantitative approach to research assessment
37. Advice to HEFCE (UK)
Framework for responsible
metrics
• Robustness: use the best
possible data
• Humility: quantitative evaluation should support expert assessment (e.g., peer review)
• Transparency: be able to show
where data came from & let
results be verified
• Diversity: account for
variation by field
• Reflexivity: indicators updated
as the system & effects change
20 specific recommendations to HEFCE on the use of metrics
38. What are the other questions we will
want to ask?
And what kinds of information will we need to
answer these questions?
39. How was it evaluated? (r)evolutions in
peer review
Open (PeerJ) and post-publication (F1000)
41. Where was CIC federal research funding actually spent?
Source: Weinberg, Bruce A., Jason Owen-Smith, Rebecca F. Rosen, Lou Schwarz, Barbara McFadden Allen, Roy E. Weiss, and Julia Lane. “Science Funding and Short-Term Economic Activity.” Science 344, no. 6179 (April 4, 2014): 41–43. doi:10.1126/science.1250055.
42. IRIS sample products, November 2015
Where did our grad students, postdocs and other research staff
find employment?
43. Across all grants & agencies, what are the kinds of research activities underway at UMN?
44. What are the other questions we will
want to ask?
What do we know about how conversations about research
happen? Who do astrophysicists talk to?
Holmberg, Kim, Timothy D. Bowman, Stefanie Haustein, and Isabella Peters. “Astrophysicists’ Conversational
Connections on Twitter.” PLoS ONE 9, no. 8 (August 25, 2014): e106086. doi:10.1371/journal.pone.0106086.
[From the cited paper: Table 1, roles of the users mentioned in the tweets; Figure 2, number of people contacted and number of conversations had by the 32 astrophysicists; Figure 5, conversational connections in the astrophysicists' tweets]
45. What are the other questions we will
want to ask?
Who at UMN is doing research in or about
countries other than the United States? Who are
they collaborating with? What kind of effect has
this work had?
‘Effect’ could include: articles, books and reports published,
presentations offered, information about who benefited from
these outputs, integration into policy development
(conversations about and/or new legislation, regulation, etc.)
47. Why is this hard?
• Wide variety in what constitutes a valuable
research output/indicator across disciplines
• Types of outputs expanding
• Data about research outputs is messy partly
because it has the typical big data problems:
volume, velocity, variety
• Highly distributed scholarly communication
infrastructure (the data about outputs is
everywhere)
48. Data on research outputs is messy: inconsistent use of identifiers, variant author names, etc.
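One small illustration of the variant-name problem: scoring rough matches with Python's standard library. The variants and the use of raw string similarity are hypothetical simplifications; real disambiguation systems lean on identifiers such as ORCID rather than string matching alone.

    from difflib import SequenceMatcher

    def similarity(a, b):
        """Crude string similarity in [0, 1], ignoring case and punctuation."""
        def norm(s):
            return s.lower().replace(".", "").replace(",", "")
        return SequenceMatcher(None, norm(a), norm(b)).ratio()

    # Hypothetical variants of one author as they appear in different sources.
    canonical = "Stewart, Claire"
    for v in ["Stewart, Claire", "C. Stewart", "Stewart, C", "Claire Stewart"]:
        print(f"{v!r}: {similarity(canonical, v):.2f}")

Scores for the same person vary widely, so any metric built on naive name matching will over- or under-count outputs.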
49. Highly distributed scholarly communications
ecosystem
A purely hypothetical picture of where different research outputs might be stored & disseminated
54. Outputs/indicators vs metrics
“The observations here relate to the fact that while there
is unease about the use of metrics as a mode of
‘measuring’ the excellence of research produced in the
UK’s HEIs, the rich array of data presented as part of
REF2014 demonstrates that the arts and humanities
sector are comfortable with deploying numbers (albeit
framed as data rather than metrics) to present a case
about the excellence of their research cultures.”
Thelwall, Mike, and Maria M. Delgado. “Arts and Humanities Research Evaluation: No Metrics Please, Just Data.” Journal of Documentation 71, no. 4 (June 25, 2015): 817–33. doi:10.1108/JD-02-2015-0028.
56. Why the Libraries?
• This is an information problem
• We have the data: metadata and full text
(or at least we know where it is and how to get it)
• We understand the science of the data (metadata
in particular)
• We’re really into standards
• We are discipline neutral
• We have strong technology expertise