Journal-level metrics
Metrics have become a fact of life in many, if not all, fields of research and scholarship. In an age of information abundance (often termed ‘information overload’), shorthand signals for where in the ocean of published literature to focus our limited attention have become increasingly important.
Research metrics are sometimes controversial, especially when, in popular usage, they become proxies for multidimensional concepts such as research quality or impact. Each metric may offer a different emphasis based on its underlying data source, method of calculation, or context of use. For this reason, Elsevier promotes the responsible use of research metrics, encapsulated in two “golden rules”: always use both qualitative and quantitative input for decisions (i.e. expert opinion alongside metrics), and always use more than one research metric as the quantitative input. The second rule acknowledges that performance cannot be expressed by any single metric, and that all metrics have specific strengths and weaknesses. Using multiple complementary metrics can therefore provide a more complete picture and reflect different aspects of research productivity and impact in the final assessment. (Elsevier)
Presentation on journal suggestion tool and journal finder (shilpasharma203749)
This document discusses journal finding and suggestion tools that can help researchers identify appropriate journals to publish their articles. It defines what academic journals are and their purpose. It then describes several online tools, like Edanz Journal Selector, Elsevier Journal Finder, EndNote Manuscript Matcher, and Springer Journal Suggester, that use keywords, titles, and abstracts to match articles to relevant journals based on the journal's scope, audience, and other factors. The document advises researchers to verify a journal's aims and author instructions before submitting to ensure their article is a good fit.
Impact factor of Journal as per Journal citation report, SNIP, SJR, IPP, Cite... (Omprakash saini saini)
The document discusses several metrics for evaluating journals:
- CiteScore measures citations received over a 3-year period divided by the number of items published in Scopus over that period.
- The Impact Factor from Journal Citation Reports measures average citations over a 2-year period.
- SNIP accounts for differences in citation behavior between fields using a source-normalization approach.
- SJR measures influence based on citations weighted by the prestige of the citing journals over 3 years.
- Impact per Publication divides citations received in a year by the number of publications in the prior 3 years.
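All of these journal-level indicators share the same basic shape: citations received in a census year divided by the number of items published in a preceding window. A minimal Python sketch, with made-up citation and item counts for a hypothetical journal, shows how the window length is the main thing that varies:

```python
def window_metric(citations_in_year, items_in_window):
    """Generic citations-per-item metric over a publication window.

    CiteScore and Impact per Publication divide citations received in one
    year by the number of items published in the preceding 3-year window;
    the classic Impact Factor uses a 2-year window. Real providers also
    differ in which document types count as "citable items".
    """
    if items_in_window == 0:
        raise ValueError("window contains no published items")
    return citations_in_year / items_in_window

# Hypothetical counts: 450 citations in 2024 to items published 2021-2023,
# with 300 items published in that window (CiteScore-style, 3-year window).
citescore_like = window_metric(citations_in_year=450, items_in_window=300)

# Impact-Factor-style: 280 citations in 2024 to items published 2022-2023,
# with 200 citable items in that 2-year window.
if_like = window_metric(citations_in_year=280, items_in_window=200)

print(citescore_like, if_like)  # 1.5 1.4
```

The normalization step is what separates these simple ratios from SNIP and SJR, which additionally correct for field citation density and citing-journal prestige, respectively.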
Open Access (OA) is a system that provides access to knowledge resources free of cost and free of most other restrictions. This PPT answers the questions of what, why, types, and benefits, and also describes Creative Commons licensing, the concept of predatory journals, open access journals, and Sherpa RoMEO.
This document provides an overview of various bibliometric products and metrics that can be used to measure research impact, including journal impact factor, h-index, citation counts, and journal/article ranking tools from Journal Citation Reports, Scopus, and Google Scholar. It discusses the purpose and calculations of metrics like impact factor, eigenfactor, and source normalized impact per paper (SNIP). It also covers limitations of bibliometrics and recommends using multiple metrics and tools to evaluate research. Exercises are provided to help understand how to analyze journals, articles, and individual researchers using different bibliometric resources.
The document discusses publication ethics, including defining authorship, avoiding plagiarism and fabrication, managing conflicts of interest, and addressing misconduct. It introduces guidelines from organizations like COPE and WAME that provide best practices for publication ethics. Adhering to ethical standards is important to ensure high-quality scientific research and public trust in findings. Journals have processes to identify and handle cases of unethical behavior.
Redundant, duplicate, and repetitive publications are among the most important concerns in scientific research and literature writing. Redundancy distorts the scientific record and carries sanctions and consequences. Defining the issue is challenging because of the many ways in which one can slice, reformat, or reproduce material from an already published study. The issue also goes beyond the duplication of a single study, because the same or similar data can be published in the early, middle, and later stages of an ongoing study, which may damage the scientific literature base. As with slicing a cake, there are many ways of representing a study or a set of data: a cake can be cut into squares, triangles, rounds, or layers. Which of these is the best way to slice it? That may be the wrong question. The point is that the cake in question, the data set or the study findings, should not be sliced at all. Instead, the study should be presented as a whole to the readership, to preserve the integrity of science and because of the impact the information may have on patients affected by the published findings. Redundant, duplicate, or repetitive publication occurs when two or more studies, data sets, or publications are represented in either electronic or print media. The publications can overlap partially or completely, such that a similar portion, major component, or complete representation of a previously, simultaneously, or subsequently published study is duplicated.
SALAMI SLICING: Splitting research that would form one meaningful paper into several different papers is known as salami publication or salami slicing. Unlike duplicate publication, which involves reporting the exact same data in two or more publications, salami slicing involves breaking up or segmenting a large study into two or more publications; these segments are called slices of a study. As a general rule, if the slices of a broken-up study share the same hypotheses, population, and methods, publishing them separately is not acceptable, and the same slice should never be published more than once. According to the United States Office of Research Integrity (ORI), salami slicing can distort the literature by leading unsuspecting readers to believe that the data presented in each salami slice (journal article) are derived from a different subject sample. This practice not only skews the scientific database but also wastes the time of readers, editors, and peer reviewers, who must handle each paper separately.
Elsevier's Scopus.com upgraded the Journal Analyzer with Source Normalized Impact per Paper (SNIP), which measures a source's contextual impact, and SCImago Journal Rank (SJR), which measures the scientific prestige of scholarly sources.
These indicators are applied to all journals indexed by Scopus and are freely available to both subscribers and non-subscribers at scopus.com and www.journalmetrics.com.
This document discusses redundant publication in research. It defines redundant publication as publishing the same or similar research findings in multiple papers without proper attribution. There are three main types: duplicate publication, salami slicing, and overlapping publications. Redundant publication undermines research integrity and credibility. It can damage researchers' reputations and lead to retractions or penalties. The document provides guidelines from organizations like COPE and ICMJE for ethical publishing practices to avoid redundant publication.
This document provides information about indexing databases and citation databases. It defines a database as a collection of organized information that can be easily accessed and updated. Indexing databases are described as optimizing database performance by minimizing disk accesses during queries through the use of indexes. The document outlines different types of indexing, including clustered, non-clustered, and multi-level indexing. It then defines citation databases as collections of referenced academic works that can be used to evaluate publications by counting citations. The benefits of using citation databases over general search engines are discussed.
The document discusses author level metrics and how they are used to measure the impact of individual authors. It defines author level metrics as citation metrics that measure the bibliometric impact of individual researchers. It also discusses different types of author level metrics, including article-level metrics, journal-level metrics, h-index, i10-index, g-index, and altmetrics. Finally, it discusses tools that can be used to measure author metrics, such as Google Scholar, Web of Science, Scopus, and Publish or Perish.
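The author-level indices named above (h-index, i10-index, g-index) are all simple functions of an author's sorted per-paper citation counts. A short illustrative sketch, using a made-up citation list, shows how each is computed:

```python
def h_index(citations):
    """Largest h such that the author has h papers with >= h citations each."""
    cs = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cs, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def i10_index(citations):
    """Number of papers with at least 10 citations (Google Scholar's i10)."""
    return sum(1 for c in citations if c >= 10)

def g_index(citations):
    """Largest g such that the top g papers together have >= g^2 citations.

    Capped here at the number of papers, one common convention.
    """
    cs = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cs, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

pubs = [10, 8, 5, 4, 3]  # made-up citation counts for one author's papers
print(h_index(pubs), i10_index(pubs), g_index(pubs))  # 4 1 5
```

The example makes the contrast concrete: the g-index (5) exceeds the h-index (4) because it credits the cumulative citations of the top papers, while i10 simply counts papers past a fixed threshold.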
In academia, the pressure to publish is high and the competition intense. This can lead authors to follow unethical publication practices, such as salami slicing, duplicate publication, and simultaneous submission. This slide deck explains these malpractices and shares tips on how authors can avoid them.
Ethical research and publication practices are essential for honest scholarly and scientific research. Most journals today are keenly aware of this: they publish policies on these issues and expect authors to “be aware of, and comply with, best practice in publication ethics”. This article discusses two widespread and related publishing practices that are considered unethical—duplicate publication and simultaneous submission. It draws on definitive international publication ethics guidelines.
This document discusses the concept of "salami publication" or "salami slicing" which refers to splitting up the results of a single research study into multiple smaller publications. This allows researchers to artificially inflate their publication counts but it is considered questionable and unethical. Key points made include that salami publication misleads readers and reviewers by counting the same data multiple times, distorts the results of meta-analyses, and gives undeserved credit and benefits to authors. While small publishable units can help early career researchers, relying too heavily on this strategy is not looked upon favorably for career advancement. The document also provides recommendations for transparently reporting additional findings from a previously published study.
Sherpa provides two tools - SHERPA/RoMEO and SHERPA/FACT - to help researchers comply with open access mandates from their funders. SHERPA/RoMEO allows users to search publisher and journal policies on copyright and self-archiving. SHERPA/FACT combines RoMEO and JULIET data to indicate a journal's open access compliance based on the user's selected funder and publication stage. Both tools aim to help unlock the potential of research by facilitating open access.
This document discusses publication ethics and outlines guidelines for ethical publishing. It begins by defining publication and the key parties involved - authors, editors, peer reviewers, and publishers. Authors should contribute significantly to the work and properly attribute contributions from others. Unethical practices include guest and gift authorships, plagiarism, and research fraud through fabrication or falsification of data. Conflicts of interest should be disclosed. Predatory journals are identified as having questionable standards and practices aimed at profit rather than quality. UGC works to identify and remove predatory journals from their listings to help researchers identify legitimate publication options. Overall the document provides guidance on ethical authorship, reviewing, editing and publishing of research.
This document provides information about open access publishing. It defines open access as providing online access to scientific information that is free of charge and free of most copyright restrictions. Some key benefits of open access mentioned are increased visibility, promotion of research efficiency, and public availability of publicly funded research results. The document discusses various approaches to open access such as the green road of self-archiving and the gold road of publishing in open access journals. It also outlines some barriers to open access like financial costs and legal issues regarding intellectual property rights. Overall, the document provides a high-level overview of open access publishing models and initiatives.
Publication ethics: Definitions, Introduction and Importance (Vasantha Raju N)
The document provides an overview of publication ethics and discusses its importance. It defines publication ethics as the principles and standards associated with publishing scientific research results. This includes giving proper credit and authorship, avoiding plagiarism and duplicate publication, managing conflicts of interest, and not falsifying or fabricating research data. The document highlights various unethical practices like plagiarism, gift authorship, and predatory journals. It also discusses guidelines from organizations like COPE, ICMJE and reporting standards to promote ethical research practices.
Selective reporting and misrepresentation of data can lead to false conclusions and flawed decision making. Selective reporting involves intentionally presenting only information that supports a viewpoint while ignoring contradictory information. Misrepresentation distorts data to create a false impression. Both can occur through publication bias, outcome reporting bias, data dredging, spin, selective citation, data falsification, cherry-picking, manipulation, misinterpretation, and omission. Researchers should ensure transparency, present all relevant data, use appropriate statistical methods, verify sources, obtain independent review, follow ethical standards, and acknowledge limitations to avoid these issues.
The document discusses the history and development of open access initiatives for scholarly publications. It notes several important declarations from 2002-2005 that supported open access, including making publications freely available online. It describes how open access initiatives aim to unite organizations in supporting free and unrestricted access to peer-reviewed research. The document also discusses definitions of open access, copyright considerations, launching open access journals, and the Budapest Open Access Initiative of 2002.
This document summarizes a virtual workshop on thesis writing and publication organized by Lavender Literacy Club and Cape Comorin Trust in collaboration with other institutions. It discusses research metrics, which are quantitative measures used to assess scholarly research outputs and impacts. Various metrics are explained, including journal metrics like impact factor, author metrics like h-index, and alternative metrics. The importance of research profiles, publishing ethics, and increasing research visibility and impacts are also covered.
Predatory Publications and Software Tools for Identification (Saptarshi Ghosh)
Journals that publish work without proper peer review, and that charge scholars sometimes-huge fees to submit, should not be allowed to share space with legitimate journals and publishers, whether open access or not. These journals and publishers cheapen intellectual work by misleading scholars, preying particularly on early career researchers trying to gain an edge. The credibility of scholars duped into publishing in these journals can be seriously damaged. It is important that, as a scholarly community, we help protect each other from being taken advantage of in this way.
Violation of publication ethics can take several forms, including data manipulation, duplicate publication, simultaneous submission, plagiarism, and salami slicing. Upholding publication ethics is important to establish the integrity and credibility of scholarly research. It is the responsibility of authors to avoid fabricating or manipulating data, plagiarizing, submitting manuscripts to multiple journals simultaneously, or including guest authors who did not meaningfully contribute. Organizations like COPE and ICMJE provide guidelines to help authors, editors, and reviewers maintain high standards of ethical publication practices.
I explain plainly what salami slicing is: the practice of fragmenting a single piece of research into as many publications as possible. Salami publishing and its hazards.
This document provides an overview of citation indexing and describes some key tools and concepts. Citation indexing traces the use of ideas across research by identifying papers that cite older publications. The Institute for Scientific Information pioneered citation indexing databases like the Web of Science. While comprehensive, the WoS has limitations in coverage of non-English language and developing world journals. The Indian Citation Index was created to index more Indian publications and support research evaluation in India. Impact factors are calculated based on citations in the Journal Citation Reports to measure journal influence.
Intellectual honesty and research integrity, by abu saleh (AbuSaleh51)
The document discusses intellectual honesty and research integrity. It defines intellectual honesty as an unbiased approach to problem solving that avoids omitting relevant facts or twisting information. Signs of intellectual honesty include acknowledging weaknesses in arguments and being willing to acknowledge alternative viewpoints and mistakes. Research integrity means conducting research through sound and honest methods so that others can trust the findings. It involves principles like honesty, reliability, impartiality, and care for participants. The document emphasizes transparency, open communication, and high ethical standards in all aspects of the research process.
In this presentation, the speaker covers the following topics:
What is scientific conduct?
What do we mean by ethics in research? (scientific temperament)
What is ethical behavior in research?
How to practice ethics in publication?
On research metrics: from author-level metrics to journal-level metrics
Research profile digital platforms.
The document discusses various citation databases and research metrics used to evaluate scholarly publications and researchers. It describes major citation databases like Web of Science, Scopus, and Google Scholar that compile citations from bibliographies. It also explains common research metrics like the Impact Factor, h-index, g-index, i10-index, CiteScore, SJR, and SNIP used to measure the influence and impact of publications and researchers. These metrics are calculated based on factors like the number of citations a publication or researcher receives.
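A citation database of the kind described here is, at its simplest, a set of per-article records that journal-level metrics are aggregated from. As a hedged illustration with invented records (the journal name and counts are made up), this is how a two-year Impact-Factor-style ratio would be computed from such records, assuming each record already carries the citations the article received in the census year:

```python
# Toy citation-database records: (journal, publication_year, citations_received_in_2024)
articles = [
    ("J. Foo", 2022, 3),
    ("J. Foo", 2023, 5),
    ("J. Foo", 2023, 0),
    ("J. Foo", 2021, 7),  # outside the 2-year window, so ignored below
]

def two_year_if(records, journal, census_year):
    """Citations in the census year to a journal's items from the two
    prior years, divided by the number of those items (Impact-Factor-style)."""
    window = (census_year - 2, census_year - 1)
    items = [c for j, y, c in records if j == journal and y in window]
    return sum(items) / len(items) if items else 0.0

print(two_year_if(articles, "J. Foo", 2024))  # 8 citations / 3 items ~ 2.67
```

Real databases differ in coverage, so the same journal can receive different values from Web of Science, Scopus, and Google Scholar; that is one reason the document recommends consulting more than one source.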
Els lc metrics_reference_cards_v1.0_slides_2016 (Jenny Delasalle)
Each slide covers one of a selection of metrics, with definitions and information about how it might be used. This is just part of a suite of resources from https://libraryconnect.elsevier.com/metrics
This document provides information about indexing databases and citation databases. It defines a database as a collection of organized information that can be easily accessed and updated. Indexing databases are described as optimizing database performance by minimizing disk accesses during queries through the use of indexes. The document outlines different types of indexing, including clustered, non-clustered, and multi-level indexing. It then defines citation databases as collections of referenced academic works that can be used to evaluate publications by counting citations. The benefits of using citation databases over general search engines are discussed.
The document discusses author level metrics and how they are used to measure the impact of individual authors. It defines author level metrics as citation metrics that measure the bibliometric impact of individual researchers. It also discusses different types of author level metrics, including article-level metrics, journal-level metrics, h-index, i10-index, g-index, and altmetrics. Finally, it discusses tools that can be used to measure author metrics, such as Google Scholar, Web of Science, Scopus, and Publish or Perish.
In academia, the pressure to publish is high and the competition intense. This can lead authors to follow unethical publication practices, such as salami slicing, duplicate publication, and simultaneous submission. This slide deck explains these malpractices and shares tips on how authors can avoid them.
Ethical research and publication practices are essential for honest scholarly and scientific research. Most journals today are keenly aware of this: they publish policies on these issues and expect authors to “be aware of, and comply with, best practice in publication ethics”.This article discusses two widespread and related publishing practices that are considered unethical—duplicate publication and simultaneous submission. It draws on definitive international publication ethics guidelines.
This document discusses the concept of "salami publication" or "salami slicing" which refers to splitting up the results of a single research study into multiple smaller publications. This allows researchers to artificially inflate their publication counts but it is considered questionable and unethical. Key points made include that salami publication misleads readers and reviewers by counting the same data multiple times, distorts the results of meta-analyses, and gives undeserved credit and benefits to authors. While small publishable units can help early career researchers, relying too heavily on this strategy is not looked upon favorably for career advancement. The document also provides recommendations for transparently reporting additional findings from a previously published study.
Sherpa provides two tools - SHERPA/RoMEO and SHERPA/FACT - to help researchers comply with open access mandates from their funders. SHERPA/RoMEO allows users to search publisher and journal policies on copyright and self-archiving. SHERPA/FACT combines RoMEO and JULIET data to indicate a journal's open access compliance based on the user's selected funder and publication stage. Both tools aim to help unlock the potential of research by facilitating open access.
This document discusses publication ethics and outlines guidelines for ethical publishing. It begins by defining publication and the key parties involved - authors, editors, peer reviewers, and publishers. Authors should contribute significantly to the work and properly attribute contributions from others. Unethical practices include guest and gift authorships, plagiarism, and research fraud through fabrication or falsification of data. Conflicts of interest should be disclosed. Predatory journals are identified as having questionable standards and practices aimed at profit rather than quality. UGC works to identify and remove predatory journals from their listings to help researchers identify legitimate publication options. Overall the document provides guidance on ethical authorship, reviewing, editing and publishing of research.
This document provides information about open access publishing. It defines open access as providing online access to scientific information that is free of charge and free of most copyright restrictions. Some key benefits of open access mentioned are increased visibility, promotion of research efficiency, and public availability of publicly funded research results. The document discusses various approaches to open access such as the green road of self-archiving and the gold road of publishing in open access journals. It also outlines some barriers to open access like financial costs and legal issues regarding intellectual property rights. Overall, the document provides a high-level overview of open access publishing models and initiatives.
Publication ethics: Definitions, Introduction and ImportanceVasantha Raju N
The document provides an overview of publication ethics and discusses its importance. It defines publication ethics as the principles and standards associated with publishing scientific research results. This includes giving proper credit and authorship, avoiding plagiarism and duplicate publication, managing conflicts of interest, and not falsifying or fabricating research data. The document highlights various unethical practices like plagiarism, gift authorship, and predatory journals. It also discusses guidelines from organizations like COPE, ICMJE and reporting standards to promote ethical research practices.
Selective reporting and misrepresentation of data can lead to false conclusions and flawed decision making. Selective reporting involves intentionally presenting only information that supports a viewpoint while ignoring contradictory information. Misrepresentation distorts data to create a false impression. Both can occur through publication bias, outcome reporting bias, data dredging, spin, selective citation, data falsification, cherry-picking, manipulation, misinterpretation, and omission. Researchers should ensure transparency, present all relevant data, use appropriate statistical methods, verify sources, obtain independent review, follow ethical standards, and acknowledge limitations to avoid these issues.
The document discusses the history and development of open access initiatives for scholarly publications. It notes several important declarations from 2002-2005 that supported open access, including making publications freely available online. It describes how open access initiatives aim to unite organizations in supporting free and unrestricted access to peer-reviewed research. The document also discusses definitions of open access, copyright considerations, launching open access journals, and the Budapest Open Access Initiative of 2002.
This document summarizes a virtual workshop on thesis writing and publication organized by Lavender Literacy Club and Cape Comorin Trust in collaboration with other institutions. It discusses research metrics, which are quantitative measures used to assess scholarly research outputs and impacts. Various metrics are explained, including journal metrics like impact factor, author metrics like h-index, and alternative metrics. The importance of research profiles, publishing ethics, and increasing research visibility and impacts are also covered.
Predatory Publications and Software Tools for IdentificationSaptarshi Ghosh
Journals that publish work without proper peer review, and which charge scholars sometimes huge fees to submit, should not be allowed to share space with legitimate journals and publishers, whether open access or not. These journals and publishers cheapen intellectual work by misleading scholars, preying particularly on early-career researchers trying to gain an edge. The credibility of scholars duped into publishing in these journals can be seriously damaged by doing so. It is important that, as a scholarly community, we help to protect each other from being taken advantage of in this way.
Violation of publication ethics can take several forms, including data manipulation, duplicate publication, simultaneous submission, plagiarism, and salami slicing. Upholding publication ethics is important to establish the integrity and credibility of scholarly research. It is the responsibility of authors to avoid fabricating or manipulating data, plagiarizing, submitting manuscripts to multiple journals simultaneously, or including guest authors who did not meaningfully contribute. Organizations like COPE and ICMJE provide guidelines to help authors, editors, and reviewers maintain high standards of ethical publication practices.
I explain plainly what salami slicing is: the practice of fragmenting a single piece of research into as many publications as possible. Salami publishing and its hazards.
This document provides an overview of citation indexing and describes some key tools and concepts. Citation indexing traces the use of ideas across research by identifying papers that cite older publications. The Institute for Scientific Information pioneered citation indexing databases like the Web of Science. While comprehensive, the WoS has limitations in coverage of non-English language and developing world journals. The Indian Citation Index was created to index more Indian publications and support research evaluation in India. Impact factors are calculated based on citations in the Journal Citation Reports to measure journal influence.
Intellectual honesty and research integrity abu salehAbuSaleh51
The document discusses intellectual honesty and research integrity. It defines intellectual honesty as an unbiased approach to problem solving that avoids omitting relevant facts or twisting information. Signs of intellectual honesty include acknowledging weaknesses in arguments and being willing to acknowledge alternative viewpoints and mistakes. Research integrity means conducting research through sound and honest methods so that others can trust the findings. It involves principles like honesty, reliability, impartiality, and care for participants. The document emphasizes transparency, open communication, and high ethical standards in all aspects of the research process.
In this presentation, the speaker covered the following topics:
What is scientific conduct?
What do we mean by ethics in research? Scientific temperament.
What is ethical behavior in research?
How to practice ethics in publication?
On research metrics: from author-level metrics to journal-level metrics.
Research profile digital platforms.
The document discusses various citation databases and research metrics used to evaluate scholarly publications and researchers. It describes major citation databases like Web of Science, Scopus, and Google Scholar that compile citations from bibliographies. It also explains common research metrics like the Impact Factor, h-index, g-index, i10 Index, Cite Score, SJR, and SNIP used to measure the influence and impact of publications and researchers. These metrics are calculated based on factors like the number of citations a publication or researcher receives.
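Several of the author-level metrics listed above are simple functions of a raw citation list. As one minimal sketch (real services such as Scopus or Google Scholar apply their own data cleaning on top of this), the h-index can be computed as follows:

```python
def h_index(citations):
    """Return the h-index: the largest h such that at least h papers
    have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Five papers with these citation counts give an h-index of 4,
# because 4 papers have at least 4 citations each.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

The same citation list feeds the g-index and i10-index; only the counting rule changes.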
Els lc metrics_reference_cards_v1.0_slides_2016Jenny Delasalle
Each slide covers one of a selection of metrics, with definitions and information about how it might be used. This is just part of a suite of resources from https://libraryconnect.elsevier.com/metrics
The impact factor (IF) is a metric that measures the average number of citations received in a given year by articles published in a journal over the previous two years. Impact factors are calculated annually and published in the Journal Citation Reports to indicate the relative significance and influence of journals within their fields. While impact factors help identify influential research and select publication targets, they should not be the sole consideration and have limitations due to variability in disciplines, editorial policies, and self-citations. Alternatives to the IF include the h-index and Eigenfactor, which aim to provide more robust assessments of research influence and output.
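The two-year calculation described above reduces to a simple ratio. A minimal sketch, using made-up numbers for illustration:

```python
def impact_factor(citations_this_year, citable_items_prev_two_years):
    """Two-year impact factor: citations received this year to items
    published in the previous two years, divided by the number of
    citable items published in those two years."""
    return citations_this_year / citable_items_prev_two_years

# Illustrative numbers: 600 citations in 2020 to the 200 citable
# items a journal published in 2018-2019.
print(impact_factor(600, 200))  # 3.0
```

The official figures come from the Journal Citation Reports; the sketch only shows the arithmetic, not Clarivate's rules for what counts as a citable item.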
Journal ranking metrices new perspective in journal performance managementAboul Ella Hassanien
The document discusses various metrics for evaluating journals and research, including impact factor, immediacy index, and the h-index. It provides definitions and explanations of how these metrics are calculated. For example, it explains that impact factor is calculated by dividing the number of citations in the current year by the total number of articles published in the previous two years. It also discusses some limitations and criticisms of solely relying on impact factor for evaluation.
Paradoxical betweenness in Academic endeavors and research metricsSaptarshi Ghosh
"Publish or perish" is an aphorism describing the pressure to publish academic work in order to succeed in an academic career. ... The pressure to publish has been cited as a cause of poor work being submitted to academic journals.
Els lc metrics_reference_cards_v2.0_slides_dec2016Jenny Delasalle
Version 2 includes the new Citescore metric. I worked on the research behind these cards, but am not the copyright owner. Originals provided at https://libraryconnect.elsevier.com/articles/librarian-quick-reference-cards-research-impact-metrics
Quick reference cards for research impact metricsLibrary_Connect
When meeting with students, researchers, deans or department heads, the metrics on these quick reference cards can serve as a jumping off point in conversations about where to publish, adding to researcher profiles, enriching promotion and tenure files, and benchmarking research outputs. The cards were co-developed by librarian Jenny Delasalle and Elsevier's Library Connect program. Learn more and download poster versions as well at: https://libraryconnect.elsevier.com/articles/librarian-quick-reference-cards-research-impact-metrics
This document introduces two new journal metrics, SJR and SNIP, that have been endorsed by Elsevier's Scopus database. SJR measures journal prestige by weighting citations based on the status and reputation of the citing journal. SNIP accounts for differences in citation potential across research fields by normalizing a journal's raw citation impact based on the average citations in its subject field. The document compares the two new metrics to traditional journal impact factors and discusses their potential uses for publishers, librarians, and researchers to evaluate journal performance and research impact.
This document discusses bibliometric tools that can be used to analyze scholarly literature and research impact. It explains that bibliometrics involves the quantitative analysis of bibliographic items like citations, authors, and keywords. Individual researchers and institutions can use bibliometric tools to evaluate research impact, identify collaborators, analyze journal metrics, and inform hiring and funding decisions. It provides examples of bibliometric databases like Web of Science, Scopus, and Google Scholar that contain citation data and metrics. Finally, it notes some limitations of bibliometric indicators and the need to consider citation behaviors and contexts across disciplines.
The document provides information about the Journal Citation Reports (JCR) from Clarivate. It discusses what the JCR is, how it can be used by publishers, librarians, researchers and data scientists, and some of the metrics it includes like impact factor, immediacy index, and cited half-life. It also summarizes some strategies for publishing, including aiming for high ranked journals, journals that are cited for a long time or quickly, and internationally recognized or government accredited journals. Key points are that context is important when using metrics, and the JCR can help evaluate journals and find related publications.
The document discusses the Journal Impact Factor (JIF) and the new Journal Citation Indicator. It explains that the JIF is a ratio that divides a journal's received citations by the number of articles it published in the past two years, providing an approximation of the citation rate. The new Journal Citation Indicator uses a field-normalized calculation of citations over a three-year period to provide an easier to interpret measure that complements the JIF.
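The field normalization described above can be illustrated with a toy ratio. This is a sketch of the general idea only, not Clarivate's exact Journal Citation Indicator formula:

```python
def field_normalized_impact(journal_mean_cites, field_mean_cites):
    """Ratio of a journal's average citations per paper to the
    average for its subject field; 1.0 means 'field average'.
    Illustrative sketch of field normalization, not the exact
    Journal Citation Indicator calculation."""
    return journal_mean_cites / field_mean_cites

# A journal averaging 6 citations per paper, in a field that
# averages 4, scores 1.5x the field norm.
print(field_normalized_impact(6.0, 4.0))  # 1.5
```

Normalizing by the field average is what makes such scores comparable across disciplines with very different citation cultures.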
Modified CiteScore metric for reducing the effect of self-citationsTELKOMNIKA JOURNAL
Elsevier B.V. launched a scholarly metric called CiteScore (CS) on December 8, 2016. Until then, the journal impact factor (JIF), owned by Clarivate Analytics (Thomson Reuters), was the only trusted metric for journal evaluation. As noted by Teixeira da Silva & Memon (2017), CS offers some observed advantages over JIF, and its potential as a viable metric is still emerging. The paper briefly introduces a variant of CiteScore that can be used to quantify the impact of researchers and their institutions. The ultimate aim is to reduce the numerical effect of self-citations (SC) in academic publishing; the reduction is designed to discourage SC without eliminating it. The reasons for the adopted methodology are discussed extensively. The proposed modified CiteScore metric is simple, transparent, and constructed to ensure integrity in academic publication. The results showed that the proposed modified CiteScore is a better option than the traditional CiteScore and hence can be applied in impact determination, the ranking of authors and their institutions, and the evaluation of scientists for grant awards. The approach used in this paper is new in two ways: first, a metric similar to journal ranking is proposed for ranking authors and their institutions; second, disproportionate scores are awarded to different sources of citations to reduce perceived dishonesty in academic publications. In conclusion, this research is one of very few to report the effect of SC on CiteScore; hitherto, studies of the effect of SC have focused on the journal impact factor (JIF).
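The abstract above does not state the paper's exact weighting scheme, but the general idea of down-weighting rather than removing self-citations can be sketched as follows. The 0.5 weight is an invented illustrative value, not the weighting used in the cited paper:

```python
def weighted_citation_score(self_cites, external_cites, documents,
                            self_weight=0.5):
    """CiteScore-style ratio in which self-citations are
    down-weighted rather than removed. The 0.5 default weight is
    an invented illustrative value, not the one from the paper."""
    return (external_cites + self_weight * self_cites) / documents

# 300 external citations plus 100 self-citations over 200 documents:
# (300 + 0.5 * 100) / 200 = 1.75
print(weighted_citation_score(100, 300, 200))  # 1.75
```

Because self-citations still count for something, the metric discourages rather than punishes them, which matches the design goal stated in the abstract.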
Journal and author impact measures Assessing your impact (h-index and beyond)Aboul Ella Hassanien
This seminar was presented at the Faculty of Computers, Monofiya University, on Saturday 12 Dec. 2015. It was aimed at researchers and graduate students at Egyptian universities, to raise awareness of the importance of publication and scientific research: how a researcher's impact is measured and calculated, how journal impact is calculated, and how to calculate newer measures that differ from journal impact factors. It also offered tips for postgraduate students on increasing citations of their published research papers and on using open access publishing, including a discussion of its advantages and disadvantages.
Sole reliance on citation data provides an incomplete understanding of research. Although citation analysis may be simple to apply, it should be used with caution to avoid it coming under disrepute through uncritical use. Ideally, citation analysis should be performed to supplement, not replace, a robust system of expert review to determine the actual quality and impact of published research.
This document discusses factors to consider when choosing where to publish research, including journal quality metrics and indexing. It covers:
1. Tools to help find an appropriate journal, including JournalGuide, JournalFinder, and others that search databases like Web of Science, Scopus, and Springer.
2. Common journal quality metrics like Journal Impact Factor (JIF), CiteScore, SCImago Journal Rank (SJR), Eigenfactor, and SNIP, and when each may be most useful.
3. Author impact metrics like the h-index, m-index, and i10-index to evaluate individual researcher impact over time and career stage.
A tool for librarians to select metrics across the research lifecycleLibrary_Connect
These slides introduce a range of research impact metrics. They were presented at the ER&L Conference (April 2017) by Chris James, Product Manager Research Metrics, Elsevier.
Similar to Impact Factor Journals as per JCR, SNIP, SJR, IPP, CiteScore (20)
Human Capital and Growth For Development of Knowledge EconomySaptarshi Ghosh
The document discusses modern economic growth over the last millennium. It notes that world GDP grew 300-fold from 1000-2000, driven by population growth of 22-fold and per capita income growth of 13-fold. Growth rates increased substantially after 1820, with per capita income growing 8-fold and life expectancy rising from 24 to 66 years. While economic models have tried to explain changing growth rates, there is no consensus. The document also examines variability in growth rates over time and between countries, as well as convergence among wealthy nations.
Open and Distance Learning (ODL) can help achieve Sustainable Development Goals (SDGs) by promoting lifelong education for all. Education is critical to building knowledgeable societies and addressing societal issues like poverty and hunger. ODL allows education to reach large audiences anywhere and anytime. When combined with holistic and transformative pedagogy, ODL can develop skills like critical thinking needed to make progress on each SDG by raising awareness and empowering problem-solving. This includes increasing understanding of issues like health, gender equality, water sanitation, and more. ODL thus presents a useful tool for disseminating information to facilitate adoption of sustainable development practices.
The document discusses tools for online teaching. It begins by outlining 10 principles for effective online teaching. It then discusses learning management systems (LMS) which are used to manage course content, track students, and include administrative and reporting features. It also discusses multimedia authoring tools which can be used to create interactive content including page-based tools which organize elements as pages and time-based tools which organize along a timeline. Additional tools discussed include painting/drawing software, image editing, OCR, animation/movie software, and communication tools like email and online forums.
This document discusses software tools that can help with effective academic research, including writing tools like Microsoft Word and Google Docs, referencing tools like Zotero and Mendeley for managing citations, research tools such as REF-N-WRITE and Microsoft Excel, grammar checkers like Grammarly and ProWritingAid, open-source statistical tools like JASP, and network analysis software such as Pajek. The document provides an overview of different categories of software and lists examples of specific programs under each category to aid researchers in their work.
Writing Tools and Software, Referencing Tools and Reference Management Software, Research Tools and Software, Grammar Checkers and Sentence Correction Tools.
Selective Reporting and Misrepresentation of DataSaptarshi Ghosh
The document discusses various issues related to scientific misconduct including fabrication, falsification, and plagiarism. It notes that these practices undermine scientific progress and can cause harm. It defines fabrication as making up data/results and falsification as improperly changing or misreporting data. Plagiarism involves using ideas or words without proper attribution. Maintaining integrity in research is important for objective and reliable science.
Finding the Right Journal at the Right Time for the Right WorkSaptarshi Ghosh
JournalFinder helps you find journals that could be best suited for publishing your scientific article. Please also consult the journal’s Aims and Scope for further guidance. Ultimately, the Editor will decide on how well your article matches the journal.
The phrase new normal is an oxymoron typically used to indicate a life event that is out of the ordinary and has a long-lasting or permanent impact on someone’s day-to-day routine.
But using the phrase to describe efforts to fight a global pandemic makes me uncomfortable, because it implies a sense of permanence.
In her foreword to John Putzier's Weirdos in the Workplace: The New Normal—Thriving in the Age of the Individual (2004), Libby Sartain claims that the phrase "the new normal" is a recent coinage.
Great wits are sure to madness near allied
And thin partitions do their bounds divide.
(John Dryden, 1681)
There is no great genius without a tincture of madness. (Seneca, 1st Century A.D.)
ALTMETRICS : A HASTY PEEP INTO NEW SCHOLARLY MEASUREMENTSaptarshi Ghosh
The term ‘Altmetrics’ was proposed by Jason Priem, a PhD student at the School of Information and Library Science at University of North Carolina, Chapel Hill through a tweet. [https://twitter.com/asnpriem/status/25844968813].
Altmetrics is a combination of two words, 'alternative' and 'metrics', in which the 'alt-' part refers to alternative types of metrics (alternative to traditional metrics such as citation analysis, impact factor, and download and usage data).
Altmetrics is the creation and study of new metrics based on the Social Web for analyzing, and informing scholarship (http://altmetrics.org/about/). It is the study of new indicators for the analysis of academic activity based on Web 2.0.
Management of change means staying relevant across time and space. This presentation elaborates on the existence of information professionals beyond their territories, since survival of the fittest rests on greater information diffusion and dissemination for the collective wisdom of a society's stakeholders.
Will the Digital library sustain as a Social Capital for dissemination of Inf...Saptarshi Ghosh
Abstract
This paper deals with the relationship between the digital library and social development. The core of the digital library, which rests on strong social bonding and a participatory approach, is reflected in this write-up. Today, global prosperity and individual productivity depend upon the ability to learn constantly, adapt to change readily, and evaluate information critically. In this information-rich world, we must find ways to transform information into knowledge. So, how can we ensure that our communities can access the resources and services that we have available? How can we ensure that we are responsive to, and representative of, our communities' actual, as opposed to perceived, needs? We will look at various ways that library services can partner with their communities to bring about better outcomes for all. The digital library can bridge these gaps: it can become the people's access point to the information repository and a motivator of sustainable development.
Information System Design in Context of Social InformaticsSaptarshi Ghosh
Informatics is a branch of information engineering. It involves the practice of information processing and the engineering of information systems, and as an academic field it is an applied form of information science.
The field considers the interaction between humans and information alongside the construction of interfaces, organisations, technologies and systems.
“Organization Behaviour is concerned with the study of what people do in an organization and how that behaviour affects the performance of the organization.” (Robbins: 1989)
Library intelligence is the collection, analysis, and synthesis of data; time devoted to reflection and the development of insight; and the willingness and ability to change. Library intelligence makes it easier for library staff to focus on improving their digital literacy fluency.
Information Ecology: Legacy Practices with changing dynamicsSaptarshi Ghosh
“The study of the inter-relationships between people, enterprises, technologies and the information environment” -The International Encyclopedia of Information and Library Science
Impact of Social Networking /Web 2.0 features in Library Management SoftwareSaptarshi Ghosh
Web 2.0 describes World Wide Web websites that emphasize user-generated content, usability (ease of use, even by non-experts), and interoperability (meaning that a website can work well with other products, systems and devices) for end users. The term was popularized by Tim O'Reilly. Social networking sites like Facebook and Twitter are a result of Web 2.0.
Optimistic interpretations ignore the social relations that influence the social distribution and impact of the new ICT. The new digital technologies function as commodities, and their distribution – at least initially – tends to follow existing divisions of class, race and gender. Rather than assisting with equalization, the new information and communication technologies tend to reinforce social inequality and lead to the formation of socially and technologically disadvantaged and excluded individuals (Golding, 1996; Zappala, 2000).
COLLECTIVES OR SUBJUGATION: POLITICS OF MISINFORMATION Saptarshi Ghosh
If you assume that there is no hope, you guarantee that there will be no hope. If you assume that there is an instinct for freedom, that there are opportunities to change things, then there is a possibility that you can contribute to making a better world. That's your choice (Chomsky 2002, p.6).
This document provides an overview of wound healing, its functions, stages, mechanisms, factors affecting it, and complications.
A wound is a break in the integrity of the skin or tissues, which may be associated with disruption of the structure and function.
Healing is the body’s response to injury in an attempt to restore normal structure and functions.
Healing can occur in two ways: Regeneration and Repair
There are 4 phases of wound healing: hemostasis, inflammation, proliferation, and remodeling. This document also describes the mechanism of wound healing. Factors that affect healing include infection, uncontrolled diabetes, poor nutrition, age, anemia, the presence of foreign bodies, etc.
Complications of wound healing include infection, hyperpigmentation of the scar, contractures, and keloid formation.
How to Make a Field Mandatory in Odoo 17Celine George
In Odoo, making a field required can be done through both Python code and XML views. When you set the required attribute to True in Python code, it makes the field required across all views where it's used. Conversely, when you set the required attribute in XML views, it makes the field required only in the context of that particular view.
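The Python-versus-XML distinction described above can be sketched in a short fragment. This is a hypothetical example (the model and field names are invented for illustration) and requires an Odoo installation; it is not a standalone-runnable script:

```python
# Hypothetical Odoo 17 model; model/field names are invented for
# illustration. Requires an Odoo installation to run.
from odoo import fields, models

class LibraryBook(models.Model):
    _name = "library.book"
    _description = "Library Book"

    # required=True in Python makes the field mandatory in every
    # view that displays it:
    title = fields.Char(required=True)

# To make the field required only in one particular view instead,
# set the attribute in that view's XML:
#   <field name="title" required="1"/>
```

Setting the requirement in Python enforces it globally, while the XML attribute scopes it to a single view, as the text above explains.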
ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and...PECB
Denis is a dynamic and results-driven Chief Information Officer (CIO) with a distinguished career spanning information systems analysis and technical project management. With a proven track record of spearheading the design and delivery of cutting-edge Information Management solutions, he has consistently elevated business operations, streamlined reporting functions, and maximized process efficiency.
Certified as an ISO/IEC 27001: Information Security Management Systems (ISMS) Lead Implementer, Data Protection Officer, and Cyber Risks Analyst, Denis brings a heightened focus on data security, privacy, and cyber resilience to every endeavor.
His expertise extends across a diverse spectrum of reporting, database, and web development applications, underpinned by an exceptional grasp of data storage and virtualization technologies. His proficiency in application testing, database administration, and data cleansing ensures seamless execution of complex projects.
What sets Denis apart is his comprehensive understanding of Business and Systems Analysis technologies, honed through involvement in all phases of the Software Development Lifecycle (SDLC). From meticulous requirements gathering to precise analysis, innovative design, rigorous development, thorough testing, and successful implementation, he has consistently delivered exceptional results.
Throughout his career, he has taken on multifaceted roles, from leading technical project management teams to owning solutions that drive operational excellence. His conscientious and proactive approach is unwavering, whether he is working independently or collaboratively within a team. His ability to connect with colleagues on a personal level underscores his commitment to fostering a harmonious and productive workplace environment.
Date: May 29, 2024
Tags: Information Security, ISO/IEC 27001, ISO/IEC 42001, Artificial Intelligence, GDPR
Strategies for Effective Upskilling is a presentation by Chinwendu Peace in a Your Skill Boost Masterclass organised by the Excellence Foundation for South Sudan on 08 and 09 June 2024, from 1 PM to 3 PM on each day.
Leveraging Generative AI to Drive Nonprofit InnovationTechSoup
In this webinar, participants learned how to utilize generative AI to streamline operations and elevate member engagement. Amazon Web Services experts provided customer-specific use cases and dived into low/no-code tools that are quick and easy to deploy through Amazon Web Services (AWS).
A workshop hosted by the South African Journal of Science aimed at postgraduate students and early career researchers with little or no experience in writing and publishing journal articles.
Chapter wise All Notes of First year Basic Civil Engineering.pptxDenish Jangid
Chapter wise All Notes of First year Basic Civil Engineering
Syllabus
Chapter-1
Introduction to objective, scope and outcome the subject
Chapter 2
Introduction: Scope and Specialization of Civil Engineering, Role of civil Engineer in Society, Impact of infrastructural development on economy of country.
Chapter 3
Surveying: Object Principles & Types of Surveying; Site Plans, Plans & Maps; Scales & Unit of different Measurements.
Linear Measurements: Instruments used. Linear Measurement by Tape, Ranging out Survey Lines and overcoming Obstructions; Measurements on sloping ground; Tape corrections, conventional symbols. Angular Measurements: Instruments used; Introduction to Compass Surveying, Bearings and Longitude & Latitude of a Line, Introduction to total station.
Levelling: Instrument used Object of levelling, Methods of levelling in brief, and Contour maps.
Chapter 4
Buildings: Selection of site for Buildings, Layout of Building Plan, Types of buildings, Plinth area, carpet area, floor space index, Introduction to building byelaws, concept of sun light & ventilation. Components of Buildings & their functions, Basic concept of R.C.C., Introduction to types of foundation
Chapter 5
Transportation: Introduction to Transportation Engineering; Traffic and Road Safety: Types and Characteristics of Various Modes of Transportation; Various Road Traffic Signs, Causes of Accidents and Road Safety Measures.
Chapter 6
Environmental Engineering: Environmental Pollution, Environmental Acts and Regulations, Functional Concepts of Ecology, Basics of Species, Biodiversity, Ecosystem, Hydrological Cycle; Chemical Cycles: Carbon, Nitrogen & Phosphorus; Energy Flow in Ecosystems.
Water Pollution: Water Quality standards, Introduction to Treatment & Disposal of Waste Water. Reuse and Saving of Water, Rain Water Harvesting. Solid Waste Management: Classification of Solid Waste, Collection, Transportation and Disposal of Solid. Recycling of Solid Waste: Energy Recovery, Sanitary Landfill, On-Site Sanitation. Air & Noise Pollution: Primary and Secondary air pollutants, Harmful effects of Air Pollution, Control of Air Pollution. . Noise Pollution Harmful Effects of noise pollution, control of noise pollution, Global warming & Climate Change, Ozone depletion, Greenhouse effect
Text Books:
1. Palancharmy, Basic Civil Engineering, McGraw Hill publishers.
2. Satheesh Gopi, Basic Civil Engineering, Pearson Publishers.
3. Ketki Rangwala Dalal, Essentials of Civil Engineering, Charotar Publishing House.
4. BCP, Surveying volume 1
A review of the growth of the Israel Genealogy Research Association Database Collection over the last 12 months. Our collection has now passed the 3 million mark and is still growing. See which archives have contributed the most, the different types of records we have, and which years have had records added. You can also see what we have planned for the future.
It describes the bony anatomy, including the femoral head, acetabulum, and labrum, and discusses the capsule and ligaments. The muscles that act on the hip joint and its range of motion are outlined, and factors affecting hip joint stability and weight transmission through the joint are summarized.
Liberal Approach to the Study of Indian Politics.pdf
Impact Factor Journals as per JCR, SNIP, SJR, IPP, CiteScore
1. Impact Factor of Journals as per the Journal Citation Report: SNIP, SJR, IPP, CiteScore
Dr. S. Ghosh
Associate Professor
Department of Library & Information Science, University of North Bengal, West Bengal 734013
2. Publish or Perish?
“Publish or perish” is an aphorism describing the pressure to publish academic work in order to succeed in an academic career. ... The pressure to publish has been cited as a cause of poor work being submitted to academic journals.
12/6/2020 @sghoshnbu
3. The Harsh Consequences of “Publish or Perish”
The culture of “publish or perish” is clearly pervasive and appears to be here to stay. Calls for instant distribution and transparency of both authorship and peer review may help to address problems with research quality, but as long as researchers are judged by the venue in which their research appears, the system will remain fundamentally broken.
4. Perspectives of Impact
ACADEMIC IMPACT - traditional metrics: Journal Impact Factor, citation counts.
SOCIETAL IMPACT - alternative metrics (“altmetrics”): download counts, page views, mentions in news reports, in social media, and in blogs, reference manager readers, etc. Altmetrics are more article-centric, as opposed to journal-centric.
6. Why Metrics?
Quantification of research impact
Multidimensional array of stakeholders
Calculation of fuzzy concepts and associative activities
7. What are the different metrics?
Scholars have combined standard research metrics, like scholarly output and citation counts, into formulas to measure and assess author and journal impact in new ways. Some of these metrics include:
Journal Impact Factor
h-index
g-index
Eigenfactor score
Altmetrics
8. Ways of Measuring Impact
Article impact - citation count and analysis using Web of Science and Google Scholar
Journal impact - journal data and standard measures for journals
Author impact - common measures of author impact (h-index) and other metrics scholars might encounter
Altmetrics - what are altmetrics? Altmetric badges and altmetrics tools
Book and book chapter impact - book citation counts, holdings, book reviews and other qualitative indicators
Maximize impact - unique researcher identifiers and profiles, academic communities, and other strategies to maximize impact
9. Journal-level Metrics
Metrics have become a fact of life in many - if not all - fields of research and scholarship. In an age of information abundance (often termed “information overload”), having a shorthand for the signals of where in the ocean of published literature to focus our limited attention has become increasingly important.
Research metrics are sometimes controversial, especially when in popular usage they become proxies for multidimensional concepts such as research quality or impact. Each metric may offer a different emphasis based on its underlying data source, method of calculation, or context of use. For this reason, Elsevier promotes the responsible use of research metrics, encapsulated in two “golden rules”: always use both qualitative and quantitative input for decisions (i.e. expert opinion alongside metrics), and always use more than one research metric as the quantitative input. The second rule acknowledges that performance cannot be expressed by any single metric, and that all metrics have specific strengths and weaknesses. Using multiple complementary metrics therefore helps to provide a more complete picture and to reflect different aspects of research productivity and impact in the final assessment.
10. Journal Citation Reports™ (JCR)
Journal Citation Reports™ (JCR) provides the transparent, publisher-neutral data and statistics you need to make confident decisions in today’s evolving scholarly publishing landscape, whether you are submitting your first manuscript or managing a portfolio of thousands of publications.
Quickly understand a journal’s role within, and influence upon, the global research community by exploring a rich array of citation metrics, including the Journal Impact Factor™ (JIF), alongside descriptive data about a journal’s open access content and contributing authors.
Web of Science does not depend on the Journal Impact Factor alone in assessing the usefulness of a journal, and neither should anyone else. The Journal Impact Factor should not be used without careful attention to the many phenomena that influence citation rates - for example, the average number of references cited in the average article. The Journal Impact Factor should be used with informed peer review. In the case of academic evaluation for tenure, it is sometimes inappropriate to use the impact of the source journal to estimate the expected citation frequency of a recently published article. Again, the Journal Impact Factor should be used with informed peer review; citation frequencies for individual articles vary widely.
Journal Citation Reports now includes more article-level data to provide a clearer understanding of the reciprocal relationship between the article and the journal. This level of transparency allows you not only to see the data, but also to see through the data to a more nuanced consideration of journal value.
11. Journal Impact Factor (JIF)
Journal Impact Factor (JIF) is calculated by Clarivate Analytics as the sum of the citations received each year to a journal’s previous two years of publications (linked to the journal, but not necessarily to specific publications), divided by the number of “citable” publications in those two years. Owing to the way in which citations are counted in the numerator and the subjectivity of what constitutes a “citable item” in the denominator, JIF has received sustained criticism for many years for its lack of transparency and reproducibility and the potential for manipulation of the metric. Available for only 11,785 journals (Science Citation Index Expanded plus Social Sciences Citation Index, per December 2019), JIF is based on an extract of Clarivate’s Web of Science database and includes citations that could not be linked to specific articles in the journal, so-called unlinked citations.
12. Metrics in a Nutshell (Impact Factor)
Impact Factor (from Journal Citation Reports): over a two-year period, divide the number of times articles were cited by the number of articles that were published.
Example:
200 = the number of times articles published in 2008 and 2009 were cited by indexed journals during 2010.
73 = the total number of “citable items” published in 2008 and 2009.
200/73 ≈ 2.74 = the 2010 impact factor.
The impact factor reflects only how often articles in a specific journal are cited, on average. A journal with a high impact factor has articles that are cited often.
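Mechanically, the two-year calculation is just a ratio. A minimal Python sketch using the slide's example numbers (the function name is ours, chosen for illustration):

```python
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """Two-year Journal Impact Factor: citations received this year to the
    journal's previous two years of content, divided by the number of
    citable items published in those two years."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# The slide's example: 200 citations in 2010 to 2008-2009 items,
# 73 citable items published in 2008-2009.
jif_2010 = impact_factor(200, 73)
print(round(jif_2010, 2))  # 2.74
```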
13. Traditional Metrics for Journals
Impact Factor and citation counts were created to measure journals and journal articles - that is, scholarly (journal) impact. Initially created for librarians, they were then largely adopted by STEM fields.
[Image from Journal Citation Reports (library database).]
14. Source Normalized Impact per Paper (SNIP)
Source Normalized Impact per Paper (SNIP) is a sophisticated metric that intrinsically accounts for field-specific differences in citation practices. It does so by comparing each journal’s citations per publication with the citation potential of its field, defined as the set of publications citing that journal. SNIP therefore measures contextual citation impact and enables direct comparison of journals in different subject fields, since the value of a single citation is greater for journals in fields where citations are less likely, and vice versa. SNIP is calculated annually from Scopus data and is freely available alongside CiteScore and SJR at www.scopus.com/sources. Unlike the well-known journal impact factor, SNIP corrects for differences in citation practices between scientific fields, thereby allowing for more accurate between-field comparisons of citation impact. Centre for Science and Technology Studies (CWTS) Journal Indicators also provides stability intervals that indicate the reliability of the SNIP value of a journal. SNIP was created by Professor Henk F. Moed at the Centre for Science and Technology Studies (CWTS), University of Leiden.
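At its core, SNIP divides a journal's raw citations per paper by its field's citation potential. The toy sketch below conveys only that normalisation idea; the function name and illustrative values are ours, and CWTS's actual procedure estimates citation potential from the reference lists of the citing papers:

```python
def snip(citations_per_paper, field_citation_potential):
    """Toy SNIP: a journal's raw citations per paper, normalised by the
    citation potential of its field. A citation is worth more in a field
    where citations are scarce (low potential)."""
    return citations_per_paper / field_citation_potential

# Hypothetical: the same raw impact (4 citations per paper) scores higher
# in a low-citation field (potential 2) than in a high-citation one (5).
print(snip(4.0, 2.0))  # 2.0
print(snip(4.0, 5.0))  # 0.8
```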
15. CiteScore Metrics
CiteScore metrics are a suite of indicators calculated from data in Scopus, the world’s leading abstract and citation database of peer-reviewed literature. CiteScore itself is the sum of the citations received in a given year to publications published in the previous three years, divided by the number of publications in those same three years. CiteScore is calculated for the current year on a monthly basis until it is fixed as a permanent value in May of the following year, permitting a real-time view of how the metric builds as citations accrue. Once fixed, the other CiteScore metrics are also computed and contextualise this score with rankings and other indicators to allow comparison.
CiteScore metrics are:
Current: a monthly CiteScore Tracker keeps you up to date on the latest progression towards the next annual value, making the next CiteScore more predictable.
Comprehensive: based on Scopus, the leading scientific citation database.
Clear: values are transparent and reproducible down to individual articles in Scopus.
The scores and underlying data for more than 25,000 active journals, book series and conference proceedings are freely available at www.scopus.com/sources, via a widget (available on each source page on Scopus.com), or through the Scopus API.
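The three-year CiteScore calculation described above can be sketched as a single ratio (the journal figures below are hypothetical, and the function name is ours):

```python
def citescore(citations_to_prev_three_years, pubs_prev_three_years):
    """CiteScore: citations received in the current year to items published
    in the previous three years, divided by the number of items published
    in those three years."""
    return citations_to_prev_three_years / pubs_prev_three_years

# Hypothetical journal: 900 citations in 2020 to 2017-2019 content,
# 300 documents published in 2017-2019.
print(citescore(900, 300))  # 3.0
```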
16. SCImago Journal Rank (SJR)
SCImago Journal Rank (SJR) is based on the concept of a transfer of prestige between journals via their citation links. Drawing on an approach similar to the Google PageRank algorithm - which assumes that important websites are linked to from other important websites - SJR weights each incoming citation to a journal by the SJR of the citing journal, so a citation from a high-SJR source counts for more than a citation from a low-SJR source. Like CiteScore, SJR accounts for journal size by averaging across recent publications, and it is calculated annually. SJR is also powered by Scopus data and is freely available alongside CiteScore at www.scopus.com/sources.
17. Impact per Publication (IPP)
The impact per publication (IPP) is calculated as the number of citations given in the present year to publications from the past three years, divided by the total number of publications in the past three years. IPP is fairly similar to the well-known journal impact factor; like the journal impact factor, IPP does not correct for differences in citation practices between scientific fields. IPP was previously known as RIP (raw impact per publication).
18. Immediacy Index
The Immediacy Index measures how frequently the average article from a journal is cited within the same year as publication. This number is useful for evaluating journals that publish cutting-edge research.
Immediacy Index numerator - cites to recent items: the numerator counts citations in a particular JCR year to the journal’s content from that same year. For example, the 2015 Immediacy Index for a journal would take into account 2015 citations to the journal’s 2015 papers. The numerator includes citations to anything published by the journal in that year.
Immediacy Index denominator - number of recent items: the denominator is the number of citable items published in the journal in that year (in the example, 2015). Citable items include articles and reviews.
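As a ratio of same-year citations to same-year citable items, the Immediacy Index is straightforward to compute. A short sketch with hypothetical numbers (the function name is ours):

```python
def immediacy_index(same_year_citations, citable_items):
    """Immediacy Index: citations in a JCR year to the journal's content
    from that same year, divided by the number of citable items
    (articles and reviews) published that year."""
    return same_year_citations / citable_items

# Hypothetical: 50 citations in 2015 to the journal's 2015 papers,
# 125 citable items published in 2015.
print(immediacy_index(50, 125))  # 0.4
```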
20. h-index (and the h5-index variant)
h-index (Web of Science, Google Scholar, Scopus)
1) Create a list of all your publications, in descending order of the number of times each was cited (you can get this information from any of the sources listed). The first article should have the most citations. Go through and number these.
2) Look down the list to find the point at which the number of times a publication has been cited is equal to or larger than its line (paper) number.
Example:
Paper number    # of citations
1               13
2               7
3               4
h-index = 3
*Remember that many databases will give you this number; the steps above are needed only if you would like to calculate it manually. You can also often find calculators online.
The h-index focuses on the impact of a single scholar rather than an entire journal. The higher the h-index, the more highly cited output a researcher has.
Jorge E. Hirsch, an Argentine-American professor of physics at the University of California, San Diego, is known for inventing the h-index in 2005.
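The two manual steps above translate directly into code. A small sketch reproducing the slide's example:

```python
def h_index(citation_counts):
    """h-index: the largest h such that the author has h papers,
    each cited at least h times."""
    counts = sorted(citation_counts, reverse=True)  # step 1: rank by citations
    h = 0
    for rank, cites in enumerate(counts, start=1):  # step 2: compare rank vs cites
        if cites >= rank:
            h = rank
        else:
            break
    return h

# The slide's example: three papers cited 13, 7 and 4 times.
print(h_index([13, 7, 4]))  # 3
```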
21. g-index
g-index (Harzing’s Publish or Perish)
Given a list of articles ranked in decreasing order of the number of citations they received, the g-index is the largest number g such that the top g articles together received at least g² citations.
The g-index can be thought of as a continuation of the h-index. The difference is that this index puts more weight on highly cited articles. The g-index was created because scholars noticed that the h-index ignores the citations an individual article receives beyond what is needed to achieve a certain h-index. This number often complements the h-index and is not necessarily a replacement for it.
Leo Egghe of Hasselt University, Belgium, suggested the g-index in 2006.
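A sketch of the definition above. Note that this variant caps g at the number of papers in the list; some formulations pad the list with zero-cited papers, which can yield a larger g:

```python
def g_index(citation_counts):
    """g-index: the largest g such that the top g papers together
    received at least g**2 citations (capped at the list length)."""
    counts = sorted(citation_counts, reverse=True)
    total, g = 0, 0
    for rank, cites in enumerate(counts, start=1):
        total += cites                 # cumulative citations of top `rank` papers
        if total >= rank * rank:
            g = rank
    return g

# Hypothetical papers cited 20, 5 and 1 times: 20+5+1 = 26 >= 3**2 = 9,
# so g = 3 here.
print(g_index([20, 5, 1]))  # 3
```

For these counts the h-index would be only 2, so the higher g reflects the extra weight given to the heavily cited first paper.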
23. Eigenfactor Score
Eigenfactor score (Eigenfactor.org)
• The Eigenfactor score is calculated by eigenfactor.org.
• The process is very similar to calculating the impact factor, and the data are likewise drawn from the JCR.
• The major difference is that the Eigenfactor score deletes references from one article in a journal to another article in the same journal.
• This eliminates the problem of self-citation.
• The Eigenfactor score is also a five-year calculation.
• More information can be found through Journal Citation Reports.
A high Eigenfactor score signals that a journal, with self-citations excluded, carries substantial weight in the citation network of its discipline. It is useful to look at a scholar’s h-index as well as the Eigenfactor scores of the journals they publish in, in order to get a broad sense of their impact as a researcher.
Jevin West, Carl T. Bergstrom, Ted C. Bergstrom, Ben Althouse
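The actual Eigenfactor algorithm is an eigenvector calculation over the full JCR journal citation network; the toy sketch below shows only the two key ideas named above - self-citations are zeroed out, and prestige flows along citation links, PageRank-style. The matrix values and damping factor are invented for illustration:

```python
import numpy as np

def eigenfactor_sketch(citation_matrix, d=0.85, iters=100):
    """Toy Eigenfactor-style score: a damped random walk over a journal
    citation network, with journal self-citations removed."""
    C = np.array(citation_matrix, dtype=float)
    np.fill_diagonal(C, 0.0)          # discard journal self-citations
    col_sums = C.sum(axis=0)
    col_sums[col_sums == 0] = 1.0     # avoid division by zero for empty columns
    P = C / col_sums                  # column-stochastic transition matrix
    n = len(P)
    v = np.full(n, 1.0 / n)           # start from a uniform distribution
    for _ in range(iters):
        v = d * P @ v + (1 - d) / n   # damped prestige propagation
    return v / v.sum()                # normalise scores to sum to 1

# Hypothetical 3-journal network: rows = cited journal, cols = citing journal.
C = [[5, 4, 3],
     [1, 2, 1],
     [2, 1, 0]]
scores = eigenfactor_sketch(C)
print(scores.round(3))  # journal 0, heavily cited by the others, scores highest
```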
24. i10-index
The i10-index is used by Google Scholar and indicates the number of publications that have been cited at least 10 times.
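The i10-index is a simple threshold count, which makes it the easiest of these metrics to compute by hand or in code:

```python
def i10_index(citation_counts, threshold=10):
    """i10-index: the number of publications cited at least 10 times."""
    return sum(1 for cites in citation_counts if cites >= threshold)

# Hypothetical author: five papers, three of them cited 10+ times.
print(i10_index([25, 14, 10, 9, 3]))  # 3
```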
25. Altmetrics
[Image: the tweet by Jason Priem which coined the term “altmetrics”.]
The term “altmetrics” (alternative metrics) is used to describe approaches that measure the impact of scholarship by using new social media tools - bookmarks, links, blog postings, inclusion in citation management tools, mentions and tweets - to gauge the importance of scholarly output.
Proponents of altmetrics believe that they will help measure the impact of an article in a more comprehensive and objective way than more traditional scholarly impact measures such as the journal impact factor. However, there are limits to this approach, and caution should be used not to rely on any one particular measure in evaluating the importance of scholarship.
26. “The Umbrella Classification of Non-Citation-based Metrics”
“Alternative metrics” are:
• new ways of measuring different, non-traditional forms of impact;
• “alternative to only using citations”, not “alternative to citations”;
• complementary to traditional citation-based analysis.
Article-level metrics have come to refer to any metrics (including altmetrics) that surround a scholarly article.
27. An article-centric approach
Measure online attention surrounding journal articles (and datasets).
Collect and deliver article-level metrics to journal publishers.
30. How do we collect data for altmetrics?
Directly from the individual tools
From publishers (views, download data)
From (some) library databases
From scholarly networks
Through aggregating tools
Examples: SlideShare views, PLOS article metrics, Web of Science usage, ResearchGate metrics, Altmetric metrics
31. Altmetrics Measures
Usage - clicks, downloads, views; social media - likes, shares, or tweets; captures - bookmarks, favorites, followers; mentions - blog posts, reviews, comments, or ratings.
Altmetrics are often used to measure the impact of gray literature or materials that are not formally published, such as posters and working papers. They can also be used to provide more information about the reach of published articles and books.
It is unlikely that altmetrics will supplant traditional metrics as the measure of research impact. However, altmetrics can demonstrate the reach of, and interest in, a topic from the public, practitioners, and policy makers.
Authors should refrain from judging the impact of a work based on the altmetrics numbers. Digging into who is saying what about the work may provide more reliable information about its quality and influence.
38. Strategies to Maximize Your Impact
Create unique researcher identifiers
Create researcher profiles
Share your research online
Take steps to broaden your impact
39. Take Steps to Broaden Your Impact
Contribute - contribute to Wikipedia, either in a new entry or in the text and references of an existing entry.
Discuss - discuss your research findings on a blog or through Twitter.
Link - link your most recent research in your email signature.
Publish - publish in open access journals, or pay to have the work available open access in a subscription journal.
Craft - craft a work’s title and abstract carefully; repeat keywords so the work is highly relevant in search engines.
Add - add postprints/white papers/drafts of work to your institutional repository, DigitalCommons@EMU, or to a disciplinary repository.
40. Identity Exploration
Google Scholar Profile - a Google Scholar Profile tracks your publications listed in Google Scholar, provides the number of citations and links to the items citing your work, and calculates your h-index. (Note: you need a Gmail account to track your profile. Once you are logged in to your Gmail account, click on “My citations” to view and edit your profile.)
Impactstory - this web-based service collects metrics and displays them with a link that can be added to CVs. Join free with an ORCID account.
Share your research online - the process of writing for publication often creates several outputs in addition to the final journal article, book, or book chapter. Consider posting slides from presentations, brief videos of presentations, data sets, or other materials online with a link to the official publication.
Postprints/white papers/drafts of work - DigitalCommons@EMU or subject/disciplinary repositories
Presentation slides - SlideShare or Speaker Deck
Videos - Vimeo or YouTube
Data sets - Dryad or figshare (figshare can handle other outputs as well)
Code & software - GitHub
42. References
Ayris, P., López de San Román, A., Maes, K., & Labastida, I. (2018). Open science and its role in universities: A roadmap for cultural change. League of European Research Universities.
Bose, R. (2004). Knowledge management metrics. Industrial Management and Data Systems. https://doi.org/10.1108/02635570410543771
European Commission. (2017). Next generation metrics: Responsible metrics and evaluation for open science. Brussels: European Commission.
Hicks, D., Wouters, P., Waltman, L., De Rijcke, S., & Rafols, I. (2015). Bibliometrics: The Leiden Manifesto for research metrics. Nature. https://doi.org/10.1038/520429a
Lăzăroiu, G. (2017). What do altmetrics measure? Maybe the broader impact of research on society. Educational Philosophy and Theory. https://doi.org/10.1080/00131857.2016.1237735
LibGuides: Introduction to impact factor and other research metrics: Home. (n.d.). Retrieved from https://guides.library.illinois.edu/impact
SAGE Publishing. (2019). The latest thinking about metrics for research impact in the social sciences (White paper). Thousand Oaks, CA: Author. https://doi.org/10.4135/wp190522
Understanding research metrics. (n.d.). Retrieved May 17, 2020, from https://editorresources.taylorandfrancis.com/understanding-research-metrics/
It’s easy to dismiss publish or perish as an old maxim that academics use to complain about their terrible working conditions, but research has shown that the longer this culture of pressure persists, the greater the risk to academic research integrity. As the players in this publishing game start to suffer, and the cracks begin to appear, we can see real consequences:
Focus has been shifting to metrics at the article level. Why should the value of a work be judged by the journal it has been published in?