This might include the citation of a piece of research in policy documents or reference to a piece of research being cited within the media. Collating the evidence and indicators of impact is a significant task that is being undertaken within universities and institutions globally. Gathering evidence of the links between research and impact is a challenge, and not only where that evidence is lacking. This raises the questions of whether UK business and industry should not themselves invest in the research that will deliver them impacts, and who will fund basic research if not the government. The transition to routine capture of impact data requires not only the development of tools and systems to help with implementation but also a cultural change, so that practices currently undertaken by a few become standard behaviour among researchers and universities. It is perhaps assumed here that a positive or beneficial effect will be considered as an impact, but what about changes that are perceived to be negative? It can be seen from the panel guidance produced by HEFCE to illustrate impacts and evidence that impact and evidence are expected to vary according to discipline (REF2014 2012). To achieve compatible systems, a shared language is required. One way in which changes of opinion and user perceptions can be evidenced is by gathering stakeholder and user testimonies or undertaking surveys. The criteria for assessment were also supported by a model developed by Brunel for the measurement of impact that used similar measures, defined as depth and spread. One purpose of impact evaluation is to understand the socio-economic value of research and subsequently inform funding decisions. In 2009-10, the REF team conducted a pilot study for the REF involving 29 institutions, submitting case studies to one of five units of assessment (in clinical medicine, physics, earth systems and environmental sciences, social work and social policy, and English language and literature) (REF2014 2010). Figure 1, replicated from Hughes and Martin (2012), illustrates how the ease with which impact can be attributed decreases with time, whereas the impact, or effect of complementary assets, increases. This highlights the problem that it may take a considerable amount of time for the full impact of a piece of research to develop, but because of this time lag and the increasing complexity of the networks involved in translating the research and interim impacts, it becomes more difficult to attribute an impact and link it back to a contributing piece of research.
Professor James Ladyman, at the University of Bristol, a vocal critic of awarding funding based on the assessment of research impact, has been quoted as saying that inclusion of impact in the REF will create a selection pressure, promoting academic research that has more direct economic impact or that is easier to explain to the public (Corbyn 2009). Where quantitative data were available, for example audience numbers or book sales, these numbers rarely reflected the degree of impact, as no context or baseline was available. If metrics are available as impact evidence, they should, where possible, also capture any baseline or control data. These techniques have the potential to provide a transformation in data capture and impact assessment (Jones and Grant 2013). Accountability is among the key reasons for assessing impact. One of these frameworks, the RQF, was identified as providing a promising basis for developing an impact approach for the REF using the case study approach. However, it must be remembered that, in the case of the UK REF, only impact based on research that has taken place within the institution submitting the case study is considered. Impact is derived not only from targeted research but from serendipitous findings, good fortune, and complex networks interacting and translating knowledge and research. Recommendations from the REF pilot were that the panel should be able to extend the time frame where appropriate; this, however, poses difficult decisions when submitting a case study to the REF, as to what the view of the panel will be and whether, if deemed inappropriate, this will render the case study unclassified. Providing advice and guidance within specific disciplines is undoubtedly helpful. If basic research is to be assessed alongside more applied research, it is important that we are able to at least determine the contribution of basic research. The RQF pioneered the case study approach to assessing research impact; however, with a change in government in 2007, this framework was never implemented in Australia, although it has since been taken up and adapted for the UK REF. Indicators were identified from documents produced for the REF, by Research Councils UK, in unpublished draft case studies undertaken at King's College London, or outlined in relevant publications (MICE Project n.d.). The case study approach has both advantages and disadvantages.
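As a small illustration of the point about baselines made above, the sketch below (Python, with invented figures that are not drawn from the article) expresses an observed metric relative to a baseline, since a raw number such as an audience count or book sales figure says little on its own.

```python
# Minimal sketch only: the metric and the figures are invented placeholders.
# A raw number such as book sales or audience size says little on its own;
# expressing it against a baseline or control gives it some context.

def relative_change(observed: float, baseline: float) -> float:
    """Proportional change of an observed metric against its baseline."""
    if baseline == 0:
        raise ValueError("a non-zero baseline is required to provide context")
    return (observed - baseline) / baseline

if __name__ == "__main__":
    # Hypothetical: exhibition visitor numbers before and after a
    # research-informed redesign of the exhibition.
    print(f"{relative_change(observed=12_500, baseline=10_000):.0%}")  # prints 25%
```

Even this trivial calculation depends on a baseline having been recorded, which is precisely the contextual data the passage above notes is often missing.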
HEFCE indicated that impact should merit a 25% weighting within the REF (REF2014 2011b); however, this has been reduced to 20% for the 2014 REF. This was perhaps a result of feedback and lobbying, for example from the Russell Group and the Million+ group of universities, who called for impact to count for 15% (Russell Group 2009; Jump 2011), and of guidance from the expert panels undertaking the pilot exercise, who suggested that during the 2014 REF impact assessment would be in a developmental phase and that a lower weighting for impact would be appropriate, with the expectation that this would be increased in subsequent assessments (REF2014 2010). These metrics may be used in the UK to understand the benefits of research within academia and are often incorporated into the broader perspective of impact seen internationally, for example within Excellence in Research for Australia and using Star Metrics in the USA, in which quantitative measures such as publications, citations, and research income are used to assess impact. One might consider that by funding excellent research, impacts (including those that are unforeseen) will follow; traditionally, assessment of university research focused on academic quality and productivity. Evaluation of impact is becoming increasingly important, both within the UK and internationally, and research and development into impact evaluation continues; for example, researchers at Brunel have developed the concept of depth and spread further into the Brunel Impact Device for Evaluation, which also assesses the degree of separation between research and impact (Scoble et al.). Over the past year, a number of new posts have been created within universities, such as posts for writing impact case studies, and a number of companies are now offering this as a contract service. Thalidomide has since been found to have beneficial effects in the treatment of certain types of cancer. The traditional form of evaluation of university research in the UK was based on measuring academic impact and quality through a process of peer review (Grant 2006). For more extensive reviews of the Payback Framework, see Davies et al. (2005), Wooding et al. (2007), Nason et al. (2008), and Hanney and González-Block (2011).
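To make the quantitative academic measures mentioned above (publications, citations, research income) more concrete, the following sketch computes one widely used citation-based indicator, the h-index, from a list of per-publication citation counts. The choice of indicator and the numbers are illustrative assumptions on my part; neither the REF nor the exercises named above prescribe this particular metric.

```python
# Hedged illustration: computes the h-index, one common bibliometric indicator.
# The citation counts below are invented placeholders.

def h_index(citations: list[int]) -> int:
    """Largest h such that h publications have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

if __name__ == "__main__":
    print(h_index([10, 8, 5, 4, 3]))  # prints 4
```

As the surrounding discussion makes clear, such indicators capture academic visibility rather than the broader socio-economic impact that the REF case studies are intended to evidence.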
Types of evidence that may need to be captured include: research findings, including outputs (e.g. presentations and publications); communications and interactions with stakeholders and the wider public (emails, visits, workshops, media publicity, etc.); feedback from stakeholders and communication summaries (e.g. testimonials and altmetrics); research developments (based on stakeholder input and discussions); outcomes (e.g. commercial and cultural outcomes, citations); and impacts (changes, e.g. behavioural and economic). The Payback Framework has been adopted internationally, largely within the health sector, by organizations such as the Canadian Institute of Health Research, the Dutch Public Health Authority, the Australian National Health and Medical Research Council, and the Welfare Bureau in Hong Kong (Bernstein et al.). Researchers were asked to evidence the economic, societal, environmental, and cultural impact of their research within broad categories, which were then verified by an expert panel (Duryea et al. 2007). For example, following the discovery of a new potential drug, preclinical work is required, followed by Phase 1, 2, and 3 trials, and then regulatory approval is granted before the drug is used to deliver potential health benefits. The Social Return on Investment (SROI) guide (The SROI Network 2012) suggests that the language varies ('impact', 'returns', 'benefits', 'value') but that the questions around what sort of difference, and how much of a difference, we are making are the same. Standard approaches actively used in programme evaluation, such as surveys, case studies, bibliometrics, econometrics and statistical analyses, content analysis, and expert judgement, are each considered by some (Vonortas and Link 2012) to have shortcomings when used to measure impacts. The exploitation of research to provide impact occurs through a complex variety of processes, individuals, and organizations, and therefore attributing the contribution made by a specific individual, piece of research, funding, strategy, or organization to an impact is not straightforward. Such a framework should not be linear but recursive, including elements from contextual environments that influence and/or interact with various aspects of the system.
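Since the SROI guide is mentioned above, a minimal sketch of the arithmetic behind an SROI ratio may help: the present value of monetized benefits divided by the investment. The discount rate and cash figures below are invented placeholders, and a real SROI analysis following The SROI Network (2012) guidance also involves stakeholder mapping and adjustments for attribution, deadweight, and drop-off that are not shown here.

```python
# Simplified SROI arithmetic only; all figures are hypothetical.

def present_value(cashflows: list[float], discount_rate: float) -> float:
    """Discount a series of annual monetized benefits (year 1 onwards) to today."""
    return sum(value / (1 + discount_rate) ** year
               for year, value in enumerate(cashflows, start=1))

def sroi_ratio(benefits_by_year: list[float], investment: float,
               discount_rate: float = 0.035) -> float:
    """Present value of benefits per unit of investment."""
    return present_value(benefits_by_year, discount_rate) / investment

if __name__ == "__main__":
    # Hypothetical: 100,000 invested, 40,000 of monetized benefit per year for 4 years.
    print(round(sroi_ratio([40_000] * 4, investment=100_000), 2))  # roughly 1.47
```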
The case study does present evidence from a particular perspective and may need to be adapted for use with different stakeholders. The pilot assessment undertaken by the Australian Technology Network concluded that the researchers and case studies could provide enough qualitative and quantitative evidence for reviewers to assess the impact arising from their research (Duryea et al. 2007). The risk of relying on narratives to assess impact is that they often lack the evidence required to judge whether the research and impact are linked appropriately. The concern is that attention will be focused towards generating results that enable boxes to be ticked rather than delivering real value for money and innovative research. What indicators, evidence, and impacts need to be captured within developing systems? Despite the concerns raised, the broader socio-economic impacts of research will be included and will count for 20% of the overall research assessment as part of the REF in 2014. A comprehensive assessment of impact itself is not undertaken with SIAMPI, which makes it a less suitable method where showcasing the benefits of research is desirable or where justification of funding based on impact is required. As such, research outputs, for example knowledge generated and publications, can be translated into outcomes, for example new products and services, and impacts or added value (Duryea et al. 2007). From 2014, research within UK universities and institutions will be assessed through the REF; this will replace the Research Assessment Exercise, which has been used to assess UK research since the 1980s.
A very different approach, known as Social Impact Assessment Methods for research and funding instruments through the study of Productive Interactions (SIAMPI), was developed from the Dutch project Evaluating Research in Context and has as its central theme the capture of productive interactions between researchers and stakeholders by analysing the networks that evolve during research programmes (Spaapen and Drooge 2011; Spaapen et al.). In undertaking excellent research, we anticipate that great things will come, and as such one of the fundamental reasons for undertaking research is that we will generate and transform knowledge that will benefit society as a whole. The Payback Framework enables health and medical research and impact to be linked and the process by which impact occurs to be traced. Impact is assessed alongside research outputs and environment to provide an evaluation of the research taking place within an institution. Metrics have commonly been used as a measure of impact, for example in terms of profit made, number of jobs provided, number of trained personnel recruited, number of visitors to an exhibition, number of items purchased, and so on. Aspects of impact, such as the value of Intellectual Property, are currently recorded by universities in the UK through their Higher Education Business and Community Interaction Survey return to the Higher Education Statistics Agency; however, as with other public and charitable sector organizations, showcasing impact is an important part of attracting and retaining donors and support (Kelly and McNicoll 2011).
In this article, we draw on a broad range of examples, with a focus on methods of evaluation for research impact within Higher Education Institutions (HEIs). From the outset, we note that the understanding of the term impact differs between users and audiences. Why should this be the case? In some cases, a specific definition may be required, for example in the Research Excellence Framework (REF) Assessment framework and guidance on submissions (REF2014 2011b), which defines impact as 'an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia'. It is acknowledged in the article by Mugabushaka and Papazoglou (2012) that it will take years to fully incorporate the impacts of ERC funding. Although it can be envisaged that the range of impacts derived from research of different disciplines is likely to vary, one might question whether it makes sense to compare impacts within disciplines when the range of impact can vary enormously, for example from business development to cultural change or saving lives. One purpose of impact assessment is to enable research organizations, including HEIs, to monitor and manage their performance and to understand and disseminate the contribution that they are making to local, national, and international communities. It has been acknowledged that outstanding leaps forward in knowledge and understanding come from immersion in a background of intellectual thinking; one is able to see further by standing on the shoulders of giants. Although metrics can provide evidence of quantitative changes or impacts arising from our research, they are unable to adequately provide evidence of the qualitative impacts that take place and hence are not suitable for all of the impact we will encounter. The transfer of information electronically can be traced and reviewed to provide data on where and to whom research findings are going. It is now possible to use data-mining tools to extract specific data from narratives or unstructured data (Mugabushaka and Papazoglou 2012). A discussion of the benefits and drawbacks of a range of evaluation tools (bibliometrics, economic rate of return, peer review, case study, logic modelling, and benchmarking) can be found in the article by Grant (2006). This framework is intended to be used as a learning tool to develop a better understanding of how research interactions lead to social impact, rather than as an assessment tool for judging, showcasing, or even linking impact to a specific piece of research. Any tool for impact evaluation needs to be flexible, such that it enables access to impact data for a variety of purposes (Scoble et al.).
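As a hedged illustration of the kind of extraction from unstructured narratives referred to above (and not the specific tooling described by Mugabushaka and Papazoglou 2012), the sketch below counts occurrences of a hand-picked set of impact-indicator terms in free-text case-study material; the term list and the sample narrative are invented.

```python
# Deliberately simple sketch of mining indicator terms from an unstructured
# impact narrative. Term list and sample text are invented placeholders.

import re
from collections import Counter

INDICATOR_TERMS = ["patent", "guideline", "policy", "spin-out", "licence", "trial"]

def indicator_counts(narrative: str) -> Counter:
    """Count case-insensitive occurrences of each indicator term in the text."""
    counts = Counter()
    for term in INDICATOR_TERMS:
        counts[term] = len(re.findall(re.escape(term), narrative, flags=re.IGNORECASE))
    return counts

if __name__ == "__main__":
    sample = ("The research informed a national clinical guideline and a policy "
              "briefing, and led to one patent and a university spin-out.")
    print(indicator_counts(sample))
```

Real systems would go well beyond keyword counting, but even this simple pass shows how structured indicators might be pulled out of narrative case studies at scale.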
In endeavouring to assess or evaluate impact, a number of difficulties emerge, and these may be specific to certain types of impact. Here we address the types of evidence that need to be captured to enable an overview of impact to be developed. The RQF was developed to demonstrate and justify public expenditure on research, and as part of this framework a pilot assessment was undertaken by the Australian Technology Network. This distinction is not so clear in impact assessments outside of the UK, where academic outputs and socio-economic impacts are often viewed as one, to give an overall assessment of the value and change created through research. In the UK, the Russell Group universities responded to the REF consultation by recommending that no time lag be put on the delivery of impact from a piece of research, citing examples such as the development of cardiovascular disease treatments, which take between 10 and 25 years from research to impact (Russell Group 2009). An alternative approach was suggested for the RQF in Australia, where it was proposed that types of impact be compared rather than impact from specific disciplines. Perhaps it is time for a generic guide based on types of impact rather than research discipline. A further purpose of impact assessment is to understand the methods and routes by which research leads to impacts, to maximize the findings that come out of research, and to develop better ways of delivering impact. For example, the development of a spin-out can take place in a very short period, whereas it took around 30 years from the discovery of DNA before technology was developed to enable DNA fingerprinting. Cooke and Nadim (2011) also noted that using a linear-style taxonomy did not reflect the complex networks of impacts that are generally found. Clearly there is the possibility that the potential new drug will fail at any one of these phases, but each phase can be classed as an interim impact of the original discovery work en route to the delivery of health benefits; the time at which an impact assessment takes place will therefore influence the degree of impact that has taken place. Frameworks for assessing impact have been designed and are employed at an organizational level, addressing the specific requirements of the organization and its stakeholders. Figure 2 demonstrates the information that systems will need to capture and link. The Payback Framework systematically links research with the associated benefits (Scoble et al.). The fast-moving developments in the field of altmetrics (or alternative metrics) are providing a richer understanding of how research is being used, viewed, and moved. If knowledge exchange events could be captured, for example electronically as they occur, or automatically if flagged from an electronic calendar or a diary, then far more of these events could be recorded with relative ease.
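The points above about capturing knowledge exchange events and linking the information indicated in Figure 2 suggest a simple data model. The sketch below is one possible shape for such records, assumed purely for illustration (it is not the article's schema or Figure 2 itself): knowledge-exchange events and claimed impacts are attached to a research output so that they can later be linked and evidenced. All field names are illustrative assumptions.

```python
# Hedged sketch of a data model for routinely capturing knowledge-exchange
# events (e.g. flagged from a calendar) and linking them to claimed impacts.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class KnowledgeExchangeEvent:
    when: date
    stakeholder: str
    channel: str          # e.g. "workshop", "email", "media interview"
    summary: str

@dataclass
class ImpactRecord:
    description: str      # the change or benefit claimed
    evidence: str         # e.g. testimonial, or a metric with its baseline

@dataclass
class ResearchOutput:
    title: str
    events: list[KnowledgeExchangeEvent] = field(default_factory=list)
    impacts: list[ImpactRecord] = field(default_factory=list)

if __name__ == "__main__":
    output = ResearchOutput(title="Hypothetical flood-modelling study")
    output.events.append(KnowledgeExchangeEvent(
        date(2013, 5, 2), "Environment Agency", "workshop", "Shared model results"))
    output.impacts.append(ImpactRecord(
        "Model adopted in regional flood planning", "Stakeholder testimonial"))
    print(len(output.events), len(output.impacts))
```

Recording interactions in this routine way is one route to the cultural change described earlier, since the evidence accumulates as events happen rather than being reconstructed retrospectively.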
These traditional bibliometric techniques can be regarded as giving only a partial picture of full impact (Bornmann and Marx 2013), with no link to causality. The time lag between research and impact varies enormously. Metrics in themselves cannot convey the full impact; however, they are often viewed as powerful and unequivocal forms of evidence. The understanding of the term impact varies considerably, and as such the objectives of an impact assessment need to be thoroughly understood before evidence is collated. The REF will therefore assess three aspects of research: outputs, impact, and environment. Research impact is assessed in two formats: first, through an impact template that describes the approach to enabling impact within a unit of assessment, and second, using impact case studies that describe the impact taking place following excellent research within a unit of assessment (REF2014 2011a).