Productivity Assessment and Evaluation of Govt. R&D Organization

The evaluation of research institutes is crucial to analysing the performance of the science sector and to improving the efficiency and cost-effectiveness of public resources. The purpose of this study is to provide a system for evaluating the performance (high or low) of publicly financed R&D institutes. This paper discusses "Productivity Assessment and Evaluation of Govt. R&D organisation". It outlines R&D productivity assessment, assessment to improve the evaluation of R&D, key problems in measuring the socioeconomic benefits of public R&D, and approaches to these problems. R&D resources and publication output are also reviewed. A descriptive survey was conducted among 38 respondents, and a case study was considered to analyse R&D productivity and performance. Survey data were analysed using SPSS.


Introduction
Research institutes play an important role in the development of research and innovation. Knowledge-based economic development, driven by the skill and competence of highly qualified human resources, has the potential to promote long-term economic growth. Indonesian science and technology development is centred on research and development (R&D), with the goal of developing a variety of research products that serve the community and stakeholders. "Scientific institutions are complex and dynamic organizations whose primary goal is to produce and disseminate scientific research within national economic systems in order to generate ideas and innovations that are becoming increasingly vital to a country's competitiveness." (J. Asmara et al., 2019)

Globally, public R&D organizations are attempting to improve their performance as a result of increased competition brought on by liberalization and globalization, which places greater demands on available resources and holds organizations accountable for the most efficient use of those resources. Because the R&D process consumes limited resources, it is critical to evaluate its efficiency. Concerns about government efficiency have grown in recent years, especially in light of dwindling budgets, and demand for R&D performance evaluation is growing as a result of ever-increasing worldwide competitiveness. A performance measurement system becomes vital in such a situation because it provides quality information to decision-makers. (Gangopadhyay et al., 2015)

Authors and researchers interested in "research on research" initially paid little attention to the evaluation of R&D initiatives, so there were few articles or reports on the subject. This contrasted sharply with the literature on R&D project evaluation and the evaluation of R&D personnel.

IJFMR23056746 | Volume 5, Issue 5, September-October 2023 | Email: editor@ijfmr.com

The literature on evaluating R&D programmes was mainly focused on the evaluation of industrial R&D projects and the measurement of R&D productivity, with a few exceptions. The scarcity of literature also reflected the relatively low level of interest governments had in analysing their R&D programmes and expenditures in order to determine both their effectiveness and their contribution to national innovation goals and priorities. Companies and governments are now looking for ways to justify the large investments they make in R&D, a shift from the past that has resulted in the publication of numerous papers and articles since 1986. Because there is significant overlap between personnel and programme evaluation concerns, R&D programme evaluation issues are studied here from a government R&D perspective.
Researchers in the field of R&D management have stated repeatedly that it is difficult to accurately quantify the productivity or innovation of an R&D programme, let alone of an individual scientist within it. In his examination of technology management, Dr. A. H. Rubenstein argued that "technical managers must do a better job of evaluating their own operations, or this will be done for or to them"; research managers should not assume that such reviews will not take place. Concerns have also been voiced about who will establish the R&D performance metrics: "We should not be shocked if others devise appropriate performance measures for us if we fail to do so ourselves." (Clarke, 2000)

Any civilization that aspires to civility must start with research as its foundation. The advancement of society depends largely on the human mind, and a nation's success hinges on how well its researchers address the needs of humanity. Evaluating research output is a prerequisite for making investment decisions, governing science, and overseeing academia; as a result, it has become an essential component of R&D institutions around the world. Many researchers have conducted studies evaluating the publication output of an institution or a group of institutions (such as universities or departments). Scientometricians have long utilised quantitative methods to assess an individual scholar's output, whether in the context of a larger group or a single scientific institution, and evaluative scientometrics is now used by academics in other fields as well.

"Institutional research productivity" is the focus of this work, which provides an extensive survey of the literature on the topic in order to give an overview of current understanding of the subject. It discusses evaluation studies from the Indian perspective in detail and refers to researchers' efforts to incorporate new concepts developed over time. This emerging field of scientometric research has been mapped out with a great deal of work; it is nevertheless important to identify the gaps and limitations in order to articulate the concerns both addressed and unaddressed. A consistent picture of this confusing measurement issue in science is thus presented.

(Figure source: Pipeline Technical Resources)

R&D PRODUCTIVITY ASSESSMENT
It is incredibly difficult to gauge the effectiveness of an R&D organization's work. Productivity is commonly measured as an output-to-input ratio, such as the number of cars produced per assembly-line worker. Even though R&D's output may be quantified, the results themselves are sometimes ethereal and difficult to measure. Because the return from an R&D department may take decades to materialize, the time lag is much greater than in manufacturing. Several experts argue that the sheer act of measuring could decrease R&D output, making this form of measurement counterproductive. Despite this, corporations continue to employ crude ways to evaluate R&D in their search for more effective, quantitative methods. "We divided the R&D evaluation techniques into three general categories":

Quantitative techniques: Quantitative procedures typically produce numbers that may be compared with those from other projects and with past experience by following a specified algorithm or predefined ratio. Probabilistic weighting criteria can be used by key management to rate a project's effectiveness and importance; the resulting numbers are then blended using a strict procedure, which is explained in detail later in this paper.
Semi-quantitative techniques: Qualitative judgments are transformed into numbers using semi-quantitative procedures. Unlike quantitative techniques, they do not use formulas to compile data, although techniques such as averaging can be applied to simplify the output.
Qualitative techniques: Intuitive judgments are at the heart of qualitative methods. They will not be examined in depth here, given the focus on quantitative methods and the scarcity of literature on qualitative ones, although qualitative methods are becoming increasingly popular.
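The blending of weighted ratings that the quantitative and semi-quantitative descriptions above refer to can be sketched as follows. This is a minimal illustration only: the criteria names, weights, and ratings are hypothetical and not drawn from any specific evaluation scheme in the literature.

```python
# Hypothetical weighted-scoring sketch: each project is rated 1-10 against
# weighted criteria, and the ratings are blended by a fixed formula (here,
# a weighted average) into a single number comparable across projects.

def project_score(ratings, weights):
    """Blend per-criterion ratings (1-10) into one comparable score."""
    if set(ratings) != set(weights):
        raise ValueError("ratings and weights must cover the same criteria")
    total_weight = sum(weights.values())
    return sum(ratings[c] * weights[c] for c in ratings) / total_weight

# Illustrative criteria and weights (assumptions, not from the paper):
weights = {"technical_merit": 0.4, "strategic_fit": 0.35, "cost_risk": 0.25}
project_a = {"technical_merit": 8, "strategic_fit": 6, "cost_risk": 7}
project_b = {"technical_merit": 5, "strategic_fit": 9, "cost_risk": 6}

print(round(project_score(project_a, weights), 2))
print(round(project_score(project_b, weights), 2))
```

In a semi-quantitative variant, the ratings themselves would come from translating managers' qualitative judgments (e.g. "high/medium/low") into numbers before averaging.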
An important weakness was uncovered during our literature search and in discussions with around 20 specialists in assessing R&D productivity and applying the methods already in use: current measurement methodologies fail to clearly characterise the level of research being assessed. There is no single measurement method that works best for all stages of R&D, because the stages differ so much. It is vital to grasp this simple picture before undertaking a meaningful analysis of R&D, since it indicates the areas where quantitative measurements are currently most applicable.
The R&D stages can be defined as:
1. "Basic Research - directed to the search for fundamental knowledge".
2. "Exploratory Research - to determine if some scientific concept might have useful application".
3. "Applied Research - directed to improving the practicality of a specific application".
4. "Development - engineering improvement of a particular product or process".
5. Product or Process Improvement - directed to modifications that can improve marketability, lower costs, or both.
A quantitative method is less useful in basic research, since the results are typically too abstract. As a result, the majority of organisations use a qualitative approach, based on the intuition of managers, to analyse basic research. The outcome of product improvement, on the other hand, is more easily quantified and can be represented by a rigid algorithm; even if it is not stated explicitly, most quantitative methodologies in use today are geared toward this stage of R&D. For the middle stages, a variety of methods are employed, but the semi-quantitative approach is frequently the most effective. In most cases, the results of applied research cannot be clearly quantified, and a quantitative technique's strict algorithm is rarely useful. However, the product is typically not as abstract as in basic research, so qualitative judgments can be assigned numeric values. Thus, a semi-quantitative technique is the ideal measurement method for this stage of R&D.
There are three possible ways to evaluate a concept as it moves through the various stages of R&D, as depicted above. This should be read as a general trend; exceptions are possible. (Pappas & Remer, 1985)

Assessment to Improve the Evaluation of R&D
Assessment is central to the evaluation of research and development (R&D). It entails more than determining whether or not previous objectives were met. Research funding decisions include where, to whom, and how much to allocate, as well as assessing potential societal impact. The economic and social implications of public R&D expenditure should be studied through an impact study (for example, improved health outcomes). Public policy formulation, from conception to implementation, can benefit from the findings of both impact assessments and R&D evaluations. Impact assessment helps government agencies set priorities in R&D spending and can be useful in formulating new research initiatives. It improves public accountability, develops an informed population, and enhances awareness of the importance of public research in a country's economic and social advancement.

Key challenges for assessing the socioeconomic impacts of public R&D
The many social advantages of R&D investment are difficult to establish and quantify. Major reasons for this are R&D spillovers and unexpected consequences, the fact that many significant scientific discoveries are made by chance ("serendipity"), and the fact that many scientific applications are discovered in areas other than those expected. Because public R&D can take a long time to yield all of its benefits, impact estimates can be premature and incomplete. Finally, the non-economic benefits of public research may be harder to describe and quantify: relating health outcomes to public R&D spending is difficult because measuring health outcomes is complex, and defence and energy R&D face the same problems in linking research to security results. According to the OECD's Science, Technology and Industry report, the following are the most important challenges faced by science policy scholars and policymakers when analysing the effects of public R&D:

R&D resources
Data collection for national research policy began in the early 1950s in Japan and the US, and it swiftly spread to a number of other countries. The findings could not be compared, however, since each country used a different set of definitions. As R&D expenditure became more widely recognized as a key economic component, the need for accurate, comparable statistics arose. According to OECD statistics, the first international R&D data year was 1963, based on criteria introduced in 1962 by the Organisation for Economic Co-operation and Development (OECD). The data were generated using the criteria laid out in the new Frascati Manual (full title: "The Proposed Standards in the Collection and Processing of Data on Countries' R&D Activities"), which set data collection and processing standards. The Frascati Manual has undergone several revisions since its beginnings, most notably the 2015 edition (OECD 2015a), which was used in the production of R&D data from 2016 (OECD 2015). It was drafted by the OECD member countries' national experts on R&D statistics (NESTI).

As the most diverse sector in terms of R&D output, higher education (HES) is home to a wide range of institutions engaged in teaching and hospital activities, and universities have yet to be defined in the same way across the globe. Each country's system and resources differ, hence a variety of methods are used to fit the HES R&D material to each country's own circumstances. The Frascati Manual has been updated to better reflect current developments in the funding and conduct of R&D in the modern world. The manual offers information on categories of research workers, scientific topics, and the various sorts of funding sources, all of which introduce readers to basic R&D concepts and vocabulary. Higher education, government, business, and private non-profits all contribute different amounts of resources and personnel to R&D, and the manual offers surveyors advice and best practices for R&D surveys in a variety of sectors.

The Frascati Manual makes it easier to understand how science and technology contribute to economic progress. Its definitions are used as a common language in discussions of research and innovation policy, since they are widely accepted by the academic community, and its guidelines have become an international standard for R&D statistics adopted by governments and statisticians worldwide. The Manual's guidelines and suggestions are frequently used by a wide range of UN- and EU-affiliated organisations. The OECD and Eurostat have long collaborated on the collection of economic data, and UNESCO's Institute for Statistics (UIS) collaborates with other international statistical organisations to update their databases annually.

Publication output
International comparisons of publication output have traditionally relied on Thomson Reuters Web of Science; however, Scopus (operated by Elsevier) has proven to be just as informative. Measuring Innovation: A New Perspective concluded that using Scopus data was advantageous. Even so, some scientific papers and other significant research outputs are not represented in either bibliographic database, which is in line with OECD R&D figures, and there is still a lack of coverage of the social sciences and humanities in the major scientific journals and of medical and technology peer-reviewed conference proceedings. As part of our inquiry into national scientific system productivity, we go deeper into this topic: the information is analysed to see whether there are issues or dangers that need to be addressed, using a variety of approaches and strategies. For our research, we analysed data from 18 countries, ranked among the most productive and influential in the world by the quantity of their scientific publications and the number of their influential scientists. A 20-year span of data was used to assess the current situation as well as previous changes. (Aksnes et al., 2017)

2.1 Review of Literature
(Laliene & Ojanen, 2016) Research Organisations (ROs), like other organisations, are becoming more focused on the efficient use of limited resources in creating high-quality R&D products. Traditional methods of performance evaluation cannot be applied to ROs, since their nature, their work content, the specificity and multidimensionality of the goods they create, the spontaneity of their activities, data lag, and other attributes differ greatly from those of business organisations. A productivity assessment method for ROs is introduced in this work, which allows the evaluation of RO activities from the standpoint of their activity categories. A multi-criteria decision-making approach is used to test the developed scheme.

(Lalienė & Sakalas, 2014) Developing a company's, country's, or region's long-term competitiveness requires a focus on knowledge-based growth. A wide variety of research organisations arose in response to the massive increase in R&D needs. The success of many types of research organisation requires a uniform evaluation platform, which is a novel scientific topic. There are no existing theoretical or empirical studies on this subject; hence one of the primary goals of this research is to integrate the TRL approach into existing systems for evaluating R&D effectiveness in research organisations. The TRL approach, originally used to measure technological advancement, has been updated and used to define the R&D activity types covered by each research organisation, which has an impact on the measurement settings. Using a newly developed evaluation system for scientific and infrastructural potential, research groups can be steered toward developing valuable inventions.

(Kahn & McGourty, 2009) Performance management is the only way to meet a company's or programme's strategic goals: it creates a direct link between what is being done, what stakeholders want, and what the organisation plans to achieve. Organizations and programmes can maximize their assigned resources by employing effective performance management methods. The use of this tool can help to better explain an agency's goals, resources, output efficacy, and the overall worth of its outputs. This article examines R&D organisations and programmes from a performance management perspective. For R&D organisations, performance management is particularly challenging owing to issues such as a lack of timely data and many unknowns, including a lack of clarity in many projects' original scope. The researchers investigated performance management in both private and public R&D organisations, covering R&D, Science & Technology, and Intelligence. The study's conclusions include five instances from the commercial sector and six from the government R&D sector. The use of quantitative and qualitative metrics to manage performance and increase long-term profitability is becoming more commonplace, and both commercial and government R&D organisations and programmes employ the measures in this study. R&D is a critical component of any company's long-term strategy for success, and performance indicators for government entities should be chosen based on their goals and aspirations. Performance management systems that are supported by management and implemented with active employee participation are critical to a company's success: they sustain a long-term emphasis, help an organisation stay focused on its goals and critical outcomes, and communicate to the outside world the full value of agency outcomes and progress toward them, along with goal alignment, resource targeting, and output effectiveness. Allowing R&D organisations to build and implement their own performance management approach, with the support of management and active employee involvement, is the best way to communicate the true value of performance to their workforces.

(Iorwerth, 2005) It is becoming increasingly common to assess the quantity and quality of a university's research output, and this paper examines why. Because of growing competition among universities, evaluations can be an effective tool for determining how well research at those institutions is performing and for providing incentives for researchers to improve their output and quality. Various approaches to assessing university research output are presented.

Research Methodology
This section focuses on the study's methodology, as well as ethical issues, data collection, and analysis. Research is an academic endeavour in its purest form. Researchers define and redefine challenges and formulate hypotheses or possible solutions before collecting data, analysing it, and drawing conclusions. Finally, they test the findings of their research to see whether they support their hypotheses or rest on incorrect assumptions.

Research Design
A study design is used to determine which research strategy is best suited to a given set of research objectives and factors. By answering the research questions posed at the beginning of the project, it is possible to develop a methodological strategy for collecting and analysing data. Using a descriptive research design, this report analyses how well the organization's cloud, security, operations, and knowledge functions are performing, both internally and externally. A descriptive study can use both qualitative and quantitative methods; rather than relying solely on quantitative methods, this study also makes use of qualitative ones in order to gain a fuller picture of the problem. Determining what you want to find out, and gathering and analysing data from participants, are critical parts of conducting research. This design underpins the study, titled "Research Productivity Assessment and Evaluation of Research at Govt. R&D organization".

Research Method
The survey method was used in this study. A survey was conducted among 38 randomly selected respondents relevant to R&D, aged from 18 to above 50, comprising professionals, students, servicemen, and businessmen. Respondents in this age range were chosen as the target audience because they are frequently involved with R&D.

Statistical Analysis

3.3.1 Data Analysis
Data analysis is the process of drawing conclusions from raw data in order to uncover and highlight key findings on the evaluation of research at a Govt. R&D organization. Data preparation, which precedes data analysis, includes editing, coding, and data entry into the statistical software package. Once the data has been collected, it must be analysed using software to ensure that the information is correct and easy to understand. A spreadsheet containing the survey data was created and adjusted to meet the study's specific requirements, and each respondent's answers were coded numerically before being entered. The data was analysed and modelled with the SPSS software toolset; the SPSS 26.0 statistics programme was used in this study. The primary data was analysed using the ANOVA test, and almost all of the items were subjected to ANOVA analysis, which highlighted the study's most important variables. ANOVA is useful for interpreting and comparing data: it is a fundamental way of representing all relevant data and assists in acquiring a broad view of the effects evident in the collected data.

Research objectives:
• To produce new knowledge.
• To define the concepts of R&D productivity, R&D efficiency and R&D effectiveness.
• To examine current management practices for research productivity evaluation, and to design and construct research productivity assessment tools and matrices.

Data analysis
A thorough explanation of how the questionnaire results should be interpreted follows. Analysis of Variance (ANOVA) was used in the research. When there are more than two variables, ANOVA shows the relationship between the variables, with one serving as the independent variable and another as the dependent one. The results for each questionnaire item were:

• Extent of agreement that the innovation (technical risk level) impact of the planned R&D program meets acceptable levels: sig. = 0.48, F = 0.84.
• "I think the technology will work": sig. = 0.01, F = 4.24 (significant, since the sig. value is below the 0.05 significance level).
• Agreement that technological progress is being made as planned: sig. = 0.00, F = 6.36 (significant).
• Source(s) for developing new or improved products: sig. = 0.36, F = 1.55.
• Impact of revealed linkages from Federal R&D to downstream outcomes: sig. = 0.36, F = 1.09.
• Prospects for subsequent programs in the higher R&D categories: sig. = 0.09, F = 2.25.
• Agreement with the prospects for follow-up programs in the higher R&D categories: sig. = 0.37, F = 1.06.
• Status of high scientific quality program research (is it relevant, productive and well managed?): sig. = 0.16, F = 1.80.
• Whether the program mechanisms, processes and/or activities are working: sig. = 0.27, F = 1.34.
• Agreement with the output productivity of the program compared to similar programs: sig. = 0.53, F = 0.74.
• Agreement with the impact of proof of spillover from R&D: sig. = 0.64, F = 0.56.
• Major activities involved in R&D information systems: sig. = 0.33, F = 1.18.
• Major components of the R&D information system: sig. = 0.86, F = 0.24.
• Whether research and development has become the index of development of the country: sig. = 0.06, F = 2.70.
• Agreement with the impact of developing a productivity and research evaluation system for an R&D organization: sig. = 0.08, F = 2.41.
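The screening applied to these results, comparing each item's sig. value against the 0.05 threshold, can be expressed mechanically. The item labels below are abbreviated paraphrases of the questionnaire items; the (sig., F) pairs are the values reported above.

```python
# Screen the reported ANOVA results against the 0.05 significance level.
# Keys are abbreviated item labels; values are (sig., F) as reported.
results = {
    "Innovation (technical risk) impact":          (0.48, 0.84),
    "I think the technology will work":            (0.01, 4.24),
    "Technological progress as planned":           (0.00, 6.36),
    "Sources for new/improved products":           (0.36, 1.55),
    "Federal R&D linkages to downstream outcomes": (0.36, 1.09),
    "Prospects in higher R&D categories":          (0.09, 2.25),
    "Follow-up programs in higher R&D categories": (0.37, 1.06),
    "High scientific quality program research":    (0.16, 1.80),
    "Program mechanisms/processes working":        (0.27, 1.34),
    "Output productivity vs. similar programs":    (0.53, 0.74),
    "Proof of spillover from R&D":                 (0.64, 0.56),
    "Major activities in R&D information systems": (0.33, 1.18),
    "Major components of R&D information system":  (0.86, 0.24),
    "R&D as index of national development":        (0.06, 2.70),
    "Impact of productivity evaluation system":    (0.08, 2.41),
}

# Items whose sig. value falls below the 0.05 threshold are significant.
significant = [item for item, (sig, f) in results.items() if sig < 0.05]
print(significant)
```

Only two items clear the threshold, which is consistent with the conclusions drawn in the Result section: respondents significantly agree that the technology will work and that technological progress is proceeding as planned.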

Result
The survey results, together with a case study considering two R&D activities, A and B, that have some similarities and some differences, indicate that implementing a research productivity and evaluation system in an R&D organization shows that R&D leads to technical development as expected, and that the technology will be effective. Positive outcomes for subsequent initiatives in the more advanced R&D brackets are also anticipated.

Discussion
Productivity evaluation for an R&D organization cannot be done by "turning the crank" according to a set of predetermined criteria. In spite of its subjectivity and complex nature, the evaluator's opinion is essential and should not be disregarded; the competence, experience, and expertise of the assessors themselves are therefore crucial. The assessment cannot establish the role that management or science/technology plays in producing high or poor performance until the management and research process, the R&D outputs, and the effect of those outputs are all assessed. The present study has limited scope, as it was conducted with a limited sample of people involved in specific R&D activities; this scope will be further enhanced by considering R&D activities in different technologies.
(Pal & Sarkar, 2020)

The key challenges identified by the OECD are:
• Causality problem: "What is the relation between research inputs, outputs, outcomes and impacts?"
• Attribution problem: "What portion of the benefits should be attributed to initial research and not to other inputs?"
• Internationality problem: "What is the role of spillovers?"
• Evaluation time scale problem: "At what point should the impacts be measured?"
• Definition of appropriate indicators: "What are the appropriate indicators?"

Since it is difficult to establish a link between R&D input and outcomes, traditional analysis has centred on collecting data on both. Many of the long-term economic and social benefits of public R&D are frequently overlooked in this type of analysis because they take time to show up. (OECD, n.d.)