ECOSOC PANEL ASSESSES HOW UN AGENCIES IN FIELD IMPLEMENT LESSONS LEARNED FROM EVALUATIONS
Press Release ECOSOC/6062
(Reissued as received.)
GENEVA, 4 July (UN Information Service) -- The Economic and Social Council (ECOSOC) this morning held a panel discussion which assessed the extent to which United Nations funds, programmes and agencies at the field level learned lessons from their evaluations, and which formulated proposals to improve the feedback mechanisms.
Introducing the panellists and making introductory remarks, Patrizio Civili, Assistant Secretary-General for Policy Coordination and Inter-Agency Affairs, who moderated the panel, addressed the need to enhance the effectiveness of the operational activities of the United Nations system at the country level. The United Nations system had to learn from its evaluation and monitoring activities to determine what worked -- and what did not -- in order to achieve a qualitative improvement in development cooperation.
Evaluation was often seen to be about lesson learning and accountability, said Colin Kirk, Head of the Evaluation Department, Department for International Development of the United Kingdom. However, a third purpose of evaluation was real organizational development, allowing organizations to manage change and confront challenges in a changing world. In this context, Alan Nural, Deputy Director of the Evaluation Office of the United Nations Development Programme (UNDP), said that today's demands on development agencies were to do more with less; provide cost-effective and creative solutions; be results-oriented; and function in a globalized world in which their traditional comparative advantage was fading. Building knowledge capital and learning to innovate was therefore an imperative.
Hans Lundgren, Adviser on Aid Effectiveness at the Secretariat of the Development Assistance Committee of the Organization for Economic Cooperation and Development (DAC/OECD); Luciano Lavizzari, Director of the Evaluation Office at the International Fund for Agricultural Development (IFAD); and Mahesh Patel, Director of the Evaluation Office at the United Nations Children's Fund (UNICEF), focused on the guiding principles of their respective organizations.
Mr. Lavizzari said that the guiding principles of IFAD's independent evaluation process focused on strengthening independence, accountability, learning and partnership. That process formed partnerships among the users of evaluation results, moved evaluation to a higher plane, promoted the consolidation of evaluation results at the organizational level and made management responsible for ensuring implementation and monitoring.
Mr. Patel explained that UNICEF incorporated a conceptual framework of causality and related results-based programming and managing into each programme, integrating evaluations into UNICEF’s entire planning process.
However, Mr. Lundgren said there was a need to increase interaction between the United Nations and the DAC Network on Development Evaluation; to explore the potential for more United Nations system evaluation at the country level; to consider ways to improve United Nations agencies' evaluation functions, building on the experience of DAC Peer Review mechanisms; to follow up more actively on recommendations in evaluation reports; and to ensure that the evaluation agenda was relevant to the priorities of the partner countries.
Massimo d'Angelo, Chief of the Development Cooperation Policy Branch of the Department of Economic and Social Affairs, said that in spite of the intensity of evaluation activities, experience with collective lesson learning had so far been inadequate and much more needed to be done.
On the panel were also evaluation experts working in Colombia and Kenya. Eduardo Wiesner, Senior Partner of Wiesner and Asociados in Colombia, said evaluation systems could not be effective if they did not aim to change policies and the allocation of resources and to decide whether or not to continue a programme. Only incentive-based evaluation, carrying the real prospect of changed policies and cut resources, would lead to efficiency and to sustainable development. Dharam Ghai, an independent evaluation consultant working in Kenya, emphasized that United Nations evaluations must not be conducted in terms of simple economic analysis alone; assessing the impact flows of development programmes required the incorporation of complex social dimensions.
A subsequent interactive debate focused on evaluation in the new context of managing for results, ways to improve evaluation in development agencies, accountability and lesson learning. Amongst other things, speakers raised concerns about learning from others’ evaluations, cooperation on results-based management and the need to utilize local expertise. One speaker stressed that assessments and evaluations must be a two-way street, focusing both on recipients and donors. Accountability, both on the supply side and the demand side, was also a two-way street, another speaker pointed out.
Participating in the interactive segment were representatives of the Republic of Congo, Nigeria and the United Kingdom. A representative of the United Nations Joint Inspection Unit also addressed the panel.
ECOSOC will reconvene at 3 p.m. this afternoon to hear from the United Nations Country Team for Senegal.
Document
Before the Council is the report of the Secretary-General on operational activities of the United Nations for international development cooperation: assessment of the lessons learned by United Nations organizations from evaluation activities at the field level (E/2003/64). The report responds to a General Assembly resolution which requested the Secretary-General to carry out an impartial and independent assessment of the extent to which United Nations organizations, at the field level, learned lessons from their evaluations, and to formulate proposals on how to improve the feedback mechanisms at the field level. The report assesses how the United Nations system makes use of the available evaluation offices and relevant country-level evidence, focusing on the strengths and weaknesses of two processes: how the system identifies lessons to be learned at the country level and how it disseminates those lessons once identified. The report makes a number of recommendations to the Council on how to enhance the evaluation function and its use at the country level, through measures concerning individual organizations or collaboration among the parts of the United Nations system, as a means of increasing the effectiveness of United Nations development cooperation.
Statements
COLIN KIRK, Head of the Evaluation Department, Department for International Development of the United Kingdom, said that evaluation was often seen to be about lesson learning and accountability; a third purpose of evaluation was organizational development. Evaluation helped organizations to manage change in a changing world and to confront challenges. In terms of how lessons could be learned from evaluation, it was important to recognize the “feedback fallacy” of traditional “rational policy-making” contexts. In the real world, policymaking was more of a soup than a linear process, one in which long communication chains and overly forceful advocacy of evaluation results could distort the lessons learned.
When thinking about the audience for evaluation results, he said, one should examine the audience's demands, needs, interests and incentives; monitor channels, media and participation in the communication of lessons learned; and be aware of blind alleys and blockages. Another aspect for consideration was the purpose for which evaluation was used: were the lessons to be applied as learning for change or learning for results? Within the Department for International Development, the lessons learned from evaluation addressed results, standard setting and service delivery. Beyond lesson learning, however, evaluation processes should also allow an organization to manage, share and use evaluation knowledge. Evaluation should connect people to people.
HANS LUNDGREN, Adviser on Aid Effectiveness at the Secretariat of the Development Assistance Committee of the Organization for Economic Cooperation and Development (DAC/OECD), focused on evaluation in the new context of managing for results; ways to improve evaluation in development agencies; accountability and lesson learning; and suggestions and recommendations. This was a new, perhaps even unique, situation in development cooperation, and it had implications for the role of evaluation. For the first time, a universal agreement on a comprehensive set of goals for United Nations members had been reached -- the Millennium Development Goals. Official development assistance had declined in recent years, and projected increases were dependent on a clear demonstration of results. United Nations agencies were, therefore, moving forward on the results measurement and management agenda. The DAC Network on Development Evaluation approached the improvement of evaluation in development agencies through sharing experiences, learning from each other and conducting joint work. By doing joint evaluations and synthesis studies, agencies could learn from each other by working together; this also produced studies that were relevant to more than one agency's efforts.
The reasons behind evaluation were accountability and lesson learning. These were not diametrically opposed concepts, however, but could go together. Accountability was a main driver in evaluation: in the strengthening of evaluation units in bilateral agencies and multilateral banks, and in the newly created independent evaluation office of the International Monetary Fund, accountability had been at least as important as lesson learning. In his suggestions and recommendations, he stressed the need for more interaction between the United Nations and the DAC Network on Development Evaluation; for exploring the potential for more United Nations system evaluation at the country level; for considering ways to improve United Nations agencies' evaluation functions, building on the experience of DAC Peer Review mechanisms; for more active follow-up on recommendations in evaluation reports; and for ensuring that the evaluation agenda was relevant to the priorities of the partner countries.
LUCIANO LAVIZZARI, Director of the Evaluation Office, International Fund for Agricultural Development (IFAD), said that the principles of IFAD's independent evaluation process focused on strengthening independence, accountability, learning and partnership. The purposes of that process were to form partnerships among the users of evaluation results; to move evaluation to a higher plane by conducting policy, programme and thematic evaluations in addition to project evaluations; to link evaluations to IFAD's main business procedures; to promote the consolidation of evaluation results at the organizational level; and to make IFAD management responsible for ensuring the adoption, and monitoring the implementation, of evaluation recommendations. Among the tools used in the process were an Annual Evaluation Work Programme; an Approach Paper describing the evaluation framework; the Core Learning Partnership, which produced an Agreement at Completion Point; workshops, round tables and videoconferences; a Methodological Framework based on internationally accepted criteria; and an Annual Report on the Results and Impacts of IFAD's operations.
His recommendations for strengthening the traditional evaluation learning loop involved ensuring that there was: usefulness in the evaluation; ownership of evaluation results, findings and recommendations; joint learning by evaluators and users; formation of partnerships between those with different expectations, roles and interests; recognition of the special role of the poor; a strengthened position for poor people in their interactions with implementing agencies, governments and IFAD; an effort to tailor communication to the needs of different audiences; and acceptance by the users' superiors of responsibility for adopting and implementing evaluation recommendations.
ALAN NURAL, Deputy Director of the Evaluation Office of the United Nations Development Programme, said that today's demands on development agencies were to do more with less; provide cost-effective and creative solutions; be results-oriented; and function in a globalized world in which their traditional comparative advantage was fading. Building knowledge capital and learning to innovate was therefore an imperative. UNDP's new orientation was based on knowledge-based learning, capacity development and support for policy development, as well as results-based management and a corporate focus on results. Country-level evaluation needed to be based on good practice, innovations, methodologies, policies and strategy, concepts and contacts. Knowledge sustained innovation, and learning lessons enhanced adaptability and the leveraging of knowledge. What made people learn, he asked. Lessons were learned when they were relevant and timely, addressed business priorities, supported decision-making and were easily accessible, manageable and customized.
Results-based management had slowly taken hold in UNDP, and evaluation lessons were used to support and enrich strategic management, programme development, effectiveness, decision-making, advocacy and creativity. UNDP operated around a simple conceptual model, which looked at both the demand side at the country level and the supply side. As it stood, there was still a gap between the demand and supply sides which needed bridging, and there were lingering challenges within the evaluation system. More attention would have to be paid to deepening the evaluation culture, doing what mattered and supporting what worked.
MAHESH PATEL, Director of the Evaluation Office, United Nations Children's Fund (UNICEF), said that the integration of evaluations into the country-level planning process, and the preparation of an integrated monitoring plan during the programme cycle, served to make evaluation a fundamental part of planning processes at UNICEF. UNICEF incorporated into each situation analysis a conceptual framework of causality for each problem to be considered, and related results-based programming and managing into the frameworks of each programme. UNICEF promoted a utilization focus in its evaluations: decisions on the choice of subject, design and implementation were to be based upon the intended use of the evaluation. It was also recognized that communication of evaluation results was important.
Among its efforts to raise the standard of evaluation to which it submitted itself, he said, UNICEF had supported the creation of associations of evaluators in different countries and then contributed to their further training. UNICEF had also worked to create regional groups for capacity development, which had led to the creation of two international capacity-development groups. Moreover, UNICEF had incorporated quality-control standards into its evaluation process and was moving toward requiring that quality-control standards be made part of the implementation process.
EDUARDO WIESNER, Senior Partner of Wiesner and Asociados in Colombia, said people had questioned why evaluation was not more effective. He believed that evaluations had not been adequately effective in ensuring better management of funds in programme countries, since resources were still flowing in that direction regardless of results. The purpose of evaluation was to change policies and the allocation of resources -- to decide whether or not to continue to do something. Real evaluations, therefore, were those that changed policies. Once programme countries knew that resources would stop flowing following negative evaluations, efficiency levels would increase.
Many speakers had mentioned declining rates of official development assistance and resources. It was true that resources as a percentage of donors' gross domestic product had gone down. However, absolute overall resources were not declining and were, therefore, continuing to flow. It was thus important that evaluation be seen as an incentive if one really wanted to change policies. He concluded by saying that donors should not necessarily lower resources, but rather change their internal composition. Only incentive-based evaluation, and the results that followed, could lead to sustainable development. Most likely, contributions would increase when donors saw the improved efficiency of programme countries.
DHARAM GHAI, an independent evaluation consultant working in Kenya, said he wished to share with the Council his experiences of evaluation within the United Nations system. In terms of methodology, evaluation used to assess the rate of return on a project within an economics framework; it was a basic cost-benefit analysis. However, in his first experience of evaluation at the United Nations, he had had to assess the benefits that had accrued to small farmers from development projects in Bangladesh and Nepal. As he was required to assess the projects' impact in areas such as nutrition, social reform and organizational and institutional development, he had to study impact flows, which were much more complex than simply calculating increases in resources under an economic evaluation model.
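The economics framework he described can be made concrete with a textbook cost-benefit criterion (a generic illustration, not drawn from the panel): a project with benefits \(B_t\) and costs \(C_t\) in year \(t\), discounted at rate \(r\) over a horizon of \(T\) years, is judged by its net present value,

\[
\mathrm{NPV} = \sum_{t=0}^{T} \frac{B_t - C_t}{(1 + r)^{t}},
\]

where the project passes the test when \(\mathrm{NPV} > 0\), and the rate of return is the value of \(r\) at which \(\mathrm{NPV} = 0\). Impact flows such as nutrition or social reform resist expression as monetary values \(B_t\), which is what made the assessments Mr. Ghai described more complex.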
Twenty years after his first involvement in an evaluation project, he had been asked to assess the evaluation of development and poverty reduction programmes in Viet Nam. In reviewing dozens of projects mounted by United Nations agencies, he had been disappointed by how little evaluation had been conducted of the individual programmes. He concluded that the United Nations system still had a long way to go in building its capacity for evaluation, and that external evaluation was also important for United Nations agencies.
MASSIMO D'ANGELO, Chief, Development Cooperation Policy Branch of the Department of Economic and Social Affairs (DESA), said that in preparing the report, DESA had tried to interact and consult with the real users of evaluations, those at the front line of decision-making on the implementation of programmes. In the eyes of the evaluation profession, many of the activities examined would not be considered real evaluation, since they lacked corporate rigor. Self-evaluation was also important and had to be taken into account. These types of evaluation activities were not mutually exclusive; they interacted with, fed and benefited from each other. The report found that lessons learned from evaluation processes were increasingly complex and needed to be addressed at different levels. In spite of the intensity of evaluation activities, experience with collective lesson learning had so far been inadequate; much more needed to be done.
Interactive Segment
Before the interactive discussion, the Vice-President of the Council, Abdul Mejid Hussein, thanked the panellists for their important contributions to the issue and encouraged United Nations programmes to learn from the lessons identified and to implement the required changes.
One speaker said his major concern was whether there was a real will to coordinate efforts in operational activities. He asked if the UNDP, when engaging in evaluations, took into account evaluations carried out by other players in the field.
Another speaker stressed that assessments and evaluations must be a two-way street, focusing both on recipients and donors. He also questioned whether, within the context of national ownership, there was any utilization of local expertise.
The focus on results was a driver in the process of change, one speaker said. However, there must also be a focus on accountability, both on the supply side and the demand side.
Responding, Mr. Wiesner underlined that once evaluation was seen as an incentive, it would be demand-driven; the method, though important, would then be secondary. Mr. Patel stressed that donors had been encouraged to use the tools available to them, including increased use of local expertise. He said the identification of questions and problems was easier through results-based management, since it allowed a clear identification of areas of focus and clear goals.
Mr. Nural said evaluations were at a crossroads, as organizational effectiveness had to be balanced against development effectiveness. The utilization of local expertise was an important issue, since external evaluations often could not be applied to the field. Coordination and cooperation were ongoing within UNDP, both before and after evaluations were undertaken. On the subject of external evaluation, he stressed the need to maintain independence and impartiality.
A representative of the United Nations Joint Inspection Unit pointed out that her organization was a sort of independent evaluation unit, created by and reporting to the General Assembly. Whether the Unit had been used as effectively as possible was an issue that Member States might wish to consider. It was the joint responsibility of the United Nations system and its bilateral partners to assess whether the contribution made by the system had reached its maximum level of efficiency and efficacy.
Concluding Remarks
PATRIZIO CIVILI, Assistant Secretary-General for Policy Coordination and Inter-Agency Affairs, said it was clear that a common evaluation culture was emerging, but there was still a long way to go. There was a need to shift from a purely outcome-specific approach to a more integrated approach.
* *** *