#Evidence2018 insights
In a recent panel discussion at the #Evidence2018 conference in Pretoria, South Africa, panellists from Benin, Uganda and South Africa discussed how government institutions use evidence for better-informed policy-making. The panel, titled "Cross-Governmental Panel Discussion: Sharing Institutional Insights into Evidence-Informed Policy-Making Approaches in Africa", also delved deeper into the governmental landscapes unique to these countries.
Decision-making happens daily in public sector programmes and policy-making in these countries. All three countries use systematic mechanisms to take the evidence generated to the policy-makers who need it, in order to repair the disconnect between those who need evidence and those who produce it. This has resulted in significant efforts by country departments to use evaluation results to improve public services.
Data is collected through various processes of researching, assessing, analysing and enquiring. Through these processes, all three countries are able to compile evidence, which is then used to identify what works, what doesn't, and what needs to be done differently to increase impact. Below are summaries of the presentations.
Benin
Benin has two levels of government: 22 ministries at the national level and 77 municipalities at the local or district level. In 2007, the Bureau de l'Évaluation des Politiques Publiques et de l'Analyse de l'Action Gouvernementale (BEPPAAG) was established with two mandates: (1) to refine and implement the National Evaluation Policy; and (2) to monitor the performance of departments and municipalities to improve service delivery. Located in the Presidency, this office has commissioned 24 national public policy evaluations in sectors including health, finance, agriculture, education, energy and water.
In order to analyse how the results and recommendations of nine public policy evaluations conducted between 2010 and 2013 were used, BEPPAAG undertook a study on the use of evaluation results. The general objective of this study was to ascertain the steps taken by line ministries to implement evaluation recommendations and so ensure the efficiency of public services. Significant efforts have been made since 2010 by the departments to use evaluation results in the improvement of public services. The study showed that between 2010 and 2013, eighty (80) recommendations were made from evaluations, of which seventy (70) were incorporated into the concerned ministries' action plans. The ownership of the recommendations looks as follows:
- 39 (56%) were fully implemented at the time of the monitoring mission.
- 31 (44%) were partially implemented at the time of the monitoring mission; some are planned for the short and medium term.
- As for the 20 recommendations that were not implemented, the reasons given were a lack of financial resources or of an institutional framework.
The study also notes that 40% of the recommendations led to the revision of existing public policies or the formulation of new ones. These policies relate to technical education and vocational training, agriculture and handicrafts. In the energy sector, for example, they led to the development of a rural electrification policy in 2016. The study further notes that 10% of the recommendations led to new institutional frameworks at two line ministries, and another 8% led to the formulation of new projects and programmes in the agricultural, water, and technical education and vocational training sectors.
The main lesson learned from this study is that the quality of an evaluation, which can be measured by the relevance of its recommendations, is critical to the use of its results. The study also revealed that some evaluation recommendations are very broad and require multiple steps or reforms, which makes them harder to implement.
South Africa
The Department of Planning, Monitoring and Evaluation (DPME) in the Presidency has put in place a framework for looking at evidence and knowledge systems. Data is collected through processes of researching, analysing and enquiring. Through visualising and analysing the data, evidence is compiled, and what emerges is knowledge on what needs to be done differently to increase programmatic impact. Programme managers and policy-makers can then use this information in planning, implementation and budgeting/resource allocation. Decisions are taken regularly by those who implement policies on the ground, and that shapes what is actually implemented. When policy-makers fail to raise awareness of, or share their views on, the evidence that informed a policy, implementers often simply do what works for them, and that is still the norm today.
There are regulations in public administration that are not sector-specific but determine much of the behaviour of public institutions: compliance with auditors, National Treasury requirements, court orders, and so on. The DPME therefore took a serious look at the incentives that shape decisions within the public sector and worked within them. That is, it collaborated extensively with public institutions and implementers to enhance the use of evidence, while still trying to find some compliance measures.
Government plays a very important role in shaping policy, but it is not the only actor making policy. It is often assumed that government is the place where policy is made and that it is relatively homogeneous. But policy is also contested within government, where different public institutions raise questions about which outcomes matter most, the quality of the evidence generated, and which priorities should take political preference.
Researchers can sometimes block other forms of knowledge when politics and cultural thinking are ignored, creating a monopoly over ways of knowing and of communicating that knowledge. This affects whose voice is heard in evidence-informed decision-making and policy-making (EIDM/EIPM), and raises the question of whether spaces can be created for equal sharing and learning, where power is distributed more evenly and where different views are heard and appreciated as much as the voices of those who conduct evaluations and publish the evidence.
Uganda
In Uganda, decision-making happens daily in public sector programmes and in the implementation of government projects. A lot of evidence goes into the long process of policy-making, and Uganda's process is highly consultative. But the question of how evidence is actually used to inform policies remains a continual point of discussion.
Uganda instituted a number of reforms to establish a robust National Monitoring and Evaluation System (NMES) to improve efficiency and deliver positive development results. In this process it was discovered that little evidence was being used in Uganda's policy process; one noted example was that the agricultural sector's annual plan remained unchanged from 2007 to 2009. Systematic mechanisms are needed to take the evidence generated to those who need it, in order to repair the disconnect between the people who need evidence and those who produce it.
Between 1997 and 2007, Uganda was preoccupied with poverty reduction projects. At the time, there were no tools to assess whether the projects were successful; in other words, there was no monitoring of projects. Another norm of the time was a focus on stabilising the economy and putting measures in place for economic growth. After 10 years of stability, the country developed a five-year national development plan. This plan allocates a significant amount of money to each sector and, with that, many questions are being asked, such as:
- Is the government implementing the right policies?
- Are they doing the right activities?
- Are the right projects in place to lead the country to where it wants to go?
To answer those questions, what was required was better evidence, better evaluation systems and monitoring the performance of government projects. Due to the nationwide implementation of reform and the ensuing demand for evidence to show that government projects were delivering results, NMES was born.
The Ugandan NMES provided robust assessments of selected programmes' performance in the previous year, ensuring that the following year's programme plans were informed by that performance. All sectors were also required to demonstrate strong use of evidence on what was needed to achieve the outcomes they had set.
Of note is that most Ugandan research is done by local universities, with little engagement between government and researchers on priority research areas or agenda setting. This translates into low exposure of policy-makers to new evidence. To address this, a national research institute was formed; it has been particularly successful in the agricultural sector, where evidence on disease-resistant crops was widely accepted by farmers.
Going forward, Uganda will need to work out how best to entrench the culture of evidence use in the relevant departments, so as to avoid a situation where only auditors or evaluation institutes use evidence.
Consultant needed for Rapid Evaluations in South Africa
A number of evaluation recommendations in government are not fully implemented due to a host of constraints, and the opportunity for learning and improvement is thus lost. These constraints relate to time (delays in completing evaluations), finances (evaluations are costly) and human capacity (a general lack of experienced evaluators in the country). In addition to challenges in programme/policy implementation and monitoring, governments continuously face emergencies requiring timeous and well-informed intervention strategies.
To this end, Twende Mbele and the Department of Planning, Monitoring and Evaluation (DPME) in South Africa are looking for a consultant or consultants to work with DPME to explore current practice in rapid evaluation tools for the public sector.
The following are expected outcomes from the project:
- A desktop comparative analysis that explores existing rapid evaluation tools across sectors and compares them, to enable a judgement on which would be most amenable to the South African Public Service.
- Two rapid evaluation tools, including relevant templates and guidelines for the application of the rapid evaluation approach for piloting by DPME.
Please read the full terms of reference here.
Applications close on Friday, 7 December 2018.
From infancy to maturity: Constraints to the “Made in Africa Evaluation” (MAE) concept (Part 2)
By: Mokgophana Ramasobana and Nozipho Ngwabi
The blog titled "Made in Africa Evaluation: Africa's novel approach towards its developmental paths (Part 1)" provided a historical overview of some of the initiatives proposed to pioneer the MAE concept by various African scholars and evaluation practitioners, including Prof. Zenda Ofir, Prof. Bagele Chilisa, Dr. Sukai Prom-Jackson and Dr. Sully Gariba, to name a few. As a follow-up, Part 2 aims to explore some of the factors that influence the maturity of the MAE concept beyond rhetoric and into practice, and raises a question around its uptake within the broader evaluation discourse.
The idea for Nozipho and me to collaborate on this blog took root quickly. Two international events provoked our thinking and expedited the opportunity to co-write it. The first was a seminar titled "Decolonising the Evaluation Curriculum", hosted by CLEAR-AA during the delivery of the Development Evaluation Training Programme in Africa (DETPA). The panellists comprised national and international evaluation experts: Prof. Bagele Chilisa (University of Botswana), Dr. Nombeko Mbava (University of Cape Town), Dr. Kambidima Wotela (University of the Witwatersrand) and Ms. Adeline Sibanda (AfrEA President), with Ms. Candice Morkel (CLEAR-AA) as moderator. The second was a panel discussion titled "There is no Resilience without Equity: When will our Profession Finally Act to Reverse Asymmetries in Global Evaluation?", chaired by Ms. Adeline Sibanda at the 13th European Evaluation Society (EES) Biennial Conference. Both events were characterised by heated debates among panellists and participants, and from them we identified four key themes that emerged as common threads. These four themes inhibit the deepening of the MAE discourse, both conceptually and in practice: (i) over-reliance on western worldviews or paradigms; (ii) the dominance of donors as commissioners of African evaluations; (iii) supply-chain practices that crowd out African evaluators; and (iv) the perceived infancy of the evaluation profession in Africa.
(i) Over-reliance on western worldviews or paradigms
The colonisation of African people in the 19th century had the dire consequence of desecrating their traditional knowledge systems, cultural practices, values and beliefs (Kaya and Seleti, 2013). Scholars argue that Eurocentric or western worldviews of "knowledge" are yet to appreciate alternative, non-western ways of knowing and producing knowledge. As a consequence of this lack of appreciation, African or indigenous knowledge systems are less documented and evidenced in the broader academic discourse (ibid.). The evaluation profession is not immune to this influence: the theories informing evaluation practice in Africa are dominated by western paradigms (Cloete, 2016).
Various African scholars (Chilisa, 2012; Chilisa and Tsheko, 2017; Shiza, 2013; Ofir, 2018) have embarked on numerous initiatives aimed at championing indigenous or localised African knowledge systems in the evaluation sector. These initiatives are geared to ensure that Afrocentric approaches, inter alia methodologies, ways of knowing and philosophies, are embedded into evaluation praxis. Some of the studies elevating Afrocentric paradigms include: indigenous knowledge systems (IKS) (Keane, 2008; Geber and Keane, 2013; Keane, Khupe and Seehawe, 2017; Khupe and Keane, 2017); decolonisation and indigenisation of evaluation (le Grange, 2016; Chilisa, Major, Gaotlhobogwe and Mokgolodi, 2016); and MAE, or African-led and African-rooted evaluations (Cloete, Rabie and de Coning, 2014; Chilisa, 2017; Ofir, 2018). These authors acknowledge that African voices and ways of knowing should be integrated into the discourse of development. In spite of these commendable initiatives, African knowledge systems and paradigms remain insufficiently used, specifically in evaluation practice on the continent. We have to ask ourselves why this is the case.
To avoid the risk of providing a simplistic solution to a complex phenomenon, we recommend that opportunities should be created for collaboration between young and experienced African scholars to proactively pursue a research agenda around MAE and the translation of the findings into evaluation practice. However, this issue requires deeper conversations within the evaluation community around ways in which this shift in approach can be attained.
(ii) Dominance of donors as commissioners of African evaluations
Accountability for the financial investments injected into Africa by donor communities has elevated the demand for evaluation and played a significant role in the institutionalisation of evaluation practices (Tirivanhu, Robertson, Waller and Chirau, 2018). This is corroborated by the African Evaluation Database (AfrED) report (2017), commissioned by CLEAR-AA in collaboration with CREST for the period 2005-2015, which shows that donors commissioned 69% of evaluations, with the remaining 31% split between NGOs and governments. Notably, non-African evaluators have typically been appointed as project leads responsible for technical and strategic activities during the implementation of evaluation assignments, while African experts are relegated to supporting activities entailing administrative and logistical duties (Mouton and Wildschut, 2017). These disparities in roles and authority to some extent reinforce the widely held view that African scholars are less skilled at executing credible evaluations (Tirivanhu, Robertson, Waller and Chirau, 2018, p. 230). Once again, a trite solution cannot be suggested for such a complex problem, but commissioners of evaluations (particularly donors) could consider revising procurement regulations to facilitate equally shared responsibilities between African and western experts. In addition to capacity-building initiatives focused on building African expertise in evaluation practice, it is time to also look at the legal-technical and administrative levers (such as procurement) that could catalyse a change in the existing patterns of supply and demand on the continent.
(iii) Supply-chain practices crowd out African evaluators
Building on the second theme, the evaluation field in Africa has historically been, and is currently, dominated by the Global North. Cloete (2016, p. 55) states that "Evaluations in Africa are still largely commissioned by non-African stakeholders who mostly comprise international donor or development agencies that run or fund development programmes on the continent". In addition, current supply chain frameworks insist that evaluation expertise be sourced from the development agencies' countries of origin. This observation coincides with Phillips's (2018) findings from a study of four major donors who commission evaluations in South Africa: the majority of international donor evaluation contracts in South Africa are obtained by international companies, who often sub-contract local expertise to help them understand the local context. This means that evaluation criteria, methods and approaches are designed from a Global North orientation and that minimal effort is made to contextualise or 'indigenise' evaluations.
This situation raises concerns around the cultural competency of evaluators conducting evaluations in African contexts, particularly when they are led by donor or development organisations that do not recognise the importance of this aspect of evaluation practice (AEA, 2011; Hopson, 2003; Rebien, 1997). We acknowledge that more work needs to be done to develop a body of knowledge of Afrocentric paradigms, ways of knowing and methodologies for conducting and commissioning evaluations in Africa. Once this is available, a rich database of African methods could be made available globally, contributing to the incremental documentation of Africa's ways of knowing and elevating both the indigenisation of evaluation practice and the prominence of African knowledge systems.
(iv) Perceived infancy of the evaluation profession in Africa
The slow progress of professionalisation of the evaluation discipline is common globally, as only a few countries have formally professionalised evaluation (Podems, 2015). M&E has not been professionalised in any African country, and this may be one of the main reasons for the slow progress of the Made in Africa concept. It is only in fairly recent years that monitoring and evaluation capacity-building programmes, such as the CLEAR Initiative, the International Programme on Development Evaluation Training (IPDET), trainings offered by Voluntary Organisations for Professional Evaluation (VOPEs) such as the African Evaluation Association (AfrEA) and the South African Monitoring and Evaluation Association (SAMEA), as well as university programmes, have been developed to contribute to the growth of evaluation in Africa (Stockdill, Baizerman and Compton, 2002; Stewart, 2015; Denney and Mallett, 2017).
Scholars generally concur that professionalising evaluation should be a priority (Montrosse-Moorhead and Griffith, 2017; Podems and Cloete, 2014; Lavelle, 2014). The idea of professionalisation appeals to those looking to improve quality control in evaluation practice and to address the lack of uniformity in the field and in the roles of evaluation practitioners. Without the standardisation of evaluator competencies on the continent (or, one could argue, globally), it is difficult to position the 'Made in Africa' concept among the several other standardisation issues the field already faces.
In summary, bringing the MAE concept to maturity by addressing the four constraints highlighted above requires greater cohesion and more intensive championing amongst practitioners and scholars. As a way forward, it is proposed that a few disruptions be introduced into the system to stimulate change in the well-entrenched patterns of evaluation practice in Africa. These include: intensified research collaboration between experienced and young African scholars to establish a body of knowledge for MAE; adjustments to procurement practices, which could for example include a compulsory split between African and western experts with equally shared responsibilities in evaluation assignments; the commissioning and conducting of inter-disciplinary evaluations; and expedited momentum towards the professionalisation of evaluation practice in Africa.
Abstract Submission Deadline Extension
Due to a number of requests for an extension, the submission deadline for abstracts for the 9th AfrEA International Conference has been extended to Monday, 26 November 2018.
The 9th AfrEA International Conference 2019 will take place from 11 to 15 March 2019 in Abidjan, Côte d'Ivoire. The theme for this conference is "Accelerating Africa's Development: Strengthening National Evaluation Ecosystems".
You are hereby invited to submit a proposal for papers, workshops, panels, round-tables and exhibitions under any of the 12 conference strands, in either French or English. Twende Mbele would also like to extend an invitation to submit to the strand "The Role of the Judiciary, Executive and Legislature in Evaluation: Responsive national evaluation systems":
- What are potential mechanisms for strengthening synergies between different actors within a national evaluation system?
- How are power and politics being harnessed or utilised in responding to evaluation challenges?
Please see the attached call for abstracts/proposals for more information on how to submit an abstract/proposal.
Abstracts may be submitted online. To submit an abstract, please log onto the AfrEA website, www.afrea.org. Please note that AfrEA does not accept submissions sent via email.