Moneval2


Africa RISING – CSISA M&E Meeting, November 11-13, 2013; ILRI, Addis Ababa, Ethiopia

Final program: File:AR - CSISA ME Meeting Agenda - FINAL.pdf


Documentation of the event:

  • Read an article about this event (upcoming on the Africa RISING website)
  • See presentations from this event
  • See [http://www.flickr.com/photos/africa-rising/sets/72157638078202313/ some pictures about this event] (Africa RISING set) and [http://www.flickr.com/photos/harvestchoice/sets/72157638293598533/ more pictures] (HarvestChoice set)

See the list of participants at the bottom of this page.


Monday, November 11th

Session 1: Welcoming remarks, project overviews and issues (9:00 – 10:15)

For both Africa RISING and CSISA, a speaker will give an overview of the project. Donors from USAID will focus on their priorities and interests in the projects, as well as the role of these projects in their agency’s larger development portfolio. IITA, CIMMYT, and IFPRI representatives knowledgeable about Africa RISING and CSISA will discuss the projects’ objectives, timeframe, geography, sites, and partners. They will also address the agricultural technologies and practices that best characterize the projects’ approaches and how these technologies/practices are being delivered or promoted. Regarding both projects’ M&E systems, they will address the main ways in which the projects are monitoring their activities and outputs, and the main ways in which the projects are evaluating their outcomes and impact. Based on their experiences with these M&E systems, they will discuss Africa RISING and CSISA’s performance to date and challenges in monitoring, evaluation, and data management for the projects.

Chair: Iain Wright (ILRI) Presenters:

Question and answer period

Coffee Break (10:15 – 10:30)

Session 2: Discussion on learning from monitoring and evaluation (10:30 – 11:45)

We begin the cross-project learning process with a discussion in which a panel of experts will be asked the following questions: (1) What is the purpose of monitoring and evaluation? (2) What makes a good evaluation? (3) What constitutes valid evidence? We will then conduct a lively moderated debate between audience members and the panelists on their experiences and challenges with monitoring and evaluation in CSISA and Africa RISING.

Moderator: David Spielman (IFPRI) Roundtable discussants: Peter Thorne (ILRI), Maggie Gill (DFID), Alemayehu Seyoum Taffesse (IFPRI)

Question and answer period

World Café Discussions: Sessions 1 & 2 (11:45 – 12:45)

Facilitator: Ewen LeBorgne (ILRI)

Discussion Questions:

  • In what ways do physical scientists’ and social scientists’ goals for monitoring and evaluation converge? In what ways do they diverge? Which methods do they use?
  • How do results from controlled field experiments translate to real-world outcomes?
  • How does one differentiate the M&E requirements of a research project from those of a development project?

Lunch (12:45 – 13:45)

Session 3: Monitoring Tools (13:45 – 15:15)

Africa RISING and CSISA have developed a set of tools designed to monitor research activities on the ground and satisfy management reporting requirements. In this session, we explore how these tools work and how they help project staff and stakeholders better accomplish their objectives. We explore how the flow of information within a project can be managed efficiently and effectively, how FTF monitoring indicators are managed, and how web-based interfaces can help with functions, tasks, and responsibilities. We further explore ways to improve monitoring with other tools, applications, and security systems.

Chair: Carlo Azzarri (IFPRI) Presenters:

Question and answer period

Coffee Break (15:15 – 15:30)

Session 4: Project monitoring and information management: Keeping track of activities and outputs (15:30 – 16:30)

Researchers from both Africa RISING and CSISA will give short presentations on how each project tracks its activities and outputs for donors, partners, and stakeholders. They will discuss the number and type of indicators their project tracks, including field trial and demonstration trial data, training events, training participants, and other relevant information. The presenters will also speak about the information they integrate from external sources, such as temperature and precipitation data, commodity prices, and facts about in-country infrastructure. In addition, they will address the need for custom indicators and discuss issues related to compilation, management, analysis, and reporting. An open discussion on solutions for better project monitoring and information management will follow.

Chair: Irmgard Hoeschle-Zeledon (IITA) Presenters:

Question and answer period

Breakout Discussions: Sessions 3 & 4 (16:30 – 17:30)

Discussion Questions  How can web tools be used more effectively to track project progress?  What kinds of custom indicators have been useful in South Asia and Africa?  How can we ensure better information and data management?  How do we integrate M&E with wider project management?


Tuesday, November 12th

Session 5: Characterizing landscapes, communities, and households (9:00 – 10:30)

This session will focus on the tools, methods, and data used to better understand the landscapes, communities and households that are part of Africa RISING and CSISA. Emphasis will be placed on understanding the different approaches to the measurement of key variables relating to agro-ecology, poverty, and vulnerability, which aid in answering questions such as: How suitable is a technology/practice to a particular setting? How poor/wealthy are potential beneficiaries? How vulnerable are they to shocks? Topics for discussion may include the following: farming systems analysis; poverty and vulnerability analysis; and the analysis of integrated household surveys and panel survey data.

Chair: Adam Silagyi (USAID) Presenters:

Question and answer period

Coffee Break (10:30 – 10:45)

Session 6: Evaluating and forecasting technologies, practices, and dissemination/learning approaches (10:45 – 12:15)

A set of senior socio-economists and biophysical and economic modelers will talk about the analytical value and limitations of experimental approaches. They will discuss randomized controlled trials (RCTs) and their use in establishing causality, exploring trade-offs, eliminating bias, and quantifying impact, as well as their limitations and pitfalls. They will also address non-experimental approaches (propensity score matching (PSM), regression discontinuity (RD), or others), qualitative methods, and mixed methods, and their place in evaluation. Additionally, they will discuss choice experiments, games, auctions, and the use of hypotheticals to evaluate preferences and willingness to pay. They will also discuss how they use information from the field to calibrate models that forecast future scenarios, and the importance of translating micro-level household data into data at a landscape or national scale that predicts both the patterns and the impact of large-scale uptake. Furthermore, they will address the importance of integrated bio-economic modeling and system dynamics to integrate global climate model (GCM), land use, crop, livestock, water use, and partial/general equilibrium economic models to capture the many moving parts of a system. Examples of forecasting approaches include IMPACT, AgMIP, and HarvestChoice.

Chair: Mirja Michalscheck (Wageningen University) Presenters:

Question and answer period

Lunch (12:15 – 13:15)

World Café: Sessions 5 & 6 (13:15 – 14:30)

Discussion Questions:

  • How can socio-economic and biophysical modeling be integrated?
  • How can we embed farmer behavior in forecasting models, and do we have the capacity to do it?
  • What are the pros and cons of farming system approaches?
  • What are the pros and cons of baseline/endline surveys and panel data?
  • What are the pros and cons of experimental approaches to evaluation?

Coffee Break (14:30 – 14:45)

Session 7: Review and Presentation of Discussions (14:45 – 16:00)

In a gallery walk, presenters from the World Café and breakout groups will present the discussions from each session, highlighting the major ideas each group shared.

Facilitator: Ewen LeBorgne (ILRI)

Social Event – Cocktail at ILRI Campus (18:00 – 20:00)

Wednesday, November 13th (Africa RISING Discussions)

Session 8: Reflections (9:00 – 9:30)

The Africa RISING team will look back on the previous two days’ discussions and select priority points.

Session 9: Project Mapping Tool (9:30 – 10:30)

IFPRI and its collaborator Spatial Development International will take a more in-depth look at the project mapping tool (PMT).

Presenters: Melanie Bacou (IFPRI), Todd Slind (Spatial Development International). See Project management monitoring tools.

Discussion: What (additional) features would we like the PMT to have? What does it take to make the PMT useful for all stakeholders? What (potential) issues of concern are there in using the PMT? How do we address them?

Coffee Break (10:30 – 10:45)

Session 10: M&E of Africa RISING: General and Mega-site-specific Successes and Challenges (10:45 – 12:00)

In a fishbowl format, project coordinators from the three mega-sites will share goals and challenges and discuss how AR household survey data can be used to help inform the research teams. M&E coordinators will share experiences, challenges, and M&E plans and, when available, talk about preliminary results from AR baseline household surveys.

Chair: Irmgard Hoeschle-Zeledon (IITA) Presenters:

Lunch (12:00 – 13:00)

Session 11: Africa RISING Breakout Sessions (13:00 – 14:00)

M&E coordinators from the three Africa RISING mega-sites will share in greater detail their experiences and facilitate discussion about site-specific issues. IFPRI M&E coordinators will be joined by a representative from the research team to provide insight on monitoring-related issues faced by the research teams.

East and Southern Africa (ESA) Facilitators: Ainsley Charles (IFPRI), Festo Ngulu (IITA)

West Africa (WA) Facilitators: Justice Ajaari (IFPRI), Shaibu Mellon Bedi (IITA)

Ethiopian Highlands (EH) Facilitators: Beliyou Haile (IFPRI), Kindu Mekonnen (ILRI)

Session 12: Final Discussions and Concluding Remarks (14:00 – 15:00)

Coffee Break (15:00 – 15:15)

Session 13: Africa RISING Joint Mega-site Meeting (15:15 – 16:45)

Senior representatives from Africa RISING donors, implementers, and M&E team will meet to discuss successes, challenges, and next steps for the M&E system. (Participants: Irmgard Hoeschle-Zeledon, Justice Ajaari, Ainsley Charles, Mateete Bekunda, Asamoah Larbi, Tracy Powell, Carlo Azzarri, Beliyou Haile)


Notes of the meeting


Session 1 - Welcoming remarks, project overviews and issues

  • Question: How do you ensure integration?
  • Answer: innovation platforms, joint modeling, ex-ante and ex-post assessments. In our hubs we are opportunistic: we find farmers who are interested and spread out from there. From an M&E perspective that is really difficult because we can't build counterfactuals etc. But donors want to see development impact on the ground and these hubs are showing some results and success.
  • Q: What is the scale of action areas in Africa RISING and how does it relate to CSISA?
  • A: Kebeles in Ethiopia, but it really varies by country. In CSISA a hub covers about 2 million people. AR activities are targeted at the village level; in CSISA the focus is different. The technologies are diffuse, not targeted at very specific villages. The megasites (e.g. Ethiopian Highlands) are related to the hub action areas in CSISA.
  • Comment: the most successful technologies we see (using satellites, mobile phones etc.) are not benefitting from innovation platforms and are happening without our inputs.

Session 2 - Learning from monitoring and evaluation

Talk show with Peter Thorne (ILRI), Maggie Gill (DFID), Alemayehu Seyoum Taffesse (IFPRI).

  • Q: Why do we care so much about M&E?

A: (MMG) The world is changing much faster; oil is not a given as in the past. We used to think about things more leisurely, but now we have to check that we are on the right track and that the research is designed with full awareness of the context. M&E is not just about what you're doing but also about 'is there anything out there that has changed?'. We may have to tweak some things or give them up.
(PT) We do different kinds of things. We used to do research that was unidisciplinary; we tended to monitor our work in a simple way and it was embedded. We need more complex M&E methods to assess our more complex multidisciplinary research.
(AST) The impact on the well-being of people has become a significant imperative. In parallel, our standards have improved (due to the growing complexity); we are moving the standards.

  • Q: Do the M&E systems we are building give us an opportunity to learn? We're supposed to feed back our results to donors but do we have the leisure to learn.

(PT) The win is there, but we have problems around the interdisciplinary nature of the team and misunderstandings within it (we don't all speak the same language), and our project leaders' expectations are quite high ('I want a button to press that gives me real-time data'). We have to look at systems that come closer to that and look at what data are available. The intellectual background in our M&E is fine, but some of the implementation is not so accessible as regards ongoing monitoring.
(AST) One thing is important: to have an effect and have valid evidence. A good evaluation will produce evidence that is internally and externally valid, reliable (it can be reproduced in similar situations), and relevant. E.g. evaluating the productive safety net programme in this country allowed us to learn, for policy-makers and ourselves too. A number of studies show the impact of the program; targeting has improved, etc. One of our key lessons is that impact evaluation studies should not be used as an instrument of advocacy but of dialogue: they should give us opportunities to analyse and absorb that information.

  • Q: Do you think donors have the patience to see researchers and practitioners go through this process of learning, adapting, changing course etc.?

(MMG) There is increasing recognition within DfID, with its new chief scientific advisor (appointed 4 years ago), of the importance of qualitative analysis and learning. We dedicate time to continual program development and other systems to access the latest papers in a given area. Once a week, on Monday afternoons, two scientific advisors take us through papers to discuss the context. We are making more time for learning and are more conscious about it, and we are not critical of others about this. In the ISPC, we go through all 15 consortium programs, and some have come up with a couple of really innovative learning elements; these attracted positive comments in the ISPC. Overall, M&E was felt to not be quite there yet, but that will be continuous learning. We have to try and make the time for learning.

  • Q: Is there a learning culture in Africa RISING?

(PT) It's really interesting to compare AR with the Humidtropics (HT) programme. There and here we feel a sort of M&E lag: we are focusing too much on deliverables, research outputs etc. It's a frustration for Humidtropics. I think AR has quite a good learning culture. We have dedicated funds for comms, learning, knowledge etc. across the three projects and in each project. We had a learning event one month ago, though we have some concerns about follow-up from it. We're not bad compared with many projects.

  • Q: Sustainable intensification. For system agronomists, programs like AR and CSISA are contributing to this field. For many years we focused on improved cultivars and synthetic fertilizers etc., but we are now working on much broader agendas. If you scan the literature, there's still little documentation about these new issues. Is that not related to the difficulty of measuring them? How about setting up evaluations for new programs that look at these integrated issues?

(AST) In our ESSP program we designed a randomized controlled trial to isolate the impact of row planting in producing improvements. The design provides the same improved seeds, chemical fertilizers, and seeding rates for two randomly selected groups of households (drawn from 40 sites in 10 woredas). In parallel we involved DAs (extension agents) to run trial inputs in our research stations. We measure outputs (crop cuts, total output, farmer expectations about yield), etc. The impact of row planting alone ranges from 2 to 20%. The major lesson is that the results of a particular technology (e.g. a management practice) can depend on many things, so we need to learn from all these other aspects; trials alone are not enough.

(PT) We've had discussions about RCTs and we need to understand how to apply an RCT to a given issue. We found RCTs difficult to use for what we are studying, i.e. combinations of inputs at different system levels. We select certain interventions and adapt some, so we have to look at the evolutionary process. What we are looking for is measurement approaches that allow that flexible process, e.g. qualitative, case-based studies. This brings us to the question about the level of quality that you need. Rather than one approach we need multiple approaches, and to triangulate. For the research process we're struggling with this combination.

(AST) There is no single best way to come up with such evidence. We cannot be eclectic in an arbitrary way; we need to be systematic about it. Evaluations have to start early and to evolve. With multiple interventions/packages, there are impact evaluation techniques allowing quantitative methods (e.g. matching) that can be applied at system level. We have to think creatively and be flexible and open-minded.

(MMG) One of the things that impressed me in AR is the mention of sequencing, the necessity to have things in place before starting others. On nutrition, there is a big lack of evidence; sanitation is a big driver of nutrition. This relates back to learning and 'what is it', as we progress, that makes us want to spend on this or that. We have to keep going with the sequencing. From the Science Forum we also had a good discussion on the use of qualitative methods to understand: when you can't differentiate but you can understand the processes that are unfolding, it might be the best way to make progress.

(PT) We have been doing a number of diagnostics to see how they complement each other (qualitative and quantitative), and in December we have a meeting in Oxford about the application of different methods at different scales.

Questions from the audience:

  • (Alwin Keil): How do you deal with observational methods such as matching? RCTs have their role but they're not applicable to all sorts of situations. Do we have to put up with the limitations of propensity score matching? What are your recommendations? (See the sketch after this list.)
  • (MMG) How do we get DfID funding aligned with Africa RISING funding, looking at markets, demand, etc.?
  • (PT) What is the time scale of sustainability? Can we talk about scalability when considering context specificities, and do we explore enough how to adapt technologies? We have to adapt technologies, not scale them rigidly...
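
Since matching came up repeatedly in this exchange, here is a minimal, self-contained sketch of propensity score matching in Python on simulated data; the covariates, selection rule, and the true effect of 2.0 are all invented for illustration, not taken from any project dataset:

  import numpy as np
  from sklearn.linear_model import LogisticRegression

  rng = np.random.default_rng(0)
  n = 500
  x = rng.normal(size=(n, 2))                                      # observed covariates (invented)
  treat = (x @ np.array([0.8, -0.5]) + rng.normal(size=n)) > 0     # selection depends on x
  y = 2.0 * treat + x @ np.array([1.0, 0.5]) + rng.normal(size=n)  # true effect = 2.0

  # 1. Estimate the propensity score P(treated | x)
  ps = LogisticRegression().fit(x, treat).predict_proba(x)[:, 1]

  # 2. Match each treated unit to the control with the nearest propensity score
  t_idx, c_idx = np.where(treat)[0], np.where(~treat)[0]
  matches = c_idx[np.abs(ps[c_idx][None, :] - ps[t_idx][:, None]).argmin(axis=1)]

  # 3. Average treatment effect on the treated = mean gap across matched pairs
  att = (y[t_idx] - y[matches]).mean()
  print(f"ATT estimate: {att:.2f} (true effect 2.0)")

The caveat raised in the question applies: matching only adjusts for observed covariates, which is exactly its limitation compared with randomization.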

Discussions for sessions 1 & 2

World Café discussions

In what ways do physical scientists’ and social scientists’ goals for monitoring and evaluation converge? In what ways do they diverge? Which methods do they use?

Group 1A:

  • Outcome level: social, physical scientists have the same goals
  • Tools: differ greatly; should harmonize M&E goals early (both are important; combining ==> richer data); cross-communication is necessary, methods should feed into each other
  • Social scientists should come in earlier
  • Physical scientists focus on process, social scientists on impacts
  • Difficult to converge:

  • Interested in similar goals (methodology diverges)
  • Tools and design can be similar: qualitative data (context, calibrate numbers); data quality built into design is ideal; need to collaborate earlier
  • Donor-driven vs. driven by numbers

  • Areas of divergence: Practicality:

  • Get carried away by own needs, methodology
  • M&E methods not always well-adapted to location
  • Stepwise physical testing not always compatible with social scientists' M&E
  • Social scientists don't spend enough time with farmers

Specific tools:

  • Overlap in long-term assessments
  • Common tools are ideal (must happen in planning stages, happening well in Africa RISING e.g. many tools in common and not all are defined as M&E tools)
  • Generalizable/generalized indicators (soil pH, productivity)

Group 1B: How do they converge:

  • Demonstrate impact,
  • learn together,
  • validate,
  • ground truth,
  • assessing ex-ante,
  • increase livelihoods,
  • reduce poverty,
  • increase technology change and adoption
  • High quality evidence
  • Multi-scale analysis
  • Process monitoring: understanding processes of a mechanism (black box economists?)
  • Methods: reported/survey data from households + subjective staff observations vs. observed data on a biophysical process

How do they diverge:

  • (Biophysical scientists): evaluated positively for DG, donors,
  • (Economists): shoot down bad ideas
  • (Biophysical-->socio):

  • Collection of credible data + types of data
  • Analysis of data
  • Controlled environment: people controlled (constant) vs. technology controlled (constant)
  • Models/theories & facts/statistical constants

  • What type of (qual/quant) evidence?
  • Level of evidence... outputs (yield) vs. impacts (income)
  • Huge scales:

  • Soil microbiology... landscapes... global climate
  • Household... community... global economy

How do results from controlled field experiments translate to real-world outcomes?

Group 2A:

  • Hard to replicate conditions you had when doing the experiment
  • Results are more useful for policy purposes...
  • Ease of replicability may depend on how similar the 'real world' is with the setting under which the experiment is done
  • Results may actually not translate into the real world
  • Question: How should research results (not necessarily from controlled experiments) translate?
  • On-farm research results are closer to real life
  • Irrespective of the level scale --> need for continuous experimentation and opportunities for learning and training
  • Availability of resources and who is the decision-maker?
  • Bottom-up/demand-driven vs. top-down

E.g. action research; good partnership (engagement) starting from the planning stage.

  • Creating awareness --> demand
  • Market (-oriented/linkage) research activities
  • Internal validity: will the 'real world' farmers have access to all the different technologies, practices etc. 'experiment farmers' got?
  • Go for the extremes: sites with visible differences in factors that would affect the pathway?
  • Robust research design

"Enough" sample size No result to replicate in the real world!

Group 2B - FIRST GROUP: Mirja (NRM) / Eliud (socio) / Gaetano (socio) / Aklilo (socio) / Aster (bio), 12:15pm

  • a weird question, not contradictory, already in the real world; if functions well, already have an example
  • redefine "real world": don't do intervention
  • RW is the farmers' world: what is the farmer able to do in RW
  • controlled experiment: directed inputs
  • not clashing, one is more ideal
  • what does control mean? not manipulation
  • translate: used by farmers, community groups in their normal activities
  • conduct experiments as own package; to translate, endusers in rural/urban areas, different community; experiments not adopted with full package
  • difficult to transfer
  • external influences: donors, may not be internalized by beneficiaries; demand v supply driven
  1. Assess context to allow to internalize
  2. Field experiments / RCTs (intensive)
  3. Give opportunity to transfer (suitable, acceptable)
  4. Feedback / M&E (outcomes)
  5. Select targeted interventions == scale (no longer RCTs)

SECOND GROUP: Ngulu (bio) / John (bio) / Irmgard (agro) / Britta (agro), 12:35pm

  • no agreement on weird question; need to have background information, there's a good case for RCTs to separate degrees of freedom
  • experiments do show potential of particular technologies
  • must test adaptability in different agroecologies and to match w/farmer context
  • changing 5 parameters (say), hard to understand
  • experiments at plot level, results from plot; what meaning for farm, landscape and beyond; is there a model to extrapolate
  • 'best-bet' technologies from experiments at what level: plot only; then need to try out

1. Useful and vital; a stepping stone; credible data; how else to know which of a set of interventions is transferable

2. Already specific

3. 'Translate': do good due diligence up front to make sure it's translatable

4. Has to be thought through up front, otherwise we are wasting resources

Highlights...

  • importance of context
  • RCTs useful, vital to learning
  • Importance of thinking-through up front, good due diligence (learning w/o context a waste), to ensure translatability
  • M+E important, tracking input/output/outcome links
  • Feedback / iterations
  • Scale

How does one differentiate the M&E requirements of a research project from those of a development project?

  • Research activities in the field are multidisciplinary and complex, so the M&E system needs to be flexible and adaptable to the different conditions.
  • There should be two levels of M&E, according to the level of analysis/reporting: 1. AR program level; 2. Intervention/package level.
  • The M&E system needs to look also at:

  • the process of research, to try to understand its dynamics
  • the scalability of the project (how applicable is it to other contexts/countries?)
  • the sustainability dimension, which needs to be looked at with ex-ante modeling (e.g. the analysis carried out by the Wageningen team)

  • In general, development projects need to be more accountable for the funds spent and allocated than research projects, where funding is disbursed according to the research proposal submitted.
  • The M&E of research projects (such as AR) and their activities is more difficult, as research output is less measurable than that of development projects, where the outcome variable is clearly defined and can be monitored more easily.
  • Indeed, research focuses more on some particular types of output (number of papers, number of people in the workshop, etc.), while development projects focus more on impact (e.g. change in farmers’ behavior). Research should be leading to development that ultimately needs to have an impact.
  • For the above reasons, M&E system/components in development projects tend to be stronger than in research projects.

Session 3 - Monitoring tools

A presentation of the Project Management Tool (PMT) was given by Melanie Bacou and Todd Slind. Presentation: Project management monitoring tools

Session 4 - Project monitoring and information management: keeping track of activities and outputs

Presentations:

Africa RISING ESA project - Monitoring and information management

Africa RISING monitoring and evaluation activities in West Africa

Cereal Systems Initiative for South Asia: Monitoring and Evaluation

Q&A:

  • Q: Unless you do a lot of analysis, is it really worth collecting data about e.g. different varieties and their reactions to different fertilisers etc.?
  • A: This is not one person's job. And there are different roles for market etc.
  • Q: Is M&E costing interventions (from a project perspective)?
  • A: It's important but how to get there is not clear. We have global cost data per work package etc. and now we can look into crop sampling, land preparation etc. It's theoretically possible but not a priority right now.
  • Q: Do you use data for adaptive management - Is there a part for data used by communities? Do you have an estimate for time spent from data collection to data used for project management?
  • A: During the M&E workshop in Mali we had an opportunity to share the FtF data with partners.
  • Comment: we have to be careful about the level of detail of activities etc. We are trying to get a full-time agricultural economist in our projects to ensure this happens.

Discussions for sessions 3 & 4

How can web tools be used more effectively to track project progress?

Summary

  • Integrate data collection into established workflows
  • Eliminate latency between collection and presentation
  • Provide reporting in formats and frequencies useful to targeted audiences
  • Enable offline collection

Full group notes

  • x - Integrate data collection tools into established work flows. Enable offline data collection.
  • x - Provide reporting in formats and frequencies that are useful to targeted audiences.
  • Allow users to “subscribe” to the indicators and locations that are meaningful to them (see the sketch after this list).
  • Allow users to classify projects in ways that are meaningful to them.
  • Automated notifications for “passive” updates.
  • x - Eliminate latency between collection and presentation.
  • Design should be responsive and performant.
  • Cross-compatible with other programs’ progress reporting systems where appropriate and practical.
  • Flexible in summarizing progress along different dimensions, spatial and time scales.
  • Extensible for adding custom indicators.
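
The “subscribe” and “automated notification” items above amount to a publish/subscribe pattern. A minimal sketch in Python, with invented user, indicator, and location names (not the PMT's actual design):

  from collections import defaultdict

  class IndicatorFeed:
      """Users subscribe to (indicator, location) pairs; each update pushes a notification."""

      def __init__(self):
          self.subscribers = defaultdict(set)  # (indicator, location) -> set of users

      def subscribe(self, user, indicator, location):
          self.subscribers[(indicator, location)].add(user)

      def publish(self, indicator, location, value):
          # A real system would email or display in-app; printing stands in here.
          for user in self.subscribers[(indicator, location)]:
              print(f"notify {user}: {indicator} in {location} is now {value}")

  feed = IndicatorFeed()
  feed.subscribe("me-coordinator@example.org", "farmers_trained", "Babati")
  feed.publish("farmers_trained", "Babati", 412)  # a "passive" update reaching its subscriber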

What kinds of custom indicators have been useful in South Asia and Africa?

Summary

  • Indicators/goals defined in logframes (partial overlap with FtF indicators)
  • Reduction in cost of production on-farm
  • Willingness to pay for technology (buy seed, hire service providers)
  • Productivity per unit of water (or other inputs)
  • Institutionally required indicators

Full group notes

What purpose will custom indicators serve? Rapid, snapshot indicators collected more frequently than the baseline/endline surveys (frequency as needed), for adaptive management purposes. Areas of interest defined by project coordinators/chief scientists.

  • Some ambiguity regarding whether these indicators are intended to help monitor progress at the level of individual/package interventions, or program-level impacts. Unresolved question: How to confine monitoring to factors within the manageable interest of a research program, without neglecting responsibility for achieving development-level impacts where possible.

How will indicator data be collected? TBD depending on the indicators chosen and collection frequency, but probably by research teams or the regional M&E coordinator. However, there may be feasibility issues with each of these collection strategies (willingness to comply, limited time for additional data collection, farmer burden if these are participatory data). Not sure whether these would be universally applied across AR projects, or if the program would develop site-specific indicator sets.

  • Discussion in Q&A: CSISA experience was limited willingness/capacity of hub partners to provide data, and dedicated M&E person at each hub couldn't be under the authority of the hub director.
  • Interest in collecting custom indicators via electronic interface.

What broad areas require custom indicators?

  1. Farm System Intensification (consideration of both input- and output-driven efficiency gains)
  2. Sustainability Indicators:

  • Economic Sustainability (potentially with some sort of community-level metric that captures AR contributions [either individual technology or program level] to local ag markets/economy)
  • Environmental Sustainability (potentially biodiversity, soil health, water, climate forcing, etc.)
  • Social Sustainability (effects on partnerships)

  3. Gender Equity? (Adequately captured by the baseline survey?)
  4. Nutrition? (Adequately captured by the baseline survey? We could use some sort of subjective nutrition measure, as per Carlo's data on subjective food availability correlating with wasting)

What specific indicators could we use to monitor progress on these broad areas? Discussion of Vital Signs indicators for Sustainable Intensification (with biodiversity, climate, H2O, resilience, inclusive wealth, and other components), Soil Health, and others. The group agreed to circulate the threads/indicators/indicator protocols developed by Vital Signs and pick out potential indicators that could be used by Africa RISING, based on appropriateness & feasibility (esp. whether an indicator can be expected to shift over the lifetime of the project).

  • Also discussion of potential collaboration between VS and AR -- e.g. VS willing to consider an intensive AR intervention area as one of their 10x10 "case study" areas. This would allow VS to capture the in-depth experimental data generated, and AR to capitalize on the contextual data VS generates/curates.

Subjective/reported vs. quantitative indicators? Discussed the respective benefits/constraints of each: farmer-reported/qualitative indicators can be cheaper/quicker (i.e. better adapted to ongoing, adaptive monitoring), but could impose a burden on farmers. There is potential to compare/validate parallel systems of participatory vs. observational data to assess sustainable intensification (see the sketch below).
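
A minimal sketch of such a participatory-vs-observational validation in Python, assuming paired farmer-reported and crop-cut measurements for the same plots; all numbers below are invented for illustration:

  from scipy.stats import spearmanr

  # Hypothetical paired observations for the same plots (t/ha)
  farmer_reported = [1.2, 0.8, 2.5, 1.9, 0.6, 3.1]  # recall-based, cheap to collect
  measured = [1.0, 0.9, 2.2, 2.1, 0.5, 2.8]         # crop-cut, expensive to collect

  rho, p = spearmanr(farmer_reported, measured)
  print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
  # A high rank correlation would support using the cheaper participatory
  # indicator for ongoing, adaptive monitoring between survey rounds.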

How can we ensure better information and data management?

Summary

  • Use donor guidelines for data management

  • Define key data outputs
  • Use restrictions
  • Release frequencies
  • Data ownership
  • Define roles and responsibilities (collection, analysis)
  • Define primary data repository

  • Use communication and knowledge management (CKM) group for data curation and dissemination
  • Incentivize/prioritize data curation and dissemination (e.g. include data curators as co-authors, track data downloads)
  • Reward researchers for pursuing open access (OA) data policies

Full group notes (Alwin / Maggie / Ngulu / Mel / Birhanu / Thomas / Fatima)

Mel:

  • don't reinvent the wheel; some donors have guidelines on what makes a good data management plan, often a component of the M&E plan
  • key questions: what data outputs needed, what data will project generate
  • have checklist of key data output: primary, secondary
  • who is data owner: generator, curator
  • what are data dissemination channels: public domain (open access); so policies already exist
  • get drafts of DM plans: DFID (more detailed, fleshed out), USAID, research orgs also
  • what roles, responsibilities
  • IFPRI libraries w/people familiar w/curation, web-based data repositories; tapping into own centers, need to involve early in the process, not enough done now
  • common vocabularies, library people w/additional time
  • communications dep't make sure data is available to other audiences (blogs, maps, etc.)
  • money to be put aside for curation (more than for web-based tools)
  • having the right people / capacity to work with data
  • working with privacy
  • "public domain by default"
  • timeline issues: how soon can it be made available (no lag) rather than 2-year quarantine
  • (big shift for cg system)
  • "data police"
  • new philosophy? are publications worth more than a (well) curated dataset
  • creative commons license
  • incentive systems need to change at policy level first (publications, clean and curated datasets)
  • having "data curators" on list of authors
  • financial incentive for curating and publishing data

Maggie:

  • how much use will all this extra data be put to? (track data downloads, >50000)

Birhanu

  • collect landscape/watershed data (raw not available, summary available; can request)
  • publication issues, give 6mth-1yr quarantine

How do we integrate M&E with wider project management?

Summary

  • Challenge: M&E is currently not sufficiently integrated with project management. An easy template for reporting on FtF indicators is available, but it is not used/relevant for project management
  • Solutions:

  • Ideally: set up project management indicators complementary to the FtF indicators
  • Project management software linked to the project monitoring tool, to enable automatic reports + real-time monitoring
  • Train on using data at every level as feedback for project management, so it is a motivational tool
  • Automatic reports feed into aggregated technical reports (at different levels)

Full group notes:

  1. M&E is not integrated yet with the research activities on the ground
  2. The FTFMS template has been filled out, but it is not helpful for project management
  3. We need project management indicators to complement the FTFMS indicators
  4. Software/automated tool to automatically generate report/graphs/maps to give feedback. It should be seen as a motivational tool to enable real-time monitoring and use of the data at any given level.

Do we also need management indicators? Do we need burn rate indicators (periodic financial reports)? They might be useful to track progress.

Session 5 - Characterizing landscapes, communities and households

Presentations:

  • Farming systems approach
  • Vital Signs: an integrated monitoring system for agricultural landscapes
  • Spatial framework and characterization in Africa RISING and Vital Signs
  • Characterizing households and communities for Africa RISING

Q&A session:

  • Q: Are you not measuring too much? (Simret please add)

A: ... We know that increasing yield etc. doesn't have a nutritional impact. In some cases maize production seems to have had some significant impact.

  • Q: Are we not collecting too much data? What do we do with all the data we are collecting? And how do you satisfy the requirements of all partners, beneficiaries, donors, etc.? Whose role is it to serve that data? (Maggie Gill) Have you reflected upon who the decision-makers are? Isn't it a challenge of M&E to look at what you collect and check the cost implications?

A: (Roseline) There have been stakeholder meetings to discuss existing data and how they've been used, integrated indicators, and discussions on the ground. It's a continuous dialogue. We've designed these processes in light of the discussions with these stakeholders in the public and private sector. We cover nutrition and stunting in our indicators, and the latter is an integrated indicator; we are looking at it in terms of agriculture, health, and environment.
A: (Maria) We compensated farmers with soap. It was difficult to do the measurements with children; the interviews were split into 2 sessions.
(Carlo) The questionnaire is really long but people keep suggesting other questions.

  • Q: Did people ask you what you were using that data for?

A: All the specificities of the questionnaire were explained by the enumerators; respondents agreed to take part.
A: (Peter Thorne) We did the questionnaire in Ethiopia recently; we agreed not to spend more than 2.5h at a time and decided to split questions across certain sections of the household (man / woman / together). I can't imagine that someone gives quality answers after 4h. If we reviewed each question asking 'why are we collecting this data?', we would find we could cut the questionnaire down to 20%.

  • Q: (Mateete) We talked about research teams administering the questionnaire, and I wonder if we can compare their work with that of an external enumerator team?

A: (Carlo) In Tanzania there are a couple of companies that could take care of this, but some are not using CAPI, are using another system, etc., so we are trying to use the same methods in Ghana. Data from the field might be affected by the timing of the questionnaire too; we may have to look at factors affecting poverty levels. USAID advised us to customize their module, but with this survey we can answer a lot of research questions.
(Maria) About data quality: there was an option to implement a check on a specific field and between sections. The survey also supports audio auditing (at any given time the tablet can record the conversation and indicate whether the interviewer is asking the questions, etc.).
(Carlo) We really have to use that data.
Comment: 1) In order to extrapolate the results we have to think about the context and adapt the questionnaire accordingly, so we can learn from the evaluation. 2) In the impact evaluation we are trying to understand causality, but we don't have experimental methods, only quasi-experimental methods such as propensity score matching, which requires a lot of data. 3) In impact evaluations we also care about how we can do better on impact, looking at heterogeneity and how we can apply the evaluation to other contexts.
Comment: A 5-hour questionnaire is very heavy. The biggest issue is scaling, e.g. doing baselines in other countries after Malawi. One donor is asking for certain issues; there is a lot of duplication between projects. Is there a way to coordinate so we avoid doing the same surveys?

  • Q: How do you characterize communal land?

A: In the questionnaire we look at land allocation across different users and ask stakeholders to look at e.g. communal water resources (customary / by law). We didn't get more specific answers than that.

  • Q: Do crops include trees?

A: Yes.

  • Q: How can we be more efficient to ensure modeling requirements are met?

A: (Mirja) In Wageningen we looked at what is required for the model and considered the aspects that are covered by it; we used Africa RISING data and followed our reductionist survey (which still takes 2h, including 20 min of introduction).


Session 6 - Evaluating and forecasting technologies, practices and dissemination/learning approaches

Presentations:

  • Economic policy analysis tools for sustainable intensification
  • Crop modeling framework for strategic decisions
  • Evaluation in Africa RISING

Questions and answers:

  • Q: Control vs. treatment: we have a control treatment, so do we need a difference-in-difference-in-differences (triple difference)?

A: In economics we usually place the control group in a completely different setting, whereas here we have the control group in the same village as the treatment group.
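
For reference, a minimal difference-in-differences sketch in Python (pandas + statsmodels) on simulated data; the group assignment, time trend, and true effect of 2.0 are invented for illustration:

  import numpy as np
  import pandas as pd
  import statsmodels.formula.api as smf

  rng = np.random.default_rng(1)
  n = 400
  df = pd.DataFrame({
      "treat": rng.integers(0, 2, n),  # 1 = intervention village (invented)
      "post": rng.integers(0, 2, n),   # 1 = endline survey round
  })
  # Outcome: common time trend (+1.0), permanent group gap (+0.5),
  # and a true treatment effect (+2.0) only for treated units after the intervention.
  df["y"] = 1.0 * df["post"] + 0.5 * df["treat"] + 2.0 * df["treat"] * df["post"] + rng.normal(size=n)

  fit = smf.ols("y ~ treat * post", data=df).fit()
  print(fit.params["treat:post"])  # the DiD estimate; should be close to 2.0

The interaction coefficient isolates the treatment effect from both the time trend and the permanent gap between groups, which is why the estimator tolerates a non-randomized control group as long as the parallel-trends assumption holds.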

  • Q: How do you assess impact between farmers who benefited from one intervention and others who were not supposed to benefit but did, through related interventions? The relative impact might be higher than what we observe (it is underestimated) because the control group benefited from other interventions. This is why we try to gather more information, to find out if there are other interventions that might affect the control group.

Comment: RCTs are seen as the gold standard, but I would ask for more critical views of this method because there are pitfalls. There are difficulties in implementing RCTs in practice, with selection bias, incentive schemes, etc.

  • Q: How are you going to measure the impact of Africa RISING? In our program we provide many (combinations of) interventions to farmers to improve productivity, soil condition, etc., but in the end how can we state what we have achieved with Africa RISING?

A: We will never be able to say 'this is the impact of AR', but we can certainly compare with areas that are not affected by Africa RISING.
A: (Peter Thorne) We can't afford to apply the standards because of the complexity of our interventions. But we want evidence, and there's a continuum of evidence. We can't achieve the gold standard, but we can have evidence that is triangulated with other evidence.
(Carlo) We can't apply an RCT but we can apply difference-in-differences etc., so we are using mixed methods.
(Tracy) Does CSISA deal with these types of issues?
(David Spielman) We had a mess with control villages etc. The interventions were variable and evolved, and the program was emergent and opportunistic. We moved treatments around, and ultimately we decided not to go for an endline but for specific studies on particular markets, areas of study, etc. So we have a lot of good evidence from RCTs, other evidence on service providers, other on mechanisation, etc., and we didn't get too tied up in these kinds of discussions because these programs are too complex. However, I like the idea of control, treatment, and spillover, because someone will come back to you asking about what you've been spending so much on...

Final comment: each method has its cost, so it doesn't always make sense to go for the most rigorous method. What evidence gap are we facing and what do we want to know? If we want specifics on certain areas, we can use rigorous methods on specific interventions, not on all of them.

Session 7 - Presentations from the group work sessions

Day 3 - Africa RISING specific program

Session 8 - Presentation about the project management tool

  • Q: I'm not sure that we can organise this around 'work packages' for all countries, because we don't have work packages in all countries. In addition, there are sites across various districts and districts with various sites.
  • A: We allow reporting to happen at multiple scales so it should work out.
  • Q: Is this information web-accessible? There might be anonymity issues around this, because sites are very specifically described, making it easy to recognise which farmers are zoomed in on.
  • A: Currently not and we certainly have to be smart about how we deal with anonymity issues.
  • Comment: We can also upload our own data on PMT and we would check the validity of this data as part of the security protocol.
  • Q: Any data uploaded to the database will be publicly viewable via the website?
  • A: In the CKAN catalogue you can restrict access to certain organisation members etc. It's for the data curator to decide about this. Roles are readers, editors, and super users. (See the sketch after this list.)
  • Q: Can any of our data be considered politically sensitive by our partner countries?
  • A: Some information about pests / diseases could be used by some... We have to discuss issues around data privacy. Ethiopia and Tanzania are the most sensitive countries on this. Some of this discussion might have to influence the ethical guidelines we have developed.
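
A minimal sketch of the access restrictions mentioned above, assuming a hypothetical CKAN instance and the Python ckanapi client; the URL, API key, dataset, and organisation names are all invented:

  from ckanapi import RemoteCKAN

  # Hypothetical catalogue; a real key with org-admin rights would be required.
  ckan = RemoteCKAN("https://data.example.org", apikey="MY-API-KEY")

  # Mark a dataset private so only members of the owning organisation can see it
  # (package_patch is available in CKAN 2.3+; older instances use package_update).
  ckan.action.package_patch(id="ar-esa-baseline-2013", private=True)

  # CKAN's built-in organisation roles are member (read-only), editor, and admin,
  # which map only roughly onto the "readers, editors and super users" above.
  ckan.action.organization_member_create(id="africa-rising", username="data_curator", role="editor")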

Anyone against online data entry? After a year we will have to evaluate to what extent the tool has been used, for what, and whether the database has grown. Until you try the tool it's difficult to find out what works with it and what doesn't. We have to make some decisions quickly to customise the tool, and then we can roll out training on the PMT in the next 6 months. It's absolutely a tool that could make our lives easier.

Beliyou Haile’s PMT-related notes from the Day 3 sessions (Nov 13, 2013)

  • During Todd’s demo of the PMT, it was noted that:

  • Features of a work package (narratives, target technologies, partners, contracts...) can/should be summarized in the PMT
  • A web-based data collection template can be embedded in the PMT
  • The data collection template will work both online and offline: it will be sent to research teams to download and enter data regularly; the database is stored in their browser and can be uploaded to the system when there is internet access
  • Tracy raised the issue of data confidentiality; Mel noted that this will be addressed in the data management plan that IFPRI will be developing
  • Mel also noted that features can be added to the PMT so that users can interrogate the system for summary statistics, tables, and graphs
  • When the database grows, the PMT will need to allow filtering of data, e.g. by including a set of filter variables (such as the type of technologies used) and by embedding a time-slider feature so users can get summary tables from previous years

  • Institutional data from other sources can be integrated into the PMT (e.g., data on agro-dealers in Ghana)

  • Tracy asked whether there will be some data that some governments may find sensitive and, if so, how to deal with this. She also noted that other USAID-funded projects (small-scale irrigation, mission-funded projects) may end up using the PMT, and we have to find the best way to integrate the different data (sources). Todd noted that the sensitivity of data/information has to be determined at the project level.
  • There will be PMT training, maybe first for the M&E coordinators, who can then train representatives from each research team
  • Irmgard asked if we could/should assess the progress of the PMT (how many people used it, etc.) and Todd/Mel noted that this is doable.
  • Todd noted that the current FtF indicators tab is at mega-site level. The group discussed whether reporting FtF indicators at the level of the work package would be better than reporting at community level (a sketch of such multi-level roll-up follows these notes).

  • But in some countries, reporting at the level of the work package may not be possible (since there are no work packages), in which case the M&E coordinators will have to enter data at the relevant level
  • A suggestion was also made to include project-specific indicators
  • Thomas Wobill noted the importance of having a feedback system in the M&E system/PMT and of making the PMT more interactive, to allow users to leave notes/questions for the M&E team. Maria noted that this can be done in CKAN. There was also a mention of using Yammer to make the PMT more interactive
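
A minimal sketch of how records entered once at the lowest level could be rolled up to community, work package, or mega-site level, in Python/pandas; the records below are invented for illustration:

  import pandas as pd

  # Invented activity records tagged at the lowest level; entered once,
  # then aggregated to whichever level a report needs.
  records = pd.DataFrame([
      {"mega_site": "ESA", "work_package": "WP1", "community": "Babati",   "farmers_trained": 30},
      {"mega_site": "ESA", "work_package": "WP1", "community": "Kongwa",   "farmers_trained": 12},
      {"mega_site": "WA",  "work_package": "WP2", "community": "Navrongo", "farmers_trained": 25},
  ])

  for level in ["community", "work_package", "mega_site"]:
      print(records.groupby(level)["farmers_trained"].sum(), "\n")

Tagging each record with all levels at entry time sidesteps the work-package problem: countries without work packages simply leave that column empty and still roll up to community and mega-site totals.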

Next Steps

  • Update the PMT based on feedback received at the M&E meeting, share a link with the research teams for testing, and set a date for PMT training
  • Ensure better information and data management through the PMT

  • Maybe assign/hire one data manager who will be in charge of managing all AR data in the PMT and beyond
  • IFPRI, in consultation with the research teams, to look into CCAFS (CRP7) to identify tools for collecting data from agricultural trials that can be adapted to AR and embedded in the PMT
  • Thomas to share (with the IFPRI M&E team) a summary of what he has learned during the Africa RISING M&E meeting and his previous M&E experience, on how best to embed a feedback system into AR’s M&E system/the PMT
  • IFPRI to make the PMT more interactive by, for example, allowing users to leave notes/questions about the PMT
  • IFPRI to look into how best to assign unique IDs to AR beneficiaries, in a way that also addresses the potential mobility of beneficiaries (see the sketch below)
  • Embed a feature in the PMT to allow the M&E team to track the number and type of users as well as the type of information downloaded/reviewed
  • IFPRI to draft a (PMT) data management plan
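
On the unique-ID action point, a minimal sketch of one possible approach in Python: a salted hash of stable personal attributes (the fields and salt are illustrative, not a decided design), so the ID does not change when a beneficiary moves between sites:

  import hashlib

  def beneficiary_id(first_name, last_name, birth_year, salt="AR-2013"):
      """Deterministic pseudonymous ID built only from stable personal attributes
      (not location), so it survives when a beneficiary moves between sites.
      Note: a salted hash is pseudonymisation, not strong anonymisation."""
      key = f"{salt}|{first_name.strip().lower()}|{last_name.strip().lower()}|{birth_year}"
      return hashlib.sha256(key.encode("utf-8")).hexdigest()[:12]

  print(beneficiary_id("Amina", "Hassan", 1975))  # same inputs always yield the same ID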

Action points from group work

The following group discussions were not addressed in this session:

  • Farming systems pros and cons (Mirja Michalscheck is taking care of this and knows how to address these points - any additional feedback is welcome, however.)
  • Modeling issues (farmer behavior, modeling socio economic + biophysical issues) because it's about modeling, not about M&E.
  • Ensuring that implementers and evaluators work better together, because despite one person championing this topic in the plenary group, no one came to discuss this and the topic was abandoned.

The following group topics were addressed and are documented below:

How can we ensure better information and data management (lead: Ewen Le Borgne)

  • DC-based M&E team to:

  • Dig out the USAID (draft), DfID, CCAFS, and MCC data management guidelines.
  • See how these could be adapted to Africa RISING (looking at issues of open access and data curation) and come up with suggestions for these guidelines.
  • Disseminate/present this proposal on data management guidelines at the upcoming program coordination team (PCT) meeting in January 2014.

  • (The ethical guidelines team) to review/update those ethical data use guidelines in the light of data recommendations made.
  • The Comms/KM team will

  • Update the profiles of all project teams on the wiki with basic information about their thematic areas and contact details.
  • Post profiles or interviews of Africa RISING members on the website.
  • Set up a wiki section on data management with the contact details of the people in charge.

  • The respective web platform managers will extract data-download statistics on a quarterly basis and share them with the PCT team.

Custom indicators for Africa RISING (lead: Justice Ajaari)

  • Review Vital Signs indicators: Roseline to structure these indicators for Africa RISING, and Justice / Eliud / Thomas / Peter / Beliyou / Irmgard to review them.
  • Asamoah and Justice to propose core indicators for review and customisation.
  • Mateete, Asamoah and Irmgard to discuss at Ibadan in late November.

Integrating M&E with wider project management (lead: Beliyou Haile)

M&E team in consultation with the researchers:

  • Develop a tool to help researchers / project coordinators gather field data that can feed into e.g. the PMT

  • How frequently should/will data be reported? FtF: yearly reporting?
  • Can we use a template for researchers to collect data as they go along, in real time? E.g. area under improved technology (to use it as a management tool, we need to know when / how often the indicator should be measured)
  • Training totals will be specific to the site & indicator
  • Challenge: how process data are entered/recorded may be technology-specific

  • Look into modules to collect agricultural trial data (www.agtrials.org) / CCAFS
  • Examine if this tool can be used in the context of AR - all plots need to be georeferenced

Web tools (lead: Melanie Bacou)

M&E team to:

  • Clarify reporting levels:
    1. Extensionists, NARS, village chair, VEO
    2. Work package leader
    3. Research team leader
    4. Chief scientist
  • Pilot mission to work with actors in group 1

  • Define project-specific indicators + other geo-data
  • Test / build the most suitable data entry tools
  • Goal: reduce effort
  • Provide training

  • Adapt PMT: clarify PMT use cases
  • Catalog relevant spatial datasets

Samoan Circle discussion about issues faced by Justice and Ainsley

  • Challenge in establishing trust
  • Reluctance to share data
  • Interests of the researchers vs. other needs for data
  • Communication and responsiveness issue
  • M&E persons are met with suspicion (but then again they only started their job recently and were introduced to a few team members)
  • It takes time to understand perspectives but it’s necessary to put ourselves in each other’s boots
  • Use ‘soft’ ownership messages – IP terms in contracts
  • We need to reach the collective understanding that M&E is integral to Africa RISING
  • Data management plan should be consistent with ethics of engagement
  • Communicate benefits to those sharing information
  • Be clear about what data is shared – define needs up front
  • Results from AR activities must be centrally stored, archived, available
  • Data products should be readable and usable
  • Set expectations in data plans: what will be available and when
  • We should aim at a policy and objectives for sharing and storing
  • Economists, agronomists M&E all have knowable data needs
  • Creative Commons Attribution (CC BY) licenses are useful for data and other outputs, stressing the attribution of our data
  • M&E is everyone’s responsibility, at all levels of the program
  • OK, but only the M&E team has a budget for it, so how do we mainstream this?
  • More involvement is required of research teams in M&E planning and of M&E teams in research team planning (e.g. at recent Ethiopia planning meeting nobody from M&E was around)
  • More engagement is required to melt the lines between science and M&E by the M&E working group
  • We talk about M&E as abstract and removed; define it comprehensively to include all of us
  • The “M” is what everyone does – leaving the E of M&E aside for a moment.
  • Perhaps it would be an idea to have simple 'M&E for dummies' materials explaining why, what, and how?
  • Engagement, engagement, engagement is all that matters for this.


Thank-yous and closing by Cleo Roberts


List of participants

No. First Name Last Name Organization
1 Justice Ajaari IFPRI
2 Carlo Azzarri IFPRI - DC
3 Melanie Bacou IFPRI
4 Mateete Bekunda IITA - Tanzania
5 Eliud Birachi CIAT - Uganda
6 Ainsley Charles IFPRI
7 Maria Comanescu IFPRI - DC
8 Olaf Erenstein CIMMYT - Addis Ababa
9 Aster Gebrekirstos ICRAF - Addis Ababa
10 Maggie Gill DFID
11 Zhe Guo IFPRI - DC
12 Beliyou Haile IFPRI - DC
13 Irmgard Hoeschle-Zeledon IITA-Nigeria
14 Alwin Keil CIMMYT - New Delhi
15 Jawoo Koo IFPRI - DC
16 Britta Kowalski CIP
17 Asamoah Larbi IITA - Ghana
18 Ewen Le Borgne ILRI - Addis Ababa
19 Godfrey Manyawu ILRI
20 John McMurdy USAID
21 Kindu Mekonnen ILRI - Addis Ababa
22 Shaibu Mellon-Bedi IITA - Ghana
23 Mirja Michalscheck Wageningen University
24 Annet Mulema ILRI - Addis Ababa
25 Festo Ngulu IITA - Tanzania
26 Leonard Oruko IFPRI - Addis Ababa
27 Tracy Powell USAID
28 Roseline Remans Columbia University
29 Cleo Roberts IFPRI - DC
30 Pascale Schnitzer IFPRI - DC
31 Gaitano Simiyu AGRA - Kenya
32 Emma Simmons IFPRI
33 Todd Slind Spatial Development International
34 David Spielman IFPRI - DC
35 Alemayehu Taffesse IFPRI - Addis Ababa
36 Peter Thorne ILRI - Addis Ababa
37 Eric Witte USAID
38 Thomas Wobill IITA - Nigeria
39 Iain Wright ILRI - Addis Ababa
40 Simret Yasabu ILRI - Addis Ababa
41 Fatima Zaidi IFPRI - DC
42 Birhanu Zemadim ICRISAT - Mali