
In 2024, I attended a meeting in Indonesia on alternative measurement approaches hosted by the Global Fund for Community Foundations. Colleagues from around the world gathered to explore how we might understand, distill, and celebrate knowledge in ways that made sense to our communities.
One term kept surfacing: *logframes* — short for *logical framework*, the planning matrix at the heart of the Logical Framework Approach (LFA). For many in the room, it symbolised a legacy of bureaucratic control and knowledge extraction, disconnected from lived realities. As someone working in community foundations in Australia, I was struck by how little we used or even talked about logframes.
So back in Australia, I started asking around. Most people I spoke to hadn’t heard of logframes. Those who had heard of them offered mixed views. Some said they could be helpful — they encourage people to plan, to clarify objectives and sequencing. But they also said that, too often, people end up retrofitting outcomes to match a pre-approved document rather than doing meaningful work. I also learnt that no one is super passionate about logframes.
That got me curious: how did this tool become so dominant? Who decided it was the right one? Was there a moment — a workshop, a conference, a decision in a back room — where it became the global norm? Was a hammer with “LFA” etched into the handle handed out with each new international aid contract?
So I went down rabbit holes.
A tool from Washington
Logframes trace back to the late 1960s, when the United States Agency for International Development (USAID) wanted a better way to evaluate its global programs — particularly to demonstrate their contribution to anti-communism and to opening new markets for American business.

In 1969 — the year of the moon landing, Woodstock, and Sesame Street — USAID commissioned a young consultant, Leon J. Rosenberg, to review its project evaluation system. He brought together ideas from military planning, NASA engineering, and systems thinking. The original logframe approach drew on three concepts:
- Program Management – holding managers accountable for results.
- Scientific Method – treating projects as hypotheses to be tested.
- Systems Analysis – recognising that each project exists within larger, interdependent systems.
Rosenberg’s work was formalised in a training manual in the early 1970s. It identified core problems in USAID’s work: vague objectives, unclear management responsibility, and adversarial evaluations. The logframe aimed to solve these by forcing clarity: What does success look like? Who is responsible for what? How can we evaluate based on evidence, not opinion? It was a logical response to a funder’s needs — and, through a certain lens, a worthy one. But it was also deeply hierarchical, top-down, and technocratic.
Global uptake and enduring influence
By the mid-1970s, Canada and Scandinavia had adopted the approach. By the 1980s, it was everywhere. What began as a management fix for USAID became a global standard. In one training manual, a case study details a malaria reduction program funded in part to enable colonial expansion. It used the insecticide DDT, even though DDT had already been banned in the U.S. in the wake of Rachel Carson’s Silent Spring, the 1962 book on the environmental harms of indiscriminate pesticide use. These moments highlight how closely the logframe’s logic was tied to funder priorities — often with little regard for community voice or long-term impacts.
It’s striking how enduring the logframe has been. Perhaps adoption was easier because many organisations were smaller and more centralised then. Or perhaps, once embedded in donor systems, it was too difficult to dislodge. Either way, using it often feels like trying to nail spaghetti to a wall — forcing complex, relational, adaptive work into a linear accountability framework.
Revisiting the source documents
What struck me most when reading through the original documents was how familiar the challenges still are:
- How do we define success in fluid, uncertain contexts?
- What makes a good metric?
- How do we centre shared, eco-centric purpose over institutional ego?
The logframe cast funding as a contract, not a collaboration. Evaluation was a tool of compliance, not learning. It wasn’t designed to build trust, but to manage distance. So can a tool born of control ever be used to share power? Despite modern updates and efforts to make it more participatory, the logframe still carries the weight of its origins. Perhaps now, as USAID’s influence wanes and funding paradigms shift, there is space to imagine something different.
A new moment, a new mandate
This moment of immense turbulence is a window for local groups to rearticulate what matters to them – and then reimagine what is measured, how it is measured, and who gets to decide.
Around the world — in Brazil, Indonesia, Ukraine, and beyond — community-rooted organisations have been experimenting with alternatives for decades. They are building measurement systems grounded in lived experience, focused on what matters locally, and designed to support reflection and action — not just accountability. If we acknowledge that the logframe was designed to serve funders, not communities, then that truth becomes our starting point. Not to reject tools, but to create new ones — fit for purpose, responsive to context, and built from the ground up.
*Ben Rodgers is the Executive Officer at Inner North Community Foundation and Chair of Community Foundations Australia.*