Digital transformation and artificial intelligence are sold as engines of inclusive growth. But how data is collected, stored and used can mean the difference between success and failure. Tim Dobermann and Shahrukh Wani argue that governments in developing countries often don’t lack data; they simply struggle to use it.

Our discourse on data and technology for development inhabits two realities. On the conference circuit, digital transformation and artificial intelligence (AI) are sold as engines of inclusive growth, an applied “psychohistory” worthy of Hari Seldon in Asimov’s Foundation. On the ground, the heroics fade: siloed spreadsheets, broken dashboards and dusty servers tell of ambitions that never outlasted a pilot. Or a consultant. Can we navigate this jungle to drink from the fountain of eternal data?
Getting the basics right
Step one is not getting caught in the thicket. Much of the answer lies in starting where capacity already exists, rather than working against the grain. A demand-first approach drives success. Governments in developing countries often don’t lack data; they simply struggle to use it. Countries like Zambia, Rwanda and India succeeded not because they deployed cutting-edge algorithms, but because they solved concrete policy problems for real decision-makers. They did so by embedding data analysts directly into policymaking teams, working closely with officials to inform budget debates and improve service delivery. When reliable numbers can swing, or defuse, a budget debate, data ceases to be an IT commodity and becomes political capital.
Governance, not mathematical elegance, will determine whether algorithms serve citizens or merely dazzle ministers. And governance begins with the basics: information in many lower- and middle-income countries’ ministries still lives in myriad disparate formats, from comma-separated values on a laptop to scanned PDFs on a shelf. I was once asked by a government official, months into the arduous process of acquiring data, whether I preferred to receive it in ASCII format or as an Oracle dump file. Stunned at the prospect that the manna was finally within reach, I replied that any format would do. “Good to know,” they said. “I have neither.”
Making granularity work
Machine-learning models can flag tax leakages, detect illegal logging from satellite images or forecast which clinics will run out of vaccines. They cannot decree faster GDP growth. There is no recipe for growth, merely known ingredients. What the machines can do—better than most humans—is reveal patterns invisible to the naked eye. By doing so, they can help us combine our growth ingredients in better ways. Satellite night-lights expose informal urbanisation; anonymised call-records map commuting patterns; school-level test scores uncover gaps hidden by district averages. Disaggregation turns the tyranny of the mean into a guide for precision interventions.
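To make the disaggregation point concrete, here is a minimal sketch with entirely made-up numbers: two districts whose averages look almost identical, one of which hides failing schools that only school-level data reveals.

```python
import pandas as pd

# Illustrative, synthetic figures: a reassuring district average
# can conceal a wide gap between individual schools.
scores = pd.DataFrame({
    "district":  ["North"] * 4 + ["South"] * 4,
    "school":    ["N1", "N2", "N3", "N4", "S1", "S2", "S3", "S4"],
    "pass_rate": [0.92, 0.88, 0.31, 0.29, 0.61, 0.58, 0.63, 0.60],
})

# District-level means: North (0.60) and South (0.605) look alike.
print(scores.groupby("district")["pass_rate"].mean())

# School-level view: North's average conceals two struggling schools,
# exactly the gap a precision intervention should target.
print(scores.sort_values("pass_rate").head(3))
```

The same pattern of aggregating and then drilling down applies whether the unit is a school, a clinic or a tax office.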
Data, even when it is available, does not always capture what we most need. Aggregates mask disparities across region, gender, wealth or firm size, sharply reducing the data’s value for advanced techniques. At the same time, granularity sharpens risk: deploy sophisticated models on fragile datasets and bias is entrenched faster than ever. Interoperability, metadata and the dreary work of documenting errors and representativeness may sound pedestrian, yet without them every new initiative risks becoming another tombstone in the graveyard of innovation.
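What that pedestrian work looks like is easy to sketch. The record below is purely illustrative, with hypothetical field names rather than any standard schema; the point is that coverage gaps and known errors should travel with the data rather than live in one analyst’s head.

```python
from dataclasses import dataclass, field

# A hypothetical, minimal metadata record (field names are illustrative,
# not a standard). It lets the next analyst judge whether the dataset
# can bear the weight of a sophisticated model.
@dataclass
class DatasetRecord:
    name: str
    source_agency: str
    coverage: str                      # who or what the data represents
    known_gaps: list = field(default_factory=list)
    last_updated: str = ""

clinics = DatasetRecord(
    name="clinic_stock_levels",
    source_agency="Ministry of Health",
    coverage="Public clinics only; private facilities excluded",
    known_gaps=["Three districts report quarterly, not monthly"],
    last_updated="2024-09",
)
print(clinics)
```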
Human capital, human judgement
Teaching officials to navigate Python libraries is less important than teaching them to ask good questions that data can answer. Not everything can, or should, be measured, and ill-planned initiatives can result in idle reams of information. Embedded analysts and multidisciplinary teams—economists beside engineers beside lawyers—help keep numbers relevant to policy. Outsiders may advise, but insiders must own.
Some governments are edging in the right direction. India’s push for public digital infrastructure has forced long-overdue debates on privacy and interoperability. Kenya’s treasury now expects line ministries to publish budget and expenditure datasets as part of routine reporting. Rwanda’s nascent AI policy insists on independent oversight. Where data reform aligns with national strategy, it tends to survive. Rather than a rush into technological solutions, data reform needs to be framed within wider strategies for growth and development.
Data as compounding capital
True reform demands patience. Time horizons must stretch to five or even ten years, not the 18-month glow of the typical aid-funded pilot. Political sponsorship is needed at cabinet level; otherwise, a reshuffle will sink the project. And, unfashionably, someone must budget for maintenance. Roads and bridges stay upright because engineers are paid to keep them so; data infrastructure deserves the same respect.
Done well, data reform turns information into public capital that compounds: decisions become faster, fairer and more adaptive; scarce revenues flow where they are needed most; citizens see results and learn to trust official statistics. Done badly, the same reform cements inequality and invites backlash.
Ultimately, sustainable reform grows where local ownership, continued investment in capacity and demand-driven data cultures converge to make information an active lever for development, not a museum exhibit.
Unlike Asimov’s Seldon, economists and engineers cannot predict destiny. They can, however, chart a safer path through the jungle. That is no mean contribution, and it’s worth more than any number of abandoned dashboards.
- Tim Dobermann will be speaking at the LSE Festival event Data for Development, Tuesday 17 June, 6.30pm to 7.30pm.
- This blog post represents the views of its author(s), not the position of LSE Business Review or the London School of Economics and Political Science.