Cross-posted from blog.ushahidi.com
As Ushahidi's ethnographer, my job is to do on-the-ground research on users' experience with our technology in particular contexts. Something we've been thinking about a great deal as we develop SwiftRiver is the process of verification: the ways in which technology and society work together to create useful, trustworthy and actionable information, and the contexts in which the technology might be failing.
With over 20,000 installations of Ushahidi and Crowdmap since January 2009, Ushahidi has been used in a number of different contexts – from earthquake support in Haiti, to reports of sexism in Egypt, to election monitoring in Sudan. In each of these cases, a map is publicized and individuals are encouraged to send reports to it. The process of verifying information reported by the crowd has taken a variety of forms depending on the needs and affordances of the environment and the community supporting it.
The memo I just published on Scribd introduces the concept of verification, traces how it has evolved at Ushahidi and in sample deployments, and offers alternative ways of thinking about verification along with some suggestions for further research. Its goal is to inform developers and designers as they build the next generation of Ushahidi and SwiftRiver software to meet the needs of our users, rather than to prescribe what should be done.
Ushahidi's support for verification has until now been limited to a fairly simple backend categorisation system by which administrators tag reports as "verified" or "unverified". But this binary approach is proving unmanageable for large quantities of data, and it may not be the most effective way of portraying the nuanced levels of verification that can practically be achieved with crowdsourced data.
What research needs to be done to test verification alternatives, so that Ushahidi and Crowdmap deployers have due-diligence tools that can advance trust in their deployments? Can we do this without adding new barriers to entry for those who need their voices heard on Ushahidi? How can we ensure that any solution stays as close as possible to the needs, incentive systems and motivations of deployers and users? What is the next step for Ushahidi verification?