This post describes a disruption that is under way in humanitarian information management and processing. It describes the role that both crowds and clouds will play in this disruption, which will lead to a better ability to handle crises.
“There can be no deep innovation without an initial period of deep disruption”
Introduction
The January 2010 earthquake in Haiti was a turning point for humanitarian information management and processing. As the first major natural disaster since the explosion of social media, it allowed people from around the world, for the first time, to share information in real time with each other and with organizations involved in the response. Urban Search and Rescue teams searching through the rubble were contacted via social media with information about the missing, and in return were able to provide relatives and friends with accurate and detailed information about the rescue efforts. At the same time, citizens were reporting the locations of collapsed houses, camps of displaced people and medical facilities. These locations were plotted onto a situational awareness map that gave responders a better overview of the situation facing them. All of this happened in an ad-hoc manner through social media. Volunteer groups were set up around the world to help develop, test and translate applications, while other groups mapped the streets of Port-au-Prince and translated messages coming in from citizens in Creole into English before passing them on to aid organizations.
When dealing with disaster response you are normally faced with two opposing problems: a lack of information and a flood of information. In the initial hours following a disaster, information coming out of the affected areas is very scarce and often does not get propagated to the humanitarian response community, but instead ends up inside one organization or another. At the same time, the media, and now especially social media, provide an overwhelming amount of information that is disconnected and unorganized. This flood often forces response organizations to dismiss it as false or unverified. Similarly, multiple organizations will start doing assessments in the affected areas, but lack the bandwidth to process the results and are reluctant to share them with others. It is important that this paradox is addressed.
When responding to disasters, very few organizations have the luxury of deploying multiple information managers; most of their effort goes into providing actual on-the-ground assistance. It is, however, well understood and agreed that effective disaster response must be well planned and built on actionable information. Yet far too often we see organizations base their implementation plans on "gut feel" or word of mouth about where the situation is worst. Humanitarian organizations have attempted to come up with rapid assessments for identifying where to put their efforts, but most of those "rapid" assessments are over ten pages long and take forever to process. A small effort is under way to run joint assessments by multiple organizations, to counter the assessment fatigue of affected populations, who may be asked the same questions ten times before any help arrives. It is therefore important that we rethink the way we assess the situation in the field and how we process the information we receive.
While a few years back connectivity would be lost for weeks following a natural disaster, we now see mobile phone companies get basic services such as text messaging back up and running within 24-72 hours of the initial event. At the same time, mobile phone ownership has exploded: well over half the population of Earth now owns one, and even some of the more remote locations have mobile connectivity. These people are connecting to social media via their phones in ever-growing numbers. We must find ways of leveraging these people, their local know-how and their information.
Cloud-based services such as Facebook and Twitter have already made it possible for us to communicate with millions of people and to leverage our individual social networks to reach a wider audience than ever before. But right now humanitarian organizations mainly use this channel for advocacy, publishing information about their activities in the hope of generating funds to sustain them. Very little effort has been made to use these channels for information sharing or analysis.
The Crowds
During the Haiti crisis we saw a new form of humanitarian response: the crowd response. Through a few strategic social networks, volunteer crowds were established to address some of the most challenging information-related issues faced by the citizens and response organizations in Haiti. One of the most successful was the collaboration between Ushahidi, InSTEDD and a few others around a solution called Project 4636, which allowed citizens in Haiti to use SMS to send in information and requests for assistance. Instead of relying on specially formatted text messages from citizens, the team quickly decided to use the power of the crowd to transform free-text messages into structured, geo-spatially located messages. By getting volunteer groups (all formed through social networks) to give some of their time to perform validation, geo-spatial addressing and translation, they could provide situational information to humanitarian agencies on the ground. Literally thousands of volunteers from around the globe performed this task.
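To make that transformation concrete, here is a minimal sketch of what turning a free-text SMS into a structured, geo-located record might look like. The field names, the tiny gazetteer and the sample message are illustrative assumptions, not the actual Project 4636 data model.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative only: these field names are an assumption,
# not the actual Project 4636 data model.
@dataclass
class StructuredReport:
    raw_text: str               # original SMS as received
    translation: str            # English translation added by a volunteer
    category: str               # e.g. "medical", "water", "trapped"
    latitude: Optional[float]
    longitude: Optional[float]

# A tiny, hypothetical gazetteer a volunteer might use to geo-locate a place name.
GAZETTEER = {
    "carrefour": (18.5344, -72.4094),
    "petionville": (18.5125, -72.2853),
}

def geolocate(place_name: str) -> tuple[Optional[float], Optional[float]]:
    """Map a place name mentioned in a message to coordinates, if known."""
    return GAZETTEER.get(place_name.strip().lower(), (None, None))

# A volunteer turns a translated message into a structured report.
lat, lon = geolocate("Carrefour")
report = StructuredReport(
    raw_text="Moun yo bezwen dlo nan Carrefour",
    translation="People need water in Carrefour",
    category="water",
    latitude=lat,
    longitude=lon,
)
```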
We need to harness this power of the crowds, and people's willingness to help out in times of need, to address some of the more complicated information management issues faced by the humanitarian community. People interested in participating in these kinds of efforts on a regular basis could be trained to perform certain tasks and then called upon in times of crisis. Maybe it is time for the Internet equivalent of the Peace Corps.
The crowds can be used for more than just simple situational awareness as in the case of Haiti. The emerging field of collaborative business intelligence (BI) and analysis can easily be applied to the humanitarian space. As mentioned earlier, large amounts of data are being collected both by humanitarian response organizations and through social media. Most of that data, however, is never analyzed beyond the simple analytics that can be done with a few minutes' or hours' work in Excel. Within the field of collaborative BI, the people involved are split into three types: the producers, the collaborators and the consumers. By applying the concept of the crowd and utilizing the power of the internet, we don't need these to be located in the same place. The producers, most of them in the field, would make the raw data available and do some basic processing on it, such as enhancing it, highlighting important information and combining different data sources. The collaborators, most of them located outside the field, would remix, mash up and re-package the data as new information solutions; they could be connected to experts, for example from the academic community, who would be able to guide them. Finally, the consumers of the information would be donors, people in humanitarian HQs and of course the field workers themselves. They contextualize the results to make decisions and develop strategies for how to deal with the crisis.
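As a rough illustration of the three roles, here is a minimal sketch. The assessment records, the water-coverage threshold and the function names are all hypothetical.

```python
# Producer (in the field): publishes raw assessment records with basic enhancement.
raw_assessments = [
    {"camp": "Camp A", "families": 120, "water_points": 1},
    {"camp": "Camp B", "families": 450, "water_points": 2},
]

def produce(records):
    """Enhance raw records and highlight camps where water coverage looks low."""
    for r in records:
        r["families_per_water_point"] = r["families"] / max(r["water_points"], 1)
        r["flagged"] = r["families_per_water_point"] > 250  # assumed threshold
    return records

# Collaborator (remote): remixes the produced data into a new information product.
def collaborate(records):
    """Re-package flagged camps as a prioritized list."""
    flagged = [r for r in records if r["flagged"]]
    return sorted(flagged, key=lambda r: r["families_per_water_point"], reverse=True)

# Consumer (donor, HQ, field worker): contextualizes the result to decide.
for camp in collaborate(produce(raw_assessments)):
    print(f'{camp["camp"]}: ~{camp["families_per_water_point"]:.0f} families per water point')
```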
It is important to understand that, especially during the initial phase of a disaster, the need for speed is greater than the need for accuracy. If you wait for all the data to come in before making any decision, people will have died before you even start delivering aid. For example, knowing whether we need 1,000, 10,000 or 100,000 tents matters more than whether the actual number of beneficiaries is 857, 9,300 or 96,544 respectively. This allows us to apply what has been called edge-based analysis, in which multiple and possibly conflicting versions of the truth can exist. The task of the analysis is to come up with emergent prototypes of the situation and test them quickly.
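One way to operationalize this speed-over-accuracy principle is to plan at the order-of-magnitude level. The sketch below, with an assumed five people per tent, rounds the worst-case estimate up to the nearest power of ten so a decision can be made before the numbers settle.

```python
import math

def tents_needed(beneficiary_estimates, people_per_tent=5):
    """Take the largest current estimate and round the tent count up to a
    power of ten, so a decision can be made while reports still conflict."""
    worst_case = max(beneficiary_estimates)
    tents = worst_case / people_per_tent
    return 10 ** math.ceil(math.log10(tents))

# Three conflicting field reports; the order of magnitude is what matters.
print(tents_needed([857, 9300, 96544]))  # -> 100000 tents
```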
In 2008 Ted Okada of Microsoft Humanitarian Systems coined the term Big World-Small World to describe how solutions are built either for the western world (including the headquarters of humanitarian organizations) or for the field (including citizens of the affected country as well as field workers). It is important to understand that solutions built for one world often do not apply in the other. Social media and the growth of mobile phone ownership may provide an excellent opportunity to bridge these two worlds. Through simple means like text messaging we can get information from the small world, process it in the big world, and then provide feedback to those in the small world. This feedback loop between the two worlds is important to ensure that both sides become willing participants in this endeavor.
The Cloud
As mentioned earlier, the cloud has enabled some of the advances already made in crowdsourcing of tasks. But the cloud must play an ever-increasing role if we want to make this vision a reality. One key aspect is that we must be able to scale work efficiently up and down as demand changes: we must be able to handle the peaks while knowing that most of the time there will be almost no activity at all.
The use of the cloud must be threefold. First, the cloud must be used to coordinate the crowdsourcing, through microtasking solutions in the style of Amazon's Mechanical Turk. Second, it must be used to automate as much of the processing as possible. Finally, it should be used to share the information back with the consumers, whether they are in the small world or the big world. Let us look at each of these in turn.
In the case of Haiti, most of the effort was done by ad-hoc groups gathering in universities and other locations. A few key people led the effort and helped split the work into multiple steps that could then be performed by smaller workforces. At one point a system similar to Amazon's Mechanical Turk was set up to coordinate the work of processing all the incoming text messages.
This coordination of work needs to be more automated. It needs to be easy for people to sign up from anywhere to do individual tasks in the process. There needs to be a way to create new ad-hoc processes on the fly and provide a description of each step, so people can easily learn what needs to be done and then perform that step for the time they have available. This needs to be flexible and scalable in order to handle the wide variety of tasks to be performed and the variations in the availability of the crowd. A minimal sketch of what such a process definition might look like follows.
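This is a hypothetical sketch, not an existing system: the class names, step names and sign-up interface are all assumptions about how an ad-hoc process might be declared on the fly.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    name: str
    description: str                              # what a new volunteer needs to know
    volunteers: list = field(default_factory=list)

@dataclass
class Process:
    name: str
    steps: list

    def sign_up(self, volunteer: str, step_name: str):
        """Let a volunteer join any step for the time they have available."""
        for step in self.steps:
            if step.name == step_name:
                step.volunteers.append(volunteer)
                return
        raise ValueError(f"No step called {step_name!r}")

# Define a new process on the fly for an incoming SMS stream.
sms_process = Process("haiti-sms", steps=[
    Step("translate", "Translate the message from Creole to English."),
    Step("geolocate", "Find coordinates for any place names mentioned."),
    Step("categorize", "Tag the message: medical, water, food, trapped..."),
])
sms_process.sign_up("volunteer@example.org", "translate")
```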
Secondly, there is automation of tasks. As information flows in through channels such as social media and text messages, the volume of raw data can be overwhelming. This information may be in multiple languages (for example Creole), yet the overwhelming majority of the crowd may be English speaking. By using technologies like the Microsoft translation framework, the amount of time needed to perform translations can be drastically reduced. Other automatic processing, such as geo-tagging, filtering, removing duplicates and weighing authenticity (as attempted by the Ushahidi SwiftRiver project), can be extremely important in making this possible. These automation tasks need to be able to scale up and down as the flow of information rises and falls.
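A minimal sketch of this kind of automated pre-processing is shown below. The translate() function is a stub standing in for a real service such as the Microsoft translation framework (its interface here is an assumption, not that framework's actual API), and duplicate removal is done with a simple content fingerprint.

```python
import hashlib

def translate(text: str, source: str = "ht", target: str = "en") -> str:
    """Placeholder for a machine-translation call; a real implementation
    would invoke an external translation service."""
    return text

seen_fingerprints = set()

def preprocess(messages):
    """Drop exact duplicates by content fingerprint, then translate."""
    for msg in messages:
        fingerprint = hashlib.sha1(msg.strip().lower().encode("utf-8")).hexdigest()
        if fingerprint in seen_fingerprints:
            continue  # duplicate: skip so volunteers never see it twice
        seen_fingerprints.add(fingerprint)
        yield translate(msg)

incoming = ["Nou bezwen dlo", "nou bezwen dlo ", "Moun blese nan Leogane"]
print(list(preprocess(incoming)))  # the duplicate second message is removed
```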
Last but not least, it must be easy for people to consume the information being generated. This includes the ability to visualize it both geo-spatially and through other, more common business intelligence visualizations. It must also be easy for people to retrieve the information in the form of RSS feeds and spreadsheets. When providing access back to the small world (i.e. the field), we must realize that those users are almost always only occasionally connected to the big world (i.e. the cloud). We must therefore provide ways for them both to collect information and to retrieve it via means that support this occasionally connected state. This can be achieved through synchronization technologies such as FeedSync or through peer-to-peer sharing products like Microsoft Groove. We must also remember that during disasters connectivity is intermittent and costly. In most cases we are therefore not talking about direct cloud access to all the visualization products; instead we must rely on technologies like those mentioned above to retrieve the data and perform some of the bandwidth-intensive visualizations directly on the client.
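As a sketch of the consumption side, the snippet below exports a handful of processed reports as both a spreadsheet-friendly CSV file and a bare-bones RSS feed. The report records and file names are illustrative assumptions.

```python
import csv
import io
from xml.sax.saxutils import escape

# Hypothetical processed reports ready for consumers.
reports = [
    {"title": "Water needed in Carrefour", "lat": 18.53, "lon": -72.41},
    {"title": "Medical post at Petionville", "lat": 18.51, "lon": -72.29},
]

def to_csv(rows):
    """Render reports as CSV for spreadsheet users."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["title", "lat", "lon"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

def to_rss(rows):
    """Render reports as a minimal RSS 2.0 feed for feed readers."""
    items = "".join(
        f"<item><title>{escape(r['title'])}</title></item>" for r in rows
    )
    return ('<?xml version="1.0"?><rss version="2.0"><channel>'
            f"<title>Situation reports</title>{items}</channel></rss>")

with open("reports.csv", "w") as f:
    f.write(to_csv(reports))
with open("reports.rss", "w") as f:
    f.write(to_rss(reports))
```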
The Way Forward
This crowd- and cloud-based information management is not something that will be done by any single company, but rather a collaborative crowd effort. Companies that provide cloud-based services should participate by providing access to their clouds and by sharing their expertise in building cloud-based services with the crowds of developers that will have to participate in this effort. Through their corporate social responsibility efforts they will get a chance to share some of their large investments in the cloud with those in dire need of assistance.
Existing collaborative efforts in this field, such as Random Hacks of Kindness (RHoK), driven by Microsoft, Google, Yahoo, the World Bank and the UN, should serve as a model for a collaborative effort by the private sector, the humanitarian sector and the crowds out there willing to participate, in an effort to make humanitarian response more effective and in turn save lives and reduce suffering.
Published June 24th 2010 at DisasterExpert