Playing to the Crowd

By Tyler Pullen, M.S. Candidate, Civil & Environmental Engineering, Stanford

Perhaps the most prominent overarching challenge for holistic action against global warming is the reluctance of civilians (especially American ones) to trust high-level science and conclusions drawn from abstract, difficult-to-conceptualize data. Distant, faceless scientists from around the world agree on planet-scale changes in climate patterns that are practically imperceptible on a daily — and human — scale. And even for politicians who believe in and begin to understand the macro causes and effects of climate change, developing a clear sense of how to act and what goals to set based on these predictions is a leviathan task. Particularly for decision makers at the city scale, translating such complex, comprehensive climate models into intuitive and interactive tools is a significant barrier to policy that proactively mitigates global environmental damage.

Challenge 1: an intangible problem

A major problem in communicating climate-related goals and effects is the inherently abstract concept of global emissions. Not being able to see, touch, or understand what a pound of carbon dioxide is, looks like, or does to our atmosphere makes it a weak motivating force. The first goal for communicating climate change issues and policies is therefore translating the data and implications of climate change into more relatable metrics. Depending on the particular factor(s), this could be the delay in minutes during rush hour as a result of low public transit usage, the decreased visibility distance from smog, or a map of the regions that would flood in a large storm if sea levels rise. Possibly most relevant to citizens and politicians alike, however, is the simple metric of dollars. In one way or another, most consequences of climate change can be described — at least roughly — in financial terms. A large-scale effort to do exactly this has already been undertaken by the US EPA in attempting to define the “social cost of carbon”. This could capture the property damage from severe storms, the rising collective cost of electricity if energy efficiency measures are not taken, or the productivity lost to long commutes in heavy vehicle traffic. The effort is understandably limited by ambiguity over the scope of the problem and by inaccurate or unavailable data, but it is still a laudable attempt: regardless of the specific intervention being discussed, putting the theoretical cost of action — and inaction — into relatable terms is critical if decision makers are to use this data to design effective policies and measure their progress.
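To make the dollar framing concrete, a minimal sketch in Python might look like the following. The cost-per-ton figure and the commuter example are purely illustrative placeholders, not official EPA values.

# Minimal sketch: translating abstract emissions into dollar terms.
# The social cost of carbon below is an illustrative placeholder; actual
# EPA estimates vary with discount rate and other assumptions.

SOCIAL_COST_PER_TON_CO2 = 51.0   # USD per metric ton, hypothetical value

def emissions_to_dollars(tons_co2: float, cost_per_ton: float = SOCIAL_COST_PER_TON_CO2) -> float:
    """Convert metric tons of CO2 into an approximate social cost in dollars."""
    return tons_co2 * cost_per_ton

# Example: a commuter driving ~12,000 miles a year at ~0.4 kg CO2 per mile
annual_tons = 12_000 * 0.4 / 1000   # roughly 4.8 metric tons of CO2
print(f"Approximate social cost: ${emissions_to_dollars(annual_tons):,.0f} per driver per year")

Even a rough conversion like this turns an invisible quantity of gas into a number a resident or council member can weigh against other budget items.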

Challenge 2: visualization

With the relevant data in relevant terms, the next critical challenge for engineers and scientists is to visualize the information succinctly, in ways that are digestible to those without backgrounds in advanced data analytics. Graphs, tables, and other diagrams must be accurate but concise. And more abstractly: in this process of simplification, it is paramount not to introduce new biases or exaggerate existing ones. For example, a dataset may suggest that a municipality with very low car ownership rates is pro-environmental (an image said municipality would presumably embrace), but perhaps residents in the region take a substantial number of flights that more than offset the lack of vehicle usage. These are by no means easy considerations to balance across the eclectic causes and effects of global warming, but the process of organizing and presenting information can be as critical as collecting it in the first place, especially if the results are meant to persuade and motivate action.
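As a toy illustration of that car-versus-flights caveat, a quick calculation with assumed (purely illustrative) emission factors shows how leaving one travel mode out of a visualization can flip the comparison entirely:

# Sketch of the car-vs-flights caveat: a per-capita transport footprint that
# only counts driving can flip once flying is included.
# All figures below are made up for illustration.

KG_CO2_PER_VEHICLE_MILE = 0.4    # rough passenger-car average
KG_CO2_PER_FLIGHT_MILE = 0.25    # rough per-passenger air-travel average

def transport_footprint(vehicle_miles: float, flight_miles: float) -> float:
    """Annual per-capita transport emissions in metric tons of CO2."""
    kg = vehicle_miles * KG_CO2_PER_VEHICLE_MILE + flight_miles * KG_CO2_PER_FLIGHT_MILE
    return kg / 1000

# Hypothetical residents: one drives a lot and never flies,
# the other barely drives but flies frequently.
suburban_driver = transport_footprint(vehicle_miles=12_000, flight_miles=0)
urban_flyer = transport_footprint(vehicle_miles=1_000, flight_miles=20_000)

print(f"Driver: {suburban_driver:.1f} t CO2/yr")   # ~4.8
print(f"Flyer:  {urban_flyer:.1f} t CO2/yr")       # ~5.4

A chart built only from the first column of that data would celebrate the wrong resident.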

Challenge 3: adjustability

One of the less-explored aspects of climate change and, more broadly, data analytics is the importance of making the models and the data presented adjustable. Even datasets used for the most specific subjects have a massive number of variables and assumptions built in before being used to model the potential of an intervention (which is obviously critical for predicting the effect of certain policies into the future). These assumptions are necessary in order to bound the problem, make the model efficient, and make the resulting predictions tenable; but they’re also inherently subjective. What equation did you use to model population growth over the next 20 years? How will this incoming population disperse geographically across multiple municipal boundaries? There are no correct answers to these questions. And though policy makers would ideally be involved from the first stage of model creation to help make these assumptions in a sensible and ethical manner, this is not always possible. It is therefore vital to allow the major original assumptions to be adjusted, so that the model can be tested for accuracy and so that potential interventions can be modeled as well. Given the often-enormous backend of data supporting these models, however, it is very challenging to expose these system levers to less technical users who often don’t know how to use the software behind the analysis. Building the models into an interface with clear points of interaction to vary initial assumptions or change values over time could therefore be immensely helpful in getting decision makers to understand and learn from large models by integrating them into their existing decision-making process. This could lead them to be creative with potential interventions and to better assess their effects.
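As a rough sketch of what exposing those levers could look like in code, the subjective choices become named parameters rather than constants buried in the model. Every name and default value here is hypothetical, not drawn from any particular city model.

# Minimal sketch of adjustable assumptions: growth rate, dispersal share, and
# per-capita emissions are explicit parameters a user can vary one at a time.
# All default values are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Assumptions:
    annual_growth_rate: float = 0.012      # population growth per year
    share_to_city_core: float = 0.6        # fraction of newcomers settling in the core
    tons_co2_per_resident: float = 5.0     # per-capita emissions

def project_core_emissions(current_population: int, years: int, a: Assumptions) -> float:
    """Project annual emissions (t CO2) attributable to the city core after `years`."""
    future_population = current_population * (1 + a.annual_growth_rate) ** years
    newcomers = future_population - current_population
    core_population = current_population + newcomers * a.share_to_city_core
    return core_population * a.tons_co2_per_resident

baseline = project_core_emissions(1_000_000, 20, Assumptions())
# A decision maker can pull one lever and immediately see what changes:
low_growth = project_core_emissions(1_000_000, 20, Assumptions(annual_growth_rate=0.005))
print(f"Baseline:   {baseline:,.0f} t CO2")
print(f"Low growth: {low_growth:,.0f} t CO2")

An interface built on top of something like this only needs sliders or input boxes for the Assumptions fields; the heavy data backend stays hidden from the user.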

Challenge 4: continuous updating

The last factor I consider central to data analysis is building within a framework that can be updated with the most recent information. Governments, especially municipal ones, don’t historically have the bandwidth or resources to sustain massive data streams over time. And even the best models, once created, are mere snapshots of a once-current state of the system being studied (especially when they use data that were already a few years old at the time of simulation). So, however revelatory they may be, they are instantly outdated and grow increasingly irrelevant as the city begins to design interventions based on the model’s original insights. That is, unless the model is formatted to update automatically. Even the comprehensive and beautifully simplified set of visualizations provided by Shaun Fernando from PwC (who presented at the SUS Seminar this quarter), for instance, would be more durably useful to San Jose if it were updated live or at least semi-regularly. If cities are to truly and permanently augment their decision making with holistically managed data streams, then keeping this information current is mandatory. It is even more necessary if cities are to track progress toward long-term goals and consequently measure the effectiveness of interventions over time.
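One very small sketch of that idea, assuming the city can export even a simple CSV from its data stream: re-run the indicator calculations whenever the file changes, rather than treating the model as a one-off snapshot. The file name, column name, and polling interval below are all placeholders.

# Sketch: keep headline indicators current by recomputing them whenever the
# underlying data export changes. Paths and intervals are hypothetical.

import csv
import os
import time

DATA_PATH = "city_emissions.csv"   # hypothetical export from the city's data stream
CHECK_INTERVAL_SECONDS = 3600      # poll hourly

def recompute_indicators(path: str) -> dict:
    """Recompute a few headline metrics from the latest data snapshot."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    total = sum(float(r["tons_co2"]) for r in rows)
    return {"rows": len(rows), "total_tons_co2": total}

def watch_and_refresh():
    last_modified = None
    while True:
        modified = os.path.getmtime(DATA_PATH)
        if modified != last_modified:
            last_modified = modified
            print("Data changed, refreshing:", recompute_indicators(DATA_PATH))
        time.sleep(CHECK_INTERVAL_SECONDS)

Even a crude refresh loop like this keeps the dashboard tied to the present rather than to the year the model was built.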

Shaun Fernando presenting PwC's work on Climate Smart San Jose at the SUS Seminar on February 22, 2018.


The overarching theme for data utilization in city management is this: we need to do better. Cities are almost impossibly complex systems that no model can perfectly emulate, but targeted models can still be useful. For that to happen, more collaborative conversations and interactions are needed so that data scientists and engineers can align their work with cities' need for more informed decision making.