Intentional Internet of Things
This is a world in which “Internet of Things” (IoT) technologies—everyday products, devices, and structures connected to the network—are integrated intentionally, boldly, and relatively smoothly into the developed world.
While the widespread adoption of IoT technologies may be predictable in 2016, the mechanism that will propel this shift is less so. In this scenario, government will intentionally drive IoT adoption to help societies combat recalcitrant large-scale problems in areas like education, the environment, public health, and personal well-being. This will be widely seen as beneficial, particularly as the technologies move quickly from being household novelties to tools for combating climate change and bolstering health. “Smart cities” will transition from hype to reality as urban areas adapt to the IoT with surprising speed. In this world, cybersecurity will fade as a separate area of interest; when digitally connected technologies are part of everyday life, their security is seen as inseparable from personal and national security. But while this world will offer fantastic benefits for public life and reinvigorate the role of governments, there will also be greater vulnerability as IoT technologies become more foundational to government functions and the collective good.
In this scenario’s version of 2020, the Internet of Things (IoT) has moved beyond Silicon Valley slide decks and fitness and sleep-tracking wearables to become a purposefully chosen and essential part of daily life—at least in the developed world. IoT consumer devices in 2016 are still largely seen as luxury items with limited applicability—more fun than substance. In 2020, the opposite will be true. Governments will identify huge benefits in smartly designed devices. Through acts of “positive paternalism” (intentional government action designed to improve public life), governments will deploy and implement the IoT in myriad aspects of human life.
In this world, the IoT will not just mean refrigerators that automatically replace your milk when it runs out, or credit cards that vibrate every time an expenditure is charged. It will mean smartbands that diagnose health problems as they occur and dispatch medical care without human intervention. It will mean smart-metering for oil, gas, and electricity; traffic lights that automatically change based on congestion patterns; and wearable sensors—the successor to Google Glass—that help classroom teachers track whether students are paying attention. In this world, governments will be back in business as major providers of public infrastructure creating new, highly technical products that serve the public interest. The private sector will follow in kind—and a whole host of new cybersecurity vulnerabilities will develop as a result.
The driving forces behind the emergence of this “intentional IoT” are clearly visible in 2016. Embedded systems and sensors are becoming widespread. Disney World’s MagicBand bracelets allow park visitors to pay for items, reserve rides, order food, and get personalized experiences; they also allow the park to track visitor flows and optimize the distribution of employees, food, and other services.1 Smart-lighting networks in streets, parking lots, and malls use LEDs, sensors, and data to turn lights on and off automatically, monitor pollutants, listen for gunshots, or track traffic and even shoppers. The “Quantified Self” movement, once dismissed as a geek hobby, is demonstrating that individuals can use sensors to self-track meaningful, actionable health data about themselves.2 On the government side, the US Department of Transportation is developing models for an internet-connected road and vehicle ecosystem—a wireless communications network that connects cars, buses, trucks, trains, traffic signals, smartphones, and other devices in order to improve safety and traffic flows and create more environment-friendly transportation options.
All the ingredients are in place. But what is not yet clear in 2016 is where the breakthroughs that will define the IoT for the next decade will emerge. Will it be large private-sector actors, pressing forward with a General Electric-type vision of an industrial internet?3 Will it be an IoT driven forward by law enforcement and the intelligence community’s desire for granular surveillance?
In this scenario, it is neither economic productivity nor national security interest, but rather a “public good” IoT that pulls ahead and dominates the landscape. This is an IoT in which governments (with private partners) drive the adoption of new technologies designed to improve the lives of communities, including by upgrading their critical infrastructure systems. It is this vision of the IoT that garners the most resources and the most attention—and sets many of the key technical, economic, and regulatory terms for the IoT overall.
Smart cities in high-tech, high-control places like South Korea (the city of Songdo) and the United Arab Emirates (Masdar) will be early indicators of this shift, as they implement more expansive visions of the IoT to combat problems inherent to dense urban living. In the next several years, planners of these cities will argue openly that human behavior could and should be “managed” by IoT applications in order to more effectively deal with the social, economic, and environmental challenges of city life. Importantly, such cities will be in a position to make that argument without the negative valence regarding surveillance that accompanies similar arguments in the Western world.
The real shift toward widespread IoT adoption would happen when governments in the United States embrace this new model in a more focused manner, probably as a response to urgent public needs. For example, California governor Jerry Brown might in 2017 announce a massive state investment in IoT technologies to respond to the state’s drought and water crisis. Sensors would be installed in rivers, dams, farms, groundwater, water districts, sewers, businesses, and homes, coupled with water-regulating instruments and on-demand water recycling devices, to create an IoW (Internet of Water) network that would provide precise data to the state and more effectively manage the incentives for citizens and businesses to conserve.4 Releasing this data into the public domain would create a vibrant market for private companies to build and sell new services and devices linked to the system—assuming, of course, the state of California is willing to refrain from overregulating it.
A massive, high-profile IoT initiative like this might very well gain broad public support as a “positive paternalist” action, the benefits of which overshadow vague and hypothetical concerns about privacy. Supporters will argue that, during a severe drought that threatens California’s fundamental sustainability as a society, how much water a home or business uses can no longer be considered a private matter, any more than an individual’s vaccination status can be considered a private matter during a severe epidemic.
In 2016 there is substantial willingness to accept the idea of government accessing vast swaths of private data in the name of counterterrorism surveillance. In this scenario, the public will become comfortable with granting even more access in the name of public progress, in part because the benefits will be more transparent, representing the creation of a public good that people can see and experience, as opposed to preventing a public ill that by its nature is invisible.
If the California “Internet of Water” begins to generate significant reductions in water use even during its first year or two of deployment, the notion of an “intentional IoT” will have gained a major foothold inside the United States. The benefits of this shift would be almost irresistible, and similar movements toward intentional IoT would follow in the rest of the developed world. At the 2018 UN Cyber Summit in Hong Kong, international standards for the storage, transmission, and encryption of IoT data might be consolidated, as the Gates Foundation and Chan Zuckerberg Initiative announce new low-cost, global “megaband” wireless networks to facilitate further IoT adoption. In 2019, not only major sporting events but also military reserve training and complex surgical practice might be featured as visible payoffs from distributed, immersive virtual reality. By 2020, personal security sensors built into clothing and other accessories might provide real-time data to police in Los Angeles about possible violent outbursts. Simultaneous outrage and acclaim will erupt. But the “public good” arguments will generally win the day.
In Europe, there will be deeper ambivalence about, and more resolute public resistance to, these developments. Europeans will see the United States solving some prickly public-good problems and will be tempted to encourage their governments to follow suit. At the same time, they will fear how these innovations undermine “traditional” ways of doing things, not least because many (most?) of these devices will be developed and sold by American companies and require the adoption of “American” principles of government management (including delegation to the private sector). One possibility is that Europe will implement new and stronger privacy protocols to enable a greater degree of comfort with these technologies, thereby slowing progress relative to other parts of the world. Would this become a new front in the economic competitiveness wars?
In cities and countries that do throw in with the new intentional IoT, public-private partnerships will flourish. For example, the Alphabet Intelligent Roads Center, the US Departments of Transportation and Homeland Security, and the state of Nevada might create a joint $10 billion investment over five years to upgrade all of Nevada’s highways to new SmartRoad 2.0 standards—enabling smart cars to communicate directly with roads. The vast data made available from a large-scale public-private initiative like this would be open to public scrutiny at a micro level. Outputs from such a consortium might include fewer accidents, a reduction in carbon emissions, and a rise in road capacity efficiency—and all before the institutionalization of driverless cars. A tangible reduction in traffic jams and measurable improvements in commuting time could secure public approval for the intentional IoT in other domains.
Such successes would become the roots of a broad social movement rising around the IoT. For instance, a coalition of engineers, policymakers, and social activists might come together to promote the “Intentional by Design” movement. This movement would call for IoT technologies to move beyond last decade’s “neutral platform” notion and onto a much more positive, activist concept of IoT build-out. The difference? The new platforms would contain specific and explicit “intent” to help solve societal issues.
With public support and commercial and government commitments in place, new investments in underlying technologies that could be quickly deployed will spawn a positive feedback loop where (at least for a time) applications would improve at an increasing rate. Low-cost sensors and mobile devices will see improved performance as the hardware foundation for the intentional IoT expands. Gains in available wireless spectrum and the adoption of updated Wi-Fi and Bluetooth standards will make it even easier and cheaper to deploy wireless devices. The growth of distributed computing will help reduce the overhead of doing massive processing on a central server, allowing ad hoc networks of devices to engage in de facto “mesh” supercomputing. Advances in and greater availability of data tools will allow engineers and data scientists to create “brilliant” devices that not only respond to their environments but reconfigure themselves within adaptive networks. The IoT might even become a driving force behind new developments in encryption to secure the transmission of data between low-power, inexpensive distributed devices.
This virtuous circle will continue for some time, and as it does, the scope and impact of the IoT will expand apace. Intentional IoT systems will be deployed in transportation, environmental, educational, health, military, and safety domains. The bolder the deployment strategies, the more compelling the results. Imagine a 2020 finding that vehicle accidents among people owning IoT cars have decreased by 36 percent, or that following the implementation of IoT Star5 refrigerators, the percentage of overweight Americans has stabilized (or even fallen a few percent). Or imagine that graduation rates for the first high-school class using IoT education systems increased by 7 percent. These developments would plausibly create a (much-needed) boost to overall economic growth in the United States. If US GDP were to jump 4 percent by the end of the decade, tied at least in part to IoT deployments, could the intentional IoT be seen as doing what the Federal Reserve and other central banks could not do—provide the antidote to a decade of secular stagnation?
The intentional IoT in this scenario will for many fulfill the promise of new technologies. After all, this vision aligns with what idealists of the early internet era (indeed, even of the Homebrew Computer Club era) believed digital technologies were supposed to achieve for people and societies. The winning argument might be simply that “wicked problems”6 like climate change and public health crises are too important—and have proved too hard—to solve by other means. These critical public-good “use cases” will drive and justify the investment (and risk) in the ambitious deployment of the intentional IoT. In this world, a large-scale IoT will have significant effects on nearly every aspect of people’s daily lives.
There will be enough delightful and meaningful experiences with the new IoT, from the profound to the mundane, to keep most people optimistic. Seamless personalized services foreshadowed at places like Disney World will become normalized and expected in many areas of life, including (to a surprising degree) in government services and healthcare. Devices will automatically send payments to other devices. Interpreting parking signs will become a thing of the past, as cars will know exactly how much to pay. Starbucks will create the Select SmartCup, a special IoT-enabled reusable cup that lets customers skip the line and head for a special machine that automatically creates a custom drink to their distinct taste (and automatically pays for it, of course).
Behind these headline gee-whiz stories will lie a deeper and more profound shift in social attitudes toward digital technologies. The ambivalence that in 2016 many people feel about the digital revolution will fade into the background (again) with this new burst of benefits that puts the IoT front and center in daily life. As happened with the World Wide Web during its first few “real” years, the IoT will become the focal point of public conversation. Academics will analyze and compare how countries use the IoT, shifting the comparison away from welfare-based vs. market-based forms of capitalism toward segments based on breadth and depth of IoT applications. Leading public intellectuals and political theorists will examine other dimensions of intentional IoT use, such as public vs. private implementations; whether these technologies generate greater benefits for labor or capital; and how much they cater to individual, communal, or societal problems. These will be seen not as speculative or marginal discussions, but rather as cutting-edge debates about a new technology horizon.
As always with digital technology, the most immediate and vehement counterarguments will come from privacy advocates raising the alarm about potential harms. But for the vast majority of people, the IoT’s benefits will outweigh concerns about mostly hypothetical risks. The American middle class in particular will aspire to use IoT technologies to “regain control” over health, family, work, and education. Much in the way that smartphone users today are willing to expose geolocation and identity data for the convenience of using top apps, in 2020 middle-class users will be willing to trade away even more information about themselves for an IoT-enabled lifestyle. For most, this choice will not even be perceived as a tradeoff.
At the same time, aspirations for IoT technology will not quite be matched by reality. New types of inequality will arise quickly and with possibly savage consequences in this world. The quality of services will differ dramatically for people unable to access the IoT compared to those who do have access. While the percentage of Americans not meaningfully connected to IoT systems would likely fall below 10 percent by 2020, that unconnected population (mostly those living below the poverty line and in rural areas) might see the quality of their public services erode even further. Insurance costs will rise for people who are unable to buy personalized health devices, retrofit their homes with IoT appliances, or access new smart cars. People living in areas that lack IoT sensor deployment will suffer as cities and states increasingly adopt data-driven investment and maintenance practices (foreshadowed in 2014 by problems with a crowd-sourced pothole detection app in Boston).7 Some of the disconnected will lose the ability to find fruitful work in the IoT-enabled economy: driverless cars, automated machinery lines, and electronic personal assistants will leave lower classes competing for increasingly scarce service jobs.8
These labor market effects were coming in any event, but in this world, the IoT will become a convenient locus to place the blame. Many groups that initially opposed the intentional IoT because of surveillance concerns would likely shift their focus toward measures that aim to alleviate new types of inequalities, particularly those around jobs.
Other industries that will be deeply affected by this shift, such as healthcare and education, will face a different problem: how to reap the benefits of the IoT without giving away the most important parts of their value chain and thus ceding market power to IoT companies. In 2016, some large healthcare and hospital firms are already developing their own IT systems, patient apps, etc., precisely to avoid tech company monopolies. By 2020, retail companies and large networks of schools may be doing the same. The smaller fish in these ponds will face more difficulties in matching these parallel IoT initiatives. For them, the choice will be stark: either lose a critical point of control in their business models or drop out of the race for the IoT altogether.
The Internet of Things will also become a part of consumer-dependent industries in new and innovative ways. Consider clinical drug trials: in 2016, most participants find their way to trials by word-of-mouth and lengthy screening processes. In 2020, the IoT for Clinical Trials will replace these informal and highly inefficient networks. Patients will be contacted about their eligibility for trials through automatic electronic screening systems and will be able to participate remotely using data already being captured through their personalized health IoT systems. Pharma companies could see a huge burst in new therapeutics being approved as a result.
The shift in attitudes toward the intentional IoT would be a boon for technology-first sectors that focus on automation and robotics. In fact, robots could come to be seen as the “next big step.” Particularly in areas such as transportation and logistics, it might become increasingly legitimate to argue that “the more autonomous the robot, the better the outcome for humans.” The CEO of Toyota might quote Abraham Lincoln in a keynote speech at the 2020 “Internet of Things World Conference” (which would have by now replaced RSA as Silicon Valley’s preeminent information security conference), referring to the IoR (Internet of Robots) as expressing “the better angels of our nature.”9 Today’s big internet companies—the Googles and Apples of the world—will increasingly focus on developing devices that have physical actuators, whether or not they label these as robots.
The ICT4D community (information and communications technologies for development—a social movement aimed at bridging the gap between technology and community development) would likely come to see the intentional IoT as a central new part of its approach, though, as in the past, there will be a variegated mix of successes and failures.10 ICT4D projects might well experiment with the use of blockchain technology for the transfer of IoT data, as foreshadowed by current IBM and Samsung projects.11 This would allow devices to communicate directly and reliably with one another in a decentralized system, reducing overhead and lessening the need to build large internet infrastructures in geographies that do not already have them.
In this scenario, the intentional IoT will become a critical policy lever for governments. The main policy debate will be not about whether we should use the intentional IoT to address governance and policy challenges. Rather, it will be about how the intentional IoT should be implemented, and whose intentions will be programmed into the system. The same debates that have swirled around digital technologies for 20 years—who makes design decisions and how laws and regulation should interact with engineering and design—will find their way into intentional IoT debates. Given the public interest in speeding the adoption of IoT technologies, governments will feel pressure to act much more nimbly than they have in decades past.
Federal, state, and municipal governments alike will see the IoT as a way to break logjams and get more done. A diversity of new and ambitious initiatives will result: in some cases, multiple actors will compete in the same domain; in others, stretch initiatives will fail to live up to their potential (think Boston’s “Big IoT Dig” starting in 2019). And in still others, governments may deploy technologies before they are ready. In the United States, new investments in the IoT energy grid will bring the country significantly closer to a national smart grid. In other domains, such as immigration, approaches and results will be more controversial. Will there be a real employer verification system? A virtual wall? Smart identity cards? IoT technologies will make all of these feasible but not any easier to agree upon.
The implications for citizens’ day-to-day lives could be sweeping, but perhaps the most significant impact will be on government itself. As a result of these new IoT-enabled problem-solving approaches and efficiencies, the perception that “government cannot get anything done” will begin to drop out of public rhetoric. The public will reap the benefits of IoT systems, and even be willing to pay taxes(!) to expand their impact. For the vast majority of public systems, this is great news. For systems that have grown to be dependent on the “fuzzy edges”—employment of undocumented immigrants in agriculture, for instance—the effects will be more mixed, with significant unforeseen consequences. When government fails to take enforcement action, the reason will no longer be incompetence. It will be, or at least be understood as, a purposeful choice.
Will the IoT be a global network? Probably not, as China’s “Great Firewall” would most likely be extended to IoT devices, made and programmed by Chinese companies and largely incompatible with Western IoT devices. The Chinese government will see massive value in the intentional IoT to improve citizens’ lives and monitor the actions of potential dissidents—but it will be wary of American IoT devices and software that might be used to empower dissidents to connect and communicate with one another or to “monitor” Chinese economics and politics from abroad. Concerns about “backdoors” and hardware built abroad with deliberately engineered defects will limit the readiness of autocracies in the developing world to import large numbers of foreign-made devices. The United States will share these concerns, and would probably become equally wary of imports. This would be another driving force pressing toward re-nationalization of at least some technology production and the possible emergence of nationally based, conflicting standards for device communication and interoperability. One can imagine that the Trans-Pacific Partnership trade agreement would have to add an IoT codicil by 2020.
For smaller countries not able to access economies of scale at the level of China or the United States, the choice will be framed as one between economic and monetary spheres of influence: join the US IoT, the Chinese IoT, or try to go it alone? Countries like Singapore that are already oriented toward a strong paternalistic state would find they also have an interest in using the intentional IoT for purposes beyond monitoring and surveillance, to nudge behaviors in ways they believe are positive for their societies. Surveillance (somewhat ironically) might become less noxious, as the mix between empowering state control of individuals to aid state power and improving the economic and social conditions of people tilts more toward the latter. Would countries like Qatar or the UAE become leaders in developing and fine-tuning this mix, deploying sensors in every roadway and car? In this world, Doha and Dubai could leapfrog Las Vegas to become the first truly smart-road cities in the world.
This combination of fascination with potential gains and anxiety about “national” technologies in the context of the IoT will also emerge as a transatlantic issue. Americans will expect Europeans to be as enthusiastic as they are about the new technologies; everyone has smartphones, after all, and do European consumers really miss Nokia? Given that Europeans are believed to be more trusting of “the state” than Americans, there is the possibility that adopting the IoT for the public good will be a very attractive argument in some countries. But many Europeans will be ambivalent and resistant, given privacy concerns and the changing role of government. This could be particularly important if American firms get aggressive about promoting their products and run roughshod over concerns (justified or not) about “too much data flowing back across the Atlantic into Silicon Valley.”
Cybersecurity of Things
In this intentional IoT world of 2020, there will no longer be an “internet and society” discussion; there will simply be a “society” discussion, as the internet fades into the ubiquitous background. And because digital technology is now present in almost every domain as part of the intentional IoT infrastructure, the term “cybersecurity” will feel dated. Cybersecurity will just be “security,” seen through the lens of traditional domains. IoT devices in the home will be in the realm of personal security; smart infrastructure and government-run systems will be part of national security; sensors and devices to deal with climate and energy will be a dimension of environmental security; and so on.
Technical expertise will be critical to all these domains, but the “cybersecurity specialist” model of years past will give way to a wider suite of skills that technical experts need to get systems running and keep them in working order. Preventing attacks and creating defenses will be as important as domain expertise, whether in the education, financial, or healthcare sector. The technician who visits your home to repair a washing machine or the airline mechanic who steps onto your plane in 2020 will have what in 2016 would have been considered pretty significant “cybersecurity” training. Basic device security might become a core part of the standard university or professional school curriculum.
Encryption will increasingly be built into most intentional IoT systems components by default. One of the challenges will be pushing out updates and patches on what might be a very frequent schedule, at the scale of billions of devices. With faster product development cycles, people and organizations will have to contend with many rapidly outdated IoT devices, as well as the burden of legacy devices that are still operational but no longer receiving updates, or that are no longer technically capable of implementing new encryption or other security systems. Non-digital companies—from LensCrafters to pipe manufacturers—will suddenly be tasked with putting sensors into their products; since such companies will have limited experience with computer security, their products are particularly likely to be vulnerable. The rapid rush to deploy the IoT will compound this problem, leading to security sloppiness that will be very hard to audit, much less clean up.
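The legacy-device burden described above is, at bottom, an inventory problem: which devices in a fleet are past their support window or cannot run current encryption? A minimal sketch of such an audit, with entirely hypothetical device records, field names, and version thresholds:

```python
# Illustrative fleet audit: flag devices that are past their update
# end-of-support date or cannot run the minimum required encryption
# protocol. All field names, devices, and versions are hypothetical.

from datetime import date

MIN_TLS = (1, 2)  # assumed minimum acceptable TLS version, as a tuple


def at_risk(devices, today):
    """Return IDs of devices that are unsupported or under-encrypted."""
    flagged = []
    for d in devices:
        out_of_support = d["support_ends"] < today
        weak_crypto = d["tls_version"] < MIN_TLS  # tuple comparison
        if out_of_support or weak_crypto:
            flagged.append(d["id"])
    return flagged


fleet = [
    {"id": "thermostat-01", "support_ends": date(2019, 6, 1), "tls_version": (1, 2)},
    {"id": "meter-17",      "support_ends": date(2022, 1, 1), "tls_version": (1, 0)},
    {"id": "camera-09",     "support_ends": date(2023, 3, 1), "tls_version": (1, 3)},
]
print(at_risk(fleet, date(2020, 1, 1)))  # ['thermostat-01', 'meter-17']
```

Even this toy version shows why the scenario’s audit problem is hard: the answer changes daily as support windows lapse, and at a scale of billions of devices the flagged list never empties.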
High-end criminals and ambitious terrorists will focus their attention on the most serious cyber-physical targets, such as critical infrastructure. Terrorists in particular will seek to undermine the growing confidence in Western governments created by the intentional IoT; ISIS and similar groups or their successors will see this confidence as an existential threat to their message and the political order they are trying to create. Plausibly, the IoT would replace the airplane as the nexus of terrorist attentions.
To access key targets, attackers will continue to seek vulnerabilities in outdated systems as an entry mechanism into more sensitive attack points, as they often do in 2016. But there will be more such “unaudited” interdependencies in 2020. To attack Google’s suite of digital services, a state actor might jump from traffic lights to the operating system of vehicles to the servers that manage traffic databases, and from there to Google’s robot operating systems.
Large state actors will similarly try very hard to penetrate one another’s core systems (much as they do today). But the stakes will be much higher in the 2020 intentional IoT world, because the possibility for a truly catastrophic attack will be significantly higher. These pressures will likely create an anxious state of deterrence equilibrium between world powers (the United States, China, and Russia). “The threat that leaves something to chance” had to be engineered into the nuclear deterrence world of the second half of the 20th century to enhance stability, but it will naturally be part of the IoT world due to the layers of complexity in relevant systems. Whether this comes to be perceived as a new “mutually assured destruction” equilibrium that creates a kind of strategic stability, or a very tense “first-strike advantage” environment that could be highly unstable, this dynamic could become one of the most important uncertainties that the major power states will confront. For smaller states, the choice may be reduced to picking sides by assessing security risk as much as—or more than—traditional political leanings. If China is seen as providing better IoT security than the United States, will Turkey or India throw in their lots with China instead?
Despite states’ best efforts to engineer against them, attacks and failures will still occur, sometimes at a large scale. Imagine that the smart traffic control system of Mumbai is attacked, causing cars to drive into one another and killing 1,000 passengers in minutes. Or a chemical factory’s systems could be hacked, contaminating water sources for several towns in France. Would these be turning points? In this world, probably not, as long as single failures do not cascade into systemic failures. As with accidents in socio-technical systems of the past—plane crashes, E. coli outbreaks, or defective airbags—the media will pay close attention, but most people will continue to use these systems because they do not see an alternative. Failures of the intentional IoT are likely to play out the same way: investigations will occur, new rules will be put into place, and consumers will be made aware of preventative measures they can take—but the overall system will march on.
For lower-level criminals unable to infiltrate the most highly protected systems, new types of attacks might focus on intentional IoT algorithms. Micro-attacks will try to alter such algorithms in small, seemingly undetectable ways. These changes will often be invisible until the results—which can take time to manifest—become widely visible. Consider a system that monitors the drinking habits of individuals genetically predisposed toward alcoholism. If an attacker could manipulate the algorithm so that a few more drops of alcohol can be consumed each day, the attack would likely go unnoticed until the individual lapsed into alcoholism. Or imagine a slight retuning of a million engines in gas vehicles resulting in an almost undetectable increase in gas consumption, which would in turn raise oil prices by one penny per barrel around the world. At scale, these kinds of manipulations could become the modern version of the mailbox “lottery scam” for financially motivated criminals.
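The engine-retuning example works because the per-victim change is negligible while the aggregate is not. The back-of-the-envelope sketch below makes that arithmetic explicit; every figure is a hypothetical assumption chosen for illustration, not a claim about real fuel consumption.

```python
# Back-of-the-envelope sketch of the engine-retuning micro-attack.
# Every figure here is hypothetical and chosen only for illustration.

vehicles            = 1_000_000   # engines quietly retuned by the attacker
gallons_per_vehicle = 500         # assumed annual fuel use per vehicle
drift               = 0.002       # 0.2% efficiency loss per engine

extra_gallons = vehicles * gallons_per_vehicle * drift
extra_barrels = extra_gallons / 42  # 42 US gallons per oil barrel

# Per victim: 1 extra gallon per year -- invisible at the pump.
per_vehicle = extra_gallons / vehicles

print(f"Per vehicle: {per_vehicle:.0f} extra gallon(s)/year")
print(f"Aggregate: {extra_gallons:,.0f} gallons (~{extra_barrels:,.0f} barrels)/year")
```

Each driver sees roughly one extra gallon a year, far below any detection threshold, yet the attacker has shifted aggregate demand by tens of thousands of barrels.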
But it might not only be criminals who find this sort of attack interesting. Analogous manipulations may come from those on the losing side of growing IoT-enabled inequality. “Domestic” disruptors and terror groups will try to bring systems down in dramatic fashion to call attention to their grievances. Other attacks might come from within the corporate sphere itself; someone who controls a counterfeit statin drug factory might want to manipulate eating and exercise behaviors in an unhealthy direction so as to spur demand for their (counterfeit) product.
Attacks will also focus on new targets whose “expected” behaviors are not yet fully understood.12 As machines get incrementally better at imitating human judgment, this will enable hackers to target attacks at individuals by working around the edges of what machines can and cannot do. Take what some call the “Internet of Money,” created by the many devices with access to individual financial information. The refrigerator that orders your milk has your credit card information, and so do enough other IoT devices that most people will not actually know where their payment data is stored. If a large number of these devices were attacked at scale for tiny amounts, the financial gains could be significant. Information collected by IoT devices on the body could also be a key vulnerability. Would hackers use changes in Fitbit data to predict pregnancy or mental disorders in particular individuals, and threaten to disclose such information to prospective employers unless a bounty is paid? The possibilities for IoT ransomware would expand apace.
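The “Internet of Money” risk is a classic salami attack: skims too small for any individual statement to flag, multiplied across millions of payment-capable devices. A minimal sketch of the aggregation, with all figures hypothetical:

```python
# Sketch of a "salami" micro-theft across IoT payment devices.
# Figures are hypothetical assumptions; the point is how small skims aggregate.

devices        = 5_000_000   # compromised devices holding payment credentials
skim_per_month = 0.03        # dollars skimmed per device per month,
                             # well below typical statement scrutiny
months         = 12

haul = devices * skim_per_month * months
print(f"Annual haul: ${haul:,.0f}")
```

Three cents a month never triggers a dispute, especially when most people cannot say which of their devices holds their payment data in the first place—yet the attacker clears millions per year.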
The public will demand a nearly unachievable level of coordination among the various partners in the sprawling IoT ecosystem to improve overall security. Protecting the integrity of one’s home by keeping device software up to date will require partnership among a large number of players. Updating software would probably continue to be the individual’s responsibility in most cases, but companies providing home services (such as utility companies) would also be responsible for (and see a commercial opportunity in) making sure the technology is installed and updated. There will be many gray areas that allow problems to slip through the cracks. For instance, some people will believe that it is the water company’s responsibility to inform residents if a leak is detected in the house, but others will contend that individual residents are responsible, given that they have real-time access to water usage data.
At a national level, governments will be focused on the now much larger task of protecting societal-level intentional IoT systems, particularly critical infrastructure, including smart roads, dams, and power grids (although there will still be strident debate about what constitutes “critical” infrastructure). Maintaining the security of the IoT’s “supporting” infrastructure—wireless spectrum, materials, and supply chains—will be critically important in this world, both for national security and for business and industry security. For example, systems might be built to block, jam, or spoof wireless communications. These can be used offensively (e.g., jamming communications between autonomous vehicles) or defensively (e.g., a building with walls that block interfering wireless signals, creating a safe wireless networking environment inside).
Given the risks, states might also ratchet up penalties for IoT hacking. Will Israel develop the IoT Defense Forces, a new military or law enforcement division designed to “protect the cyber-homeland”? More mundane moves—such as governments requiring adherence to particular system designs to “harden” the nation’s IoT systems, or state-approved “Trusted Platform Modules”—are likely, but will always come under pressure from the pro-innovation mindset that reigns in this world. What was already a large and unwieldy state cybersecurity agenda in 2016 will expand exponentially.
Governments will continue to invest in offensive capabilities, developing ways to use the intentional IoT subversively to achieve political-military and foreign economic policy ends. As is true in 2016, the line between criminal capabilities and offensive national capabilities will be difficult to define. If criminals can move prices through small market manipulations, then surely governments and militaries could do more—for example, inducing widespread water or fuel price fluctuations. The temptation to engage in increased surveillance—through televisions, refrigerators, smart meters, and devices on the body—will also be too strong for some to resist. Fights like those between Apple and the US Department of Justice over device security are likely to get even more contentious in the IoT space.13
Perhaps the greatest risk lies precisely with the greatest benefits: as communities get more networked, they will also grow more vulnerable. While smart cities and smart grids will be marketed as improving societal resilience, in another sense they may actually impede it. As communities become over-reliant on IoT technologies, they will struggle to manage even the smallest disruptions to those technologies. Ironically, then, a set of technology changes primarily driven by the state and reinvigorating its role in public life could ultimately make the state weaker and more vulnerable, all because that public life will be too dependent on IoT systems. The security stakes will go up appreciably, and it will feel like it happened while no one was watching.
The Way Forward
This is a world in which the Internet of Things shifts from aspirational to operational. Driven by governments newly able to resolve weaknesses in public service delivery, “smart” connected devices will appear in almost all facets of human life. IoT devices will create great opportunities to improve lives and service delivery, but these will be accompanied by new challenges and risks for users, operators, and innovators.
In this world, the public will not view IoT failures through the specific lens of “cybersecurity.” Rather, they will be seen simply as failures of an individual socio-technical system, or, often, the result of human error (such as when a person fails to update their software). Even where the technology is shown to be at fault and surprisingly vulnerable, intentional IoT ecosystems will still be seen on balance as beneficial to humanity. People will continue using connected devices, even as the stakes of security and vulnerability mount.
In this scenario, the cybersecurity research community will wish that in 2016 it had been working on:
- IoT Regulations: How the IoT should be defined, and how it should be regulated in particular sectors (including government vs. private sector)
- Cybercrime: How cybercriminals will change their activity if the IoT becomes the principal center of value creation in many industrial, economic, and government processes
- Security: How to build extremely high levels of security into the IoT system, and to foresee the types of social engineering or other attacks that will arise in this system. For instance, is there a parallel to phishing in the IoT space?
- Algorithms: How to manage the complexity of IoT-produced data at scale, and mechanisms for processing that data not only in narrow sectors, but across all of public life
- Keeping Up: How to keep the above-mentioned research at pace with technological innovation and the increasing levels of complexity within interconnected systems