Published in the Cutter IT Journal

June 3, 2015

Technology Abuse and the Velocity of Innovation

Hal Berghel

ABSTRACT: We introduce the concept of technology absurdism, which we define as the development of technology with inadequate consideration of potential negative externalities. We cite several examples in which the disastrous consequences of such development were easy for legitimate domain experts to anticipate. We argue that socially responsible development requires that potential technology abuse be factored into the calculation of velocity for planned innovation.

TECHNOLOGY ABSURDISM

Over the past year, the National Highway Traffic Safety Administration announced the recall of defective Takata air bag inflators in nearly 34 million automobiles (1), the largest recall in US automotive history. After several years of study, Takata reached a "preliminary conclusion" as of May 18, 2015 that the inflators can rupture. No news there; the victims had that figured out on impact. Takata reports that "It appears that the inflator ruptures have a multi-factor root cause that includes the slow-acting effects of persistent and long-term exposure to climates with high temperatures and high absolute humidity." (2) (Read: they rust and don't stand up to heat.) Takata has determined empirically in its lab that 0.51% of inflators returned from hot and humid climates rupture on deployment, and it estimates that 0.25% of passenger airbags deploy in the field each year. So if you're unlucky enough to own one of the recalled cars that has been operated in a hot and humid environment for a while, and your airbag deploys, your odds of experiencing metal shard cologne approach one in two hundred.
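
For the record, here is the back-of-envelope arithmetic, a minimal sketch that assumes (my assumptions, not Takata's) that the two rates are independent and that the lab rupture rate applies per field deployment:

```python
# Back-of-envelope Takata risk arithmetic. Assumptions (mine, not
# Takata's): the rates are independent, and the 0.51% lab rupture
# rate applies per field deployment.
p_rupture_given_deploy = 0.0051   # lab rupture rate, hot/humid-climate inflators
p_deploy_per_year = 0.0025        # estimated annual passenger-airbag deployment rate

print(f"rupture given deployment: 1 in {1 / p_rupture_given_deploy:,.0f}")
print(f"rupture per car-year:     1 in {1 / (p_rupture_given_deploy * p_deploy_per_year):,.0f}")
```

Small numbers for any one driver, but multiplied across tens of millions of recalled vehicles they imply ruptures every year.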

That the Takata airbags were not ready for prime time is not really at issue here. Let's analyze this recall from the point of view of product development and engineering. The analytical substance is as simple as our father's admonition not to leave his tools outside when we're done with them. Rust is not a foreign concept just now creeping into our technical vocabulary; for the past few millennia it has been associated with iron and moisture. Just what manner of metallurgy was Takata practicing that ignored the combined effects of moisture and heat on steel parts? The real story behind this recall involves accelerated prototyping and rush-to-market, inadequate product testing, lax oversight, a risk-benefit analysis gone awry, and a preoccupation with cost savings, all combining in a race to the bottom in product safety over technological shortcomings known since the Iron Age.

The proximate cause of the Takata recall is not too dissimilar from that of the recent Gulf of Mexico oil spill. On April 20, 2010 the Deepwater Horizon exploration rig, located 49 miles off the coast of Louisiana and drilling at a depth of more than three miles below sea level, blew up. Eleven crewmembers lost their lives, others were injured, and the largest oil spill in US history resulted. In January 2011, President Obama's National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling (3) reported that the accident was entirely avoidable and was due to failures at all levels of management. But there are shades of Takata in this story as well.

A fail-safe device, a "blowout preventer," was in place at the time of the spill. It was specifically designed to prevent exactly what happened, management failures or not. But the blowout preventer's "deadman" system failed to deploy during a poorly implemented temporary abandonment procedure. It seems that no one had bothered to test the blowout preventer to see whether it would work in this application! As a consequence, some 5 million barrels of crude polluted the Gulf of Mexico. The blowout preventer is the analogue of the airbag inflator!

There is another variation on technology absurdism that bears mention. It arises when a technological solution to a known risk is both understood and available, but is intentionally not used, usually for economic or political reasons. The delayed introduction of seat belts by the automobile industry was a product of just such a risk-benefit analysis: it was more cost-effective to settle with victims after the injury than to invest in seat belts to prevent it. A current example of this reasoning is found in the rail industry's recent resistance to introducing positive train control (PTC). The Amtrak derailment in Philadelphia on May 12, 2015, which resulted in eight deaths and over 200 injuries, highlights the dangers inherent in letting cost override safety considerations. The value of PTC has been understood for decades, but weak Congressional resolve allows the industry to avoid the expense. (4) As a datapoint, the 2008 Rail Safety Improvement Act (5), which mandated that each Class I rail carrier develop a plan for PTC by December 31, 2015 (sec. 20157) and have it installed by December 31, 2018 (sec. 20156), is not likely to be enforced any time soon. At the request of the rail industry, several Senators have proposed that even the 2018 date be deferred.

Congressional response to the ever-increasing risk, combined with the growing unprofitability of Amtrak, was a 1994 law that capped rail passengers' single-accident liability at $200 million (49 U.S.C. § 28103) (6), a nominal cap that inflation has since eroded to the equivalent of roughly $126 million in 1994 dollars. This is classic political reasoning: reduce the risk to the political donor class by limiting the liabilities owed to potential claimants. History has recorded the effect. The 2008 Chatsworth rail disaster pushed liabilities beyond this cap; the presiding judge had to lower the cash payout he had calculated by 25% to fit within the legal limit. (7) From the point of view of politicians, and of the transportation industry that supports them, lowering the settlement cap and delaying the implementation of PTC are preferable to investing in public safety, so long as no criminal penalties accrue to the transportation executives and the civil penalties remain modest. It's just the cost of doing business, much like the way moral hazard is handled in banking and finance. (8)
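
The erosion is one line of arithmetic. A sketch follows; the CPI-U index values are approximations of my own, supplied for illustration rather than drawn from the statute:

```python
# Deflating the statutory cap. CPI-U annual averages are approximate
# (1994: ~148.2, 2015: ~237.0) and are my assumption for illustration.
cap_nominal = 200_000_000          # 49 U.S.C. 28103 single-accident cap
cpi_1994, cpi_2015 = 148.2, 237.0
real_value = cap_nominal * cpi_1994 / cpi_2015
print(f"${real_value / 1e6:.0f}M in 1994 dollars")  # ~ $125M: inflation quietly shrinks the cap
```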

These examples are the segue I've chosen to introduce the concept of technology absurdism: the development of technology that ignores, fails to appreciate, or underrepresents obvious negative externalities. (9) The Philadelphia train derailment is the other side of the technology absurdism coin, where a technological solution to a problem is available but withheld for economic or political motives, or mere convenience. One of the claims I'll make here is that technology absurdism is an epidemic that needs to be addressed. The solution is neither obvious nor easy to implement, and those of us in positions of technology leadership, or who are domain experts, need to take responsibility for a measurable part of the problem.

DIGITAL WRIGHTS MANAGEMENT AND ENERVATION

The realm in which technology absurdism reigns supreme is information technology. Low entry costs, easy access to computers and networks, widespread availability of a broad base of high-quality development software, and huge potential markets for inexpensive products make this the absurdist's environment of choice for poorly-thought-through ideas.

My favorite exemplar at the moment is my new WiFi-enabled bed. (10) I know what you're saying: sure, this bed will work with a static IP, but can it work within a Class C sleep space served through DHCP? Well, yes, it can. And of course, both Apple iOS and Android apps are available for your smartphone.


Now, I understand the allure of functional product differentiation, but I'm not seeing the unique selling proposition here. Rather, this slumber feature tells me that there are too many STEM graduates with too much time on their hands. In this case, we'll refer to the anonymous enervators collectively as "bedwrights" and subsume the fruits of their labors under the newest form of intellectual property protection: Digital Wrights Management. By parity of reasoning, those who might seek to circumvent DWM shall be known as hacklers, as in "She's being prosecuted for hackling into the CEO's Web bed." What is the appropriate boudoir information security policy? Would porting over the default policy from the family room be considered an egregious breach of our trust model? Inquiring minds want answers.

Surely one of the most egregious breaches of digital best practices, as well as of truth-in-advertising, was the recent TRENDnet IP security camera fiasco. (11) According to the Federal Trade Commission complaint (12), TRENDnet's SecurView IP cameras were never all that secure. Specifically, "Respondent has engaged in a number of practices that, taken together, failed to provide reasonable security to prevent unauthorized access to sensitive information, namely the live feeds from the IP cameras," including transmitting and storing login credentials in cleartext, failing to respond to user and third-party vulnerability reports, and failing to test the bundled software. By the time of the FTC complaint, hackers had posted links to 700 Internet-connected security cameras for all to see. After two years and extensive media coverage, TRENDnet patched its software. On January 16, 2014 the FTC ordered TRENDnet to introduce security protections into its SecurView product line consistent with its product representations. (13) The objective here is not to beat up on TRENDnet, which has wandered no farther afield of citizens' privacy expectations than other high-tech companies (14), but to reinforce our point that technology absurdism in one form or another is rampant. In this case, TRENDnet failed to embrace any reasonable interpretation of industry best practices for Web video security and privacy, practices understood since the days of the Trojan Room Coffee Pot. In terms of user security and privacy, the operational differences between the SecurView Web camera system and the analog baby monitors of the 1990s were purely cosmetic.

It is worth mentioning in this regard that some of us feel the Internet-enabled security camera is still not ready for prime time. One of the attack vectors exposed in the TRENDnet and related compromises is actually a TCP/IP feature: IP-addressable services require service banners in order to function. So-called Internet banners are really just the protocol headers offered by servers for session negotiation (e.g., protocol versions supported, server-side Web software and version numbers, etc.). This information must be public because it is required for the connection to work. But these banners all too frequently give up more information than needed, such as default passwords, GPS data, and configuration settings. This applies to all common TCP/IP protocols, including those used by industrial controllers, traffic signals, nuclear power plants, and the other miscellaneous componentry of our ill-conceived Internet of Things. In fact, there is a search engine designed specifically to search for Internet banners: SHODAN. (15) SHODAN now searches for over 170 Internet banners in much the same way that Web search engines locate HTML data. What is more, SHODAN was launched a year before TRENDnet's under-secured Web cameras were sold. From any reasonable security and privacy perspective, exposing security camera imagery to the entire Internet has never been a good idea, and connecting cameras without verifiably robust security practices in place has been downright irresponsible for most of the past fifty years.
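
To make the point concrete, here is a minimal banner-grab sketch; the host and port are placeholders of mine, and the technique is exactly what SHODAN automates at Internet scale:

```python
# Minimal banner grab: connect and read whatever the service volunteers
# before any authentication. Host and port below are placeholders.
import socket

def grab_banner(host: str, port: int, timeout: float = 3.0) -> str:
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)
        try:
            return sock.recv(1024).decode("ascii", errors="replace")
        except socket.timeout:
            return ""  # some protocols wait for the client to speak first

# An FTP or SMTP server will typically announce its software and version:
# print(grab_banner("192.0.2.10", 21))
```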

Be that as it may, the FTC's complaint against TRENDnet was twofold: best security practices weren't followed, and, more importantly, the corporate claims of security and privacy protections were vastly overstated, if not downright misrepresentations. As I write this, Omron, a manufacturer of programmable logic controllers (PLCs), makes the following claim about its products: "the security risk [of using Omron PLCs] is very low. Hackers and other evildoers, when they are attempting to 'hack' into a network, usually go through a process of Port Snooping to determine what UDP and TCP ports on a router are open and connected to a PC (vulnerable). Standard Ethernet communication protocols are used in this process. When a router is forwarding a TCP or UDP port to an Omron PLC, the traffic is being delivered to a non Windows [sic] based operating system. This makes the PLC impenetrable to standard hacking methods." [emphasis added] (16) The quoted analysis goes well beyond naïve and uninformed; it amounts to digital blasphemy. That this report remains online, and was called out on the SHODAN blog on February 9, 2015, should not be overlooked!
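
For reference, the "Port Snooping" that Omron waves away is a few lines of code in any language, and, contra the white paper, nothing in it cares what operating system answers. A toy sketch (the target address is a placeholder of mine):

```python
# Toy TCP connect scan -- the "Port Snooping" of the Omron white paper.
# An open port is an open port whether the listener runs Windows,
# Linux, or PLC firmware.
import socket

def scan(host: str, ports, timeout: float = 0.5):
    open_ports = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)  # something accepted the connection
        except OSError:
            pass  # closed, filtered, or unreachable
    return open_ports

# print(scan("192.0.2.10", range(1, 1025)))  # placeholder address
```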

MIND MY DOTS, MAPARELLA

Let's move from the specific to the general. Several categories of technology invite technology absurdism. Certainly the use of RF technology wherever privacy and security are of concern is at the vanguard of this movement. (17) Examples of such RF mistakes include the Western Hemisphere Travel Initiative's PASS card, which exemplifies the military-industrial-surveillance complex's penchant for technology absurdity. Another is the deployment of RFID cards and tags modeled on faith-based security standards (read: if I wish it to be secure, then, by definition, it is). (9) A third example is the development of the Wired Equivalent Privacy protocol in 802.11 WiFi. (18) This last example has the additional twist that the vulnerability was actually built into the IEEE standard. As I've written about these topics elsewhere, I'll suppress the temptation to elaborate here.

Another technology ripe with opportunities for technology abuse is the Global Positioning System (GPS). GPS distinguishes itself by offering both security and privacy vulnerabilities. From the security perspective, commercial GPS is easily spoofed. (9)(19) This is easily understood if one thinks back to the Clinton Administration's elimination of Selective Availability in May 2000. One may recall that in prior years accuracy was measured in tens of meters; after SA was eliminated from commercial GPS, accuracy improved to within a few meters on average. Spoofing is in effect just a way of turning Selective Availability back on through "satellite cloning." It arises because commercial GPS uses triangulation based on unencrypted and unauthenticated signals. As with RF systems generally, the connection is established with the strongest available signal, so any transmitter that "spoofs" a legitimate GPS satellite signal with a stronger one can feed data to the triangulation algorithms. Todd Humphreys has demonstrated empirically that spoofing can easily produce GPS "blunders" (triangulation errors measured in miles). (19) Not only was GPS spoofing understood at the design stage, its use as a vulnerability was entirely predictable. (For this reason, the military adopted an anti-spoofing module over a decade ago.) None of that helps the typical commercial GPS user. And this is to say nothing of the triviality of GPS jamming for a criminal or terrorist who wants to produce a crash but isn't terribly invested in its time and place.
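
The fragility is easy to demonstrate numerically. Below is a toy two-dimensional sketch; all beacon positions and ranges are invented for illustration, and real GPS solves for three dimensions plus receiver clock bias from pseudoranges. A least-squares fix is computed from three range measurements, then a single range is corrupted, as a spoofer with an overpowering signal can arrange, and the fix wanders:

```python
# Toy 2-D range-based positioning ("triangulation" in this article's
# usage). All positions and ranges are invented for illustration.
import numpy as np

sats = np.array([[0.0, 100.0], [95.0, -20.0], [-80.0, -55.0]])  # beacon positions (km)
truth = np.array([5.0, 2.0])                                    # true receiver position
ranges = np.linalg.norm(sats - truth, axis=1)                   # honest range measurements

def fix(ranges, guess=(1.0, 1.0), iters=25):
    """Gauss-Newton least-squares position fix from range measurements."""
    p = np.array(guess, dtype=float)
    for _ in range(iters):
        diffs = p - sats
        dists = np.linalg.norm(diffs, axis=1)
        residual = dists - ranges
        jacobian = diffs / dists[:, None]   # d(range)/d(position)
        p -= np.linalg.lstsq(jacobian, residual, rcond=None)[0]
    return p

print("honest fix:", fix(ranges))       # recovers roughly (5, 2)
spoofed = ranges.copy()
spoofed[0] += 30.0                      # one overpowered, lying signal
print("spoofed fix:", fix(spoofed))     # the fix wanders far from the truth
```

The point is not the solver but the absence of any authentication step: the algorithm trusts whatever ranges it is handed, which is precisely the opening a spoofer exploits.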

Perhaps more insidious is the use of GPS dots (19), micro GPS trackers about the size of a slice of a typical pencil eraser that may be used to track a target's position. Absent regulation, GPS dots will become inexpensive and ubiquitous in the years to come. That will make them the surveillance tool of choice for snoops everywhere: governmental spy agencies, divorce attorneys, law enforcement, government contractors, criminals, and predators alike. Only in this case, abuse of such trackers will not run afoul of the FTC. To my knowledge, there is no federal statute that regulates such surveillance by non-government interests.

THE DEVOLUTION OF INNOVATION

I offer the following hypothesis for your consideration, call it "Gresham's Twist on Moore's Law": the world's capacity to create absurd technology doubles every 18 months, where absurd technology is to be understood in the sense explained above. Technology absurdism is unique to our post-industrial information age, where the velocity of innovation has increased to the point that it is often unbridled by adequate reflection, context, understanding, and oversight. This was not the case in the kinetic, analog world of our parents and grandparents. While they may have lived in a Rube Goldberg world, we live in a world defined by the hazards identified by George Orwell and Aldous Huxley.

It is precisely this velocity that is the cause for concern. Innovation came gradually to the industrial age. Samuel Morse's wired telegraph (1837) was separated in time from Guglielmo Marconi's wireless telegraph (1894) by over a half-century. That provided an ample temporal palette for refinement and contextualization. It also gave society time to adapt. We note that Wheatstone's ABC character-input telegraph (1840), Bain's facsimile machine (1843), Bain's chemical paper printer (1846), Hughes' keyboard telegraph (1855), Phelps' motorized teleprinter (1880), and the message-routing Telex system (1930) were spread out over nearly a century after the invention of the telegraph. That allowed each innovation to mature at more or less its own speed, building upon past achievements, finding its own niche, and for the most part negotiating a responsible pathway to market. Had all of these advances occurred in the same decade, technological chaos would have worked against this maturation process. In effect, that's the problem high-tech innovation faces today. I like to think of this as technology devolution (in the biological sense), where there isn't time for the technological equivalent of natural adaptation to take effect. Progress is blocked because mutations take place more or less randomly, concurrently, and independently. Had this happened in biology, Darwin would have documented wildly implausible and ephemeral organisms that devolved into chaos rather than evolved into order. Just as biological devolution would lead from complex life forms to more primitive and purposeless ones, the devolution of high-tech innovation regresses from useful technology platforms to those of dubious value that may work against society's interests. Not that this effect is intended; it is produced by errors of omission rather than commission. Society lacks the time to detect and purge the worst of the bad ideas before widespread adoption. That responsibility is left to technologists.

Unfortunately, in this devolutionary climate the worst is ahead of us. Poorly designed vehicle telematics are easily hacked, turning the micro-level controls used by anti-lock braking systems into nightmarish hazards at freeway speed. RF-enabled heart pacemakers and insulin pumps invite hacking. Cell phone kill switches (now required in many jurisdictions) offer a bouquet of incentives to the criminal element, from bricking mobile devices as a barrier to evidence collection to preventing victims from calling for help. Microtaggants abound for misplaced surveillance and invasion of privacy. Perfluorocarbon scent emitters are ideal for covert tracking of the unwary. Add to that an expansion of drone space without antecedent community agreement on privacy expectations, driverless cars and robots that invite weaponization, and even the pedestrian Oregon mileage-based gas tax program that penalizes fuel efficiency, and our future looks dim even by the standards of Orwell and Huxley.

With the velocity of innovation at current speeds, wherefrom are best practices to spring? The answer is not to be found in industry, which is incentivized to accelerate the introduction of new products rather than reflect on how well they serve society. Nor is the answer to be found in a political process fueled by special interests. Higher education can certainly play a role, but only if there are courses that treat the regulation of innovation as a social good rather than racing toward it for economic reasons. If there are such courses, I haven't seen them, and they're unlikely to fit well into the entrepreneurship programs so much in vogue these days. I'm not at all confident that academic leadership will rise to this challenge anytime soon. (20)

That pretty much leaves technology leaders, whose job must include some understanding of how to identify the potential negative externalities of an innovation before deploying it. In each of the examples above, competent domain experts knew, or should have been able to anticipate, the potential abuses that resulted. This is indeed not "rocket science." This is not to say that technology leadership can deflect an organization's first-to-market mentality, but it can inform and document potential negative externalities in white papers for corporate and government leaders to consider. Our industry demands more iconoclasts!

If we accept the premise that not everything we can do is worth doing (not an unreasonable assumption), the preposterousness of accelerating innovation without full consideration of its negative consequences is easier to spot as an absurdity. The velocity of technology innovation needs to be throttled to the point where society can control it, and there are no external controls adequate to that challenge. Knowledge-domain experts are the appropriate change agents, lest the executives remain stuck on stupid. This is not Luddism, but lucidity.

References:

(1) Safercar.gov http://www.safercar.gov/rs/takata/index.html (viewed 5/27/15)

(2) Defect Information Report 15E-043-2, TK Holdings Inc., May 18, 2015. www.safercar.gov/staticfiles/safercar/recalls/15E-043.pdf (viewed 5/27/15)

(3) DEEPWATER: The Gulf Oil Disaster and the Future of Offshore Drilling, Report to the President from the National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling, January, 2011. http://www.gpo.gov/fdsys/pkg/GPO-OILCOMMISSION/pdf/GPO-OILCOMMISSION.pdf .

(4) Flegenheimer, Matt, et al. "Amtrak Crash Illuminates Obstacles to Plan for Controlling Train Speeds." New York Times, May 18, 2015. http://www.nytimes.com/2015/05/19/us/amtrak-crash-illuminates-obstacles-to-plan-for-controlling-train-speeds.html?_r=0 .

(5) Federal Rail Safety Improvements. Public Law 110-432, Oct. 16, 2008; 122 STAT. 4848. https://www.congress.gov/110/plaws/publ432/PLAW-110publ432.pdf .

(6) 49 U.S.C. 28103 – Limitations on Rail Passenger Transportation Liability. http://www.gpo.gov/fdsys/pkg/USCODE-2011-title49/pdf/USCODE-2011-title49-subtitleV-partE-chap281-sec28103.pdf

(7) Williams, Carol J. "Compensation Determined for Metrolink Crash Victims." Los Angeles Times, July 15, 2011. http://articles.latimes.com/2011/jul/15/local/la-me-0715-metrolink-damages-20110714 .

(8) Berghel, Hal. “The Future of Digital Money Laundering.” IEEE Computer, Vol. 47, No. 8, August, 2014, pp. 70-75.

(9) Berghel, Hal. “Noirware.” IEEE Computer, Vol. 48, No. 3, March, 2015, pp. 102-107.

(10) Tempur-Ergo Grand Complete Reference Guide. Tempur-Pedic Management, LLC. 2013. https://www.tempurpedic.com/assets/pdfs/TEM-TEMPUR_Ergo_Grand_Owners_Manual_MAR_15.pdf

(11) Hill, Kashmir. "Camera Company That Let Hackers Spy On Naked Customers Ordered By FTC To Get Its Security Act Together." Forbes, September 4, 2013. http://www.forbes.com/sites/kashmirhill/2013/09/04/camera-company-that-let-hackers-spy-on-naked-customers-ordered-by-ftc-to-get-its-security-act-together/ .

(12) U.S. Federal Trade Commission Docket C-4426, In the Matter of TRENDNET, Complaint, January 16, 2014. https://www.ftc.gov/system/files/documents/cases/140207trendnetcmpt.pdf .

(13) U.S. Federal Trade Commission Docket C-4426, In the Matter of TRENDNET, Decision and Order, January 16, 2014. https://www.ftc.gov/system/files/documents/cases/140207trendnetdo.pdf .

(14) Maass, Peter. "Your FTC Privacy Watchdogs: Low-Tech, Defensive, Toothless." Wired, June 28, 2012. http://www.wired.com/2012/06/ftc-fail/all/ .

(15) SHODAN website. http://www.shodanhq.com .

(16) Hughes, Jay. “How Safe is Allowing Remote Access to Omron PLCs Via the Internet and How is it Accomplished?” 2009 Omron Technical Report. https://echannel.omron247.com/marcom/pdfcatal.nsf/0/7CC1E9D8D2A1C3BF862573760063920C/$file/InternetAccessToPLC_whitePaper_en_200910.pdf .

(17) Berghel, Hal. “RFIDiocy: It's Déjà Vu All over Again.” IEEE Computer, Vol. 46, No. 1, January, 2013, pp. 89-92.

(18) Sthultz, Michael, Jacob Uecker and Hal Berghel. “Wireless Insecurities.” In Marv Zelkowitz (ed.), Advances in Computers, Vol. 67, Elsevier, pp. 225-251, 2006.

(19) Humphreys, Todd. "The GPS Dot and Its Discontents: Privacy vs. GNSS Integrity." Inside GNSS, March/April 2012, pp. 44-48. http://www.insidegnss.com/auto/marapr12-Humphreys.pdf .

(20) Berghel, Hal. “Borderline Executive Disorder.” IEEE Computer, Vol. 48, No. 4, April, 2015, pp. 112-116.