As this year comes to an end, I am grateful that it is not in my purview to predict world events for the coming year. While I did fairly well with my security predictions for 2016, I’m not sure that Nostradamus himself could have foreseen what the past year had in store for us. The geopolitics of 2016 were downright stunning, from Brexit to the improbable trajectory of the U.S. election. I was wrong on the Cubs winning it all and I didn’t anticipate the popularity of adult coloring books. And who among us saw Bob Dylan winning the Nobel Prize? So as we look ahead to 2017, I gladly return to the matter of cybersecurity, and present to you my predictions. Here's an overview; full text is below.
Ransomware gets physical: Attackers will take over and disable hardware as a way to extort money from corporate victims.
IoT bankruptcy: Companies that refuse to bake security into their IoT products will suffer financial repercussions.
You can’t legislate security...: There’s an increased call for governments to impose cyber-regulations, but is there the political will?
…But you can insure it: As insurers get better at predicting risk, look for a flood of new offerings and startups in the cyber-insurance industry.
Offensive offense: You can expect an unfortunate increase in nation states conducting cyber-espionage for the sole purpose of embarrassing or undermining enemies.
#Hashtag journalism: Sites that provide a platform for user-supplied content will attempt to weed out misinformation without crossing the line into censorship.
Data breaches 3.0: Instead of stealing data, attackers in 2017 will seek to manipulate data, unleashing potentially dire and long-lasting consequences.
Smart home, smarter criminal: With their shoddy security, IoT devices that are making homes “smart” are also making them vulnerable.
Rise of the machine (learning): AI platforms are helping organizations sort through mountains of data to improve security; unfortunately, they’re bound to become useful to attackers, too.
Social networking bots: Chatbots are a boon for customer service organizations; expect more of them to be turned loose on unsuspecting victims of social engineering.
Ransomware remains one of the most effective means of monetization for cybercriminals, and no single threat has driven more corporate security procurement decisions over the past year. Has ransomware peaked? Don’t count on it. Most ransomware attacks observed to date remain relatively unsophisticated, relying primarily on social engineering as the infection mechanism. Why? It works. Attackers don’t need to pull zero-day tricks out of their bags to infect PCs when signature-based defenses are easily evaded and humans remain gullible. None of that has changed. But the targets the attackers are going after are changing.
In my 2016 predictions, I stated that attackers would begin targeting corporations and demand far higher ransoms — that has certainly happened. So what’s next? The vulnerable state of IoT devices is finally front and center thanks to the Mirai botnet DDoS attacks. Expect ransomware authors to next train their sights on Internet-enabled hardware devices. This phase of ransomware will be different. The threat of encrypting the victim’s data will be replaced with extortion by way of disabling physical systems. Corporations are all too willing to pay ransom demands when valuable intellectual property has been locked up. The silver lining in the current generation of ransomware attacks is that enterprises are finally taking data backup seriously and upping their security game with next-gen endpoint protection — two defenses that will do little to protect vulnerable IoT devices. If enterprises are willing to pay ransom to retrieve valuable data, how much will they be willing to shell out when an assembly line or manufacturing plant producing millions of dollars of goods per day is brought to a grinding halt?
We’ve only seen the tip of the iceberg when it comes to IoT security, so that topic makes its way into two predictions this year. The Mirai botnet was a shot across the bow; a wakeup call for the hardware industry in the same way that Code Red and Nimda were a wakeup call for the software industry some 15 years ago. Those fast-spreading worms shed light on the catastrophic economic damage that awaited software vendors if something didn’t change. Vendors feared that they could be held liable for the damage done by the vulnerable systems they produced, and they feared that if they didn’t self-police, the government would step in and do it for them. Bill Gates recognized the threat and ultimately sent out his famed Trustworthy Computing Memo, which fundamentally changed the way Microsoft developed software, baking security in at the beginning rather than brushing it on at the end.
For some hardware vendors, Mirai will serve as a battle cry; for others, those asleep at the switch, it will be a death knell. The hardware industry faces a much steeper challenge than software did, as insecure devices often cannot be fixed via firmware patches, either because there is no mechanism for an upgrade to be applied or because the patch simply wouldn’t be installed even if made available (when was the last time you patched your TV?). The financial repercussions of Mirai have already been felt by Chinese manufacturing firm Hangzhou Xiongmai Technology, which was forced to recall the vulnerable OEM webcams that fueled the botnet. With IoT devices, the corporate logo on the box often does not represent the company that developed the underlying technology; it is common for competing products to leverage the same underlying web server/client or hardware components developed by a single entity. As such, when a vulnerability is discovered, it can impact hundreds of unique products. When such a technology represents the lion’s share of an OEM’s revenue, a major recall combined with the threat of lawsuits creates the perfect storm to put the company out of business. Expect to see such an example in the coming year.
In 2016, we saw a number of calls for the government to step in and force certain practices within the security industry. The most high-profile demands came in February 2016, when Apple objected to an order issued under the All Writs Act, which would have compelled Apple to provide the FBI with access to an iPhone 5c that had been used by one of the perpetrators in the San Bernardino terrorist attacks. Apple argued that doing so would require creating an entirely new and insecure version of iOS (Apple’s mobile operating system), which would in turn put consumers at risk. The FBI ultimately withdrew the demand when it obtained access to the phone using a vulnerability provided by a private security firm, for which the FBI allegedly paid more than $1M. That transaction only put the issue on the back burner to simmer a little longer, as the vulnerability provided was specific to older iPhone models.
The cry for government intervention in cybersecurity also grew louder following the Mirai botnet attack, with many leaders, including noted cryptographer Bruce Schneier, calling for the government to step in. The challenges with such intervention are twofold. Following the Snowden revelations, the public is increasingly wary of any legislation that could be perceived as infringing upon personal privacy by providing access to personal data. Even if the access is driven by national security interests, Snowden showed us that such access can and will be abused.
On the IoT front, while my opinions often align with Bruce’s, this is one time that he got it wrong. In a hyper-connected world, there are no borders, and U.S. legislation (or legislation from any individual government) cannot solve the problem. Virtually all vulnerable IoT devices are manufactured outside the U.S., so their manufacturers could simply skirt U.S. laws. Even if manufacturers were barred from selling products in the U.S. that failed to meet a higher security standard, they would simply produce two tiers of products, selling the compliant (and more expensive) tier in the U.S. and the insecure tier everywhere else. That’s of little comfort to the U.S. corporation that is under a DDoS attack from webcams running in Thailand.
With a conservative administration coming into office, you can be sure that calls to increase cybersecurity, even at the expense of personal privacy, will increase and make it to the House floor. However, with that same administration being heavily pro-business and the tech sector braced to oppose any legislation that impacts consumer privacy or increases manufacturing costs, all such bills will die on the House floor.
The insurance industry is one that’s ripe for disruption. With data breaches becoming the norm, cyber-insurance has also become the norm for large enterprises. A breach is now like an illness; it’s not if it will happen, but when, and when the day arrives, you don’t want to be hit with out-of-pocket costs that could sink you. Enterprises seek to turn the cost of a data breach into a more predictable expense and mitigate the overall risk. That’s where cyber-insurance comes in.
Insurance companies employ actuaries who are very good at looking at large pools of data and determining the statistical likelihood of a specific event occurring. That likelihood then gets wrapped into a policy and everyone is happy. Insurance companies are desperate to get in on the cyber-insurance game, but they face a big challenge: how do they calculate the likelihood of a breach?
Life insurance is easy — plenty of people have lived and died and we have solid data on it. With great accuracy, we can predict when you’re likely to die based on your physical and demographic profile. Data breaches are entirely different. For one, the risk has only existed for a couple of decades at most, so there is limited data. Beyond that, any company can be hacked. When Morgan Stanley, Apple, Google, Microsoft, and many other leaders have all had to fess up to breaches, it demonstrates that no company is immune and no security controls are impenetrable. How do insurance companies deal with this? Today, they’re forced to limit the size and scope of policies to also limit the size of a potential payout, as they simply don’t have confidence in their ability to fully understand risk. In doing so, they’re leaving money on the table and they know it.
A variety of startups have emerged to help fill the void. The early entrants into the space have attempted to generate a FICO-score equivalent for cybersecurity, but scoring today is limited to externally visible threats. In order to generate true value for insurance companies, risk-scoring algorithms will need to go far deeper and integrate with internal corporate security systems to gain a complete picture of the threat landscape for a given entity. Such a system would benefit provider and consumer alike: insurance companies could offer policies with broader scope, while diligent corporations could drive down premiums by continually demonstrating best-of-breed security controls. With the cyber-insurance industry finally taking off with a 36.6% CAGR (well ahead of the overall cybersecurity industry at <10%), the timing is right for an explosion of startups in this space.
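To make the idea concrete, a deeper risk-scoring model of the kind described here might combine externally visible signals with internal telemetry into a single breach probability that feeds a premium calculation. The sketch below is purely illustrative: every feature name, weight, and constant is a hypothetical assumption, not any insurer's or startup's actual model, which would be trained on real claims data rather than hand-tuned.

```python
# Hypothetical cyber-risk score blending external scan data with internal
# telemetry. All features and weights are invented for illustration.
import math

# Positive weights increase breach likelihood; negative ones mitigate it.
WEIGHTS = {
    "exposed_services": 0.4,        # externally visible (scan-based)
    "days_since_last_patch": 0.02,  # internal telemetry
    "pct_staff_phish_failed": 3.0,  # internal phishing-test failure rate
    "has_mfa": -1.5,                # mitigating control lowers risk
}
BIAS = -3.0

def breach_probability(features: dict) -> float:
    """Logistic model: map a weighted feature sum to a 0-1 probability."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def annual_premium(features: dict, expected_loss: float,
                   load: float = 1.25) -> float:
    """Premium = expected payout times an expense/profit load factor."""
    return breach_probability(features) * expected_loss * load

# Example profile for a hypothetical insured company.
profile = {
    "exposed_services": 3,
    "days_since_last_patch": 45,
    "pct_staff_phish_failed": 0.12,
    "has_mfa": 1,
}
print(round(breach_probability(profile), 3))
print(round(annual_premium(profile, 1_000_000), 2))
```

The point of integrating internal systems is visible in the model: toggling a control like `has_mfa` moves the probability, and therefore the premium, giving diligent corporations a direct financial incentive to demonstrate strong controls.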
It’s a poorly kept secret that nation states go beyond maintaining a defensive posture in the cybersecurity realm and regularly conduct offensive operations. These attacks are unleashed with varying motivations. Stuxnet, a joint operation between the U.S. and Israel, was conducted for national security purposes in an effort to slow Iran’s nuclear ambitions. Corporate espionage is often a driver, too, as was the case when the U.S. charged five Chinese military officials with hacking U.S. enterprises. Increasingly, though, nation state–sponsored attacks have moved into a new realm, driven primarily by an effort to undermine the credibility of another government or, in some cases, to influence public sentiment.
The U.S. has been uncharacteristically open in its accusations that state-sponsored Russian hackers actively sought to influence the U.S. presidential election by exposing email conversations via WikiLeaks and DCLeaks. The Director of National Intelligence went so far as to publicly accuse the Russian government of the attack on the DNC, and others have openly speculated that it was also behind the compromise of Hillary Clinton campaign chairman John Podesta’s inbox. In light of such aggressive and direct meddling in the political affairs of another nation, some in the intelligence community are suggesting that the U.S. should return the favor. This is a troubling notion. If we enter an era in which nations actively conduct offensive cyberattacks with the primary goal of embarrassing their foes by leaking documents online, many innocent victims will be caught in the crossfire.
It’s one thing to conduct covert cyber-espionage to get a leg up on the competition, either from a military or economic perspective, but it is an entirely different situation when private documents are being handed over to WikiLeaks. Should the corporations that drive a nation’s economy be negatively impacted when their private negotiations with government entities are made public because two nations are in a battle to embarrass one another? What about the private citizens who will also have their documents and communications revealed? Given the current political tensions, the precedent that has already been set, and the aggressive tone of the incoming U.S. administration, I do expect that 2017 will see the U.S. and other nations step into this cyber-mudslinging contest.
The media is dead, long live the media. Journalism hasn’t gone away, but it has undergone a rapid and radical transformation thanks to the Internet and an old guard that has failed to keep up. Individuals no longer wait to sit down with a newspaper to get up to speed; they have Facebook, Twitter, Instagram, news feeds, and more bombarding them with updates throughout the day. The old guard is crying foul, as these real-time mechanisms often don’t permit antiquated concepts such as fact checking. If there’s one thing that technology has taught us, it’s that complaining about the pace of change and trying to hold it back is a waste of time. When it comes to technology, you must lead, follow, or get out of the way. Anything else leaves you in danger of getting run over, and traditional media outlets are covered with tread marks.
This new world order has implications for security. Credibility today is measured in Likes, not by the name of the journalist. Convince enough people to share or retweet your post and it quickly becomes gospel truth. Social media sites are struggling to strike a balance between being accused of censorship and serving as platforms for fake news. Why should they care? Fake news isn’t just a nuisance. Social media outlets have become the “source of truth,” as newspapers once were, but they don’t come with an editorial board. Those with ill intent have recognized this and exploited it for gain by propping up worthless stocks, directing victims to online attacks, and, yes, attempting to influence elections. In 2017, major social media outlets will be forced to combat this trend head on, implementing initiatives to weed out fabricated content. They are advised to tread lightly and focus on crowd-sourcing initiatives as opposed to closed editorial boards: the truth is not always black and white, and just as journalism can be democratized, so too can censorship.
Back in 2013-2014, we saw the era of the financial data breach, with the likes of Target, Home Depot, Michaels, Neiman Marcus, and others all suffering massive thefts of debit/credit card data. The mass retail breaches declined over time, due in part to the long-overdue rollout of more secure PoS terminal technologies (EMV), but primarily because the spotlight forced retailers to invest in security in order to avoid becoming the latest poster child for security negligence. The attackers were, however, resilient and simply shifted their focus to sources of personally identifiable information (PII), which is more lucrative anyway. Healthcare bore the brunt of the attacks announced in 2015, with Anthem, Premera, and CareFirst all acknowledging that millions of records had been stolen. Financial gain was not always the goal of PII-centric attacks; the breach of the U.S. Office of Personnel Management (OPM) was likely a nation state-sponsored attack designed to gain information on cleared government workers.
Expect 2017 to signal a third data breach phase, with attackers seeking to alter, not exfiltrate, data. Such attacks raise the stakes, as the damage can be far greater and longer lasting. Stolen data is more likely to ultimately be identified, either because of indicators pointing to the exfiltration or because the stolen data is spotted in the wild. Altered data, on the other hand, can fly under the radar indefinitely, especially if the alterations are subtle. Data is meant to be manipulated, and attackers with internal access have the ability to do so, not through anomalous behavior, but by leveraging the very systems designed to alter the data in the first place. Why would attackers want to do this? Imagine attackers conducting corporate espionage and altering data to influence business decisions in a way that affects negotiations with a partner or competitor. How about changes to data used in financial analysis that would lead a trader to trade in a pattern that becomes predictable? Most concerning are nation state-sponsored attacks designed to alter political policy.
The concept of the smart home has been promised for more than a decade, but as a handful of platforms such as Apple HomeKit, Google Brillo, Samsung SmartThings, and Wink start to dominate the market and offer consumer-friendly products, it’s finally becoming a reality. Unfortunately, as noted in the IoT predictions above, we haven’t learned from our mistakes: security has largely been an afterthought for the hardware devices running our homes and offices. With a plethora of vulnerabilities in smart locks, security cameras, and DVRs (the very products designed to secure our homes and offices), it’s only a matter of time before criminals begin to exploit such flaws on a mass scale.
Burglary-as-a-Service? One thing we know about criminals is that where there is profit to be made, the void will be filled. With home automation security products now a mass-market reality, there’s profit to be made from the flaws that are all too prevalent. Not every smash-and-grab criminal has the know-how to start from scratch and scan the neighborhood for homes with vulnerable locks that can be cracked, or open webcams that can reveal when victims are away. But would they be willing to swipe a stolen credit card to download a user-friendly tool or query an online database of potential victims? Count on it.
Machine learning and artificial intelligence (AI) are the buzzwords du jour in the security industry, and for good reason. In security, we are dealing with mountains of data that continue to grow. Historically, the answer has been to shove all that data from disparate vendors into a SIEM, slap on a fancy UI, and make a user-friendly environment for analysts to sift through it. If you’re really sophisticated, you’ve taken the time to write and maintain some regex scripts to comb through the data and ensure that the interesting bits float to the top to help with prioritization. I speak to hundreds of companies every year and I have yet to speak with a single one that would suggest it’s comfortable with the coverage such a system provides. That’s where machine learning comes in.
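The hand-maintained regex approach described here typically looks something like the sketch below. The log format, patterns, and severity weights are all hypothetical examples of analyst-written triage rules, not any particular SIEM's syntax; the point is how brittle and manual this prioritization is.

```python
# Hand-rolled log triage: regex rules that float "interesting" events to
# the top. Log format and patterns are invented for illustration only.
import re

# Each rule pairs a compiled pattern with a severity weight.
RULES = [
    (re.compile(r"failed login .* root", re.I), 8),
    (re.compile(r"outbound .* port (6667|4444)\b"), 9),  # common C2 ports
    (re.compile(r"\.exe\b.*temp", re.I), 5),
]

def triage(log_lines):
    """Score each line against every rule; return highest severity first."""
    scored = []
    for line in log_lines:
        score = sum(weight for pattern, weight in RULES if pattern.search(line))
        if score:
            scored.append((score, line))
    return sorted(scored, reverse=True)

logs = [
    "2016-12-01 10:02 failed login attempt for root from 203.0.113.7",
    "2016-12-01 10:03 user alice opened report.pdf",
    "2016-12-01 10:04 outbound connection to 198.51.100.2 port 4444",
]
for score, line in triage(logs):
    print(score, line)
```

Every new threat means another hand-written rule, and anything the rules don't anticipate never floats to the top, which is exactly the coverage gap that drives interest in machine learning.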
While still in its infancy and not without its own hurdles, machine learning will revolutionize the security industry. Humans simply can’t scale, especially when dealing with terabytes of data on a daily basis, but machines don’t get sleepy or demand better snacks in the lunch room, so we’re willing to invest in perfecting the neural networks that drive them. While AI may not be ready to replace humans just yet, a number of startups in the User and Entity Behavior Analytics (UEBA) space, such as Interset, Gurucul, and Exabeam, are proving that the science is mature enough to add value. At the same time, the magic of AI is becoming increasingly accessible to those who can’t afford to hire an army of machine learning experts. Thanks to projects such as Microsoft Azure Machine Learning Studio, Amazon Machine Learning, and Google’s TensorFlow, powerful machine learning platforms — while not quite ready for the masses — are at least available to programmers.
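The core idea behind UEBA-style products can be illustrated with a deliberately tiny example: learn a per-user baseline from historical activity and flag sharp deviations. Real platforms use far richer statistical and neural models across many signals; this is a minimal sketch with made-up data, shown only to make the concept concrete.

```python
# Toy UEBA-style detector: build a per-user baseline from historical
# activity counts and flag the latest observation if it deviates sharply.
from statistics import mean, stdev

def anomalies(history, threshold=3.0):
    """Flag users whose latest count is > `threshold` std devs above baseline."""
    flagged = {}
    for user, counts in history.items():
        baseline, observed = counts[:-1], counts[-1]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            sigma = 1.0  # avoid divide-by-zero on perfectly flat baselines
        z = (observed - mu) / sigma
        if z > threshold:
            flagged[user] = round(z, 1)
    return flagged

# Daily file-access counts per user; the last entry is today's count.
history = {
    "alice": [20, 22, 19, 21, 20, 23, 21],
    "bob":   [15, 14, 16, 15, 14, 15, 480],  # sudden mass access
}
print(anomalies(history))
```

A simple z-score like this generates plenty of false positives in practice (legitimate spikes, seasonal patterns), which is precisely why the UEBA vendors named above invest in more sophisticated behavioral models.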
As with any good tool, AI will be used for good and evil. Just as IaaS platforms were quickly adopted by those spreading malware, forcing vendors to put protections in place, so too will AI platforms become a vehicle for misconduct. As discussed previously, mass quantities of data are being stolen, and savvy criminals are looking to monetize them. Stolen networking logs are of limited value on their own, but the ability to analyze those logs to identify user behaviors, such as which employees are more susceptible to social engineering attacks or hold higher access privileges, is very valuable. Say you just stole 18 million records from OPM and need to sort through them to identify connections and figure out who may be susceptible to extortion: AI is for you.
Ever heard of the Turing Test? It’s a test developed by famed computer scientist Alan Turing, designed to measure the ability of a computer to exhibit intelligent behavior. In a nutshell, passing the Turing Test requires that a human conversing with both another human and a computer not be able to distinguish between the two. While we’re not there yet, we are getting closer, with companies now actively employing chatbots to converse with humans, especially in customer support. Facebook has even released tools allowing third parties to create and deploy bots on its Messenger platform; as of November 2016, 30,000 bots had already been deployed there. And when it comes to chatbots, Facebook isn’t the only game in town: Microsoft’s Tay got off to a notoriously rocky start on Twitter, while Line and Kik are also pushing their own bot creation platforms.
As mentioned, bots are great tools for scaling customer interactions. Know what else requires end-user interaction? Social engineering attacks. Historically, social engineering attacks have been static in nature: send a well-crafted phishing email to millions of unsuspecting users, cross your fingers, and wait for the gullible few to respond. It’s not elegant, but it’s effective thanks to the easy scalability and low cost of Internet communication platforms. Chatbots open an entirely new realm for social engineering by engaging a victim in a two-way conversation, building trust and causing the user to let his or her guard down. Today, chatbots generally allow customer support personnel to be more efficient by handling basic queries and then passing the conversation off to a human being for more in-depth discussion. Expect to see the same pattern abused in 2017: chatbots deployed on fake support sites will handle the task of engaging unsuspecting victims and identifying promising targets, then hand them over to cyber-sweatshops where victims will be convinced to install the latest “patch” (aka backdoor) to cure all that ails them.