We cannot live without our computing and communications infrastructure. In addition to banking and ecommerce systems, the critical infrastructure includes such vital elements as the nation’s power grid and air traffic control systems.
Given our reliance on this technology, we also cannot live with the impact that cyberattacks are having on these systems. The Love Bug virus that struck four years ago, for example, infected nearly 4 million computers and cost $15 billion to clean up. To stop these attacks, we must develop a next-generation information technology infrastructure, as well as laws and policies that are informed by technology, and a broad-based strategy that develops a trained work force and creates cyberawareness at all levels of the general population.
Part of the problem has been an explosion of complexity. In the 1980s, there were about 5 million computers worldwide. Now, there are more than 800 million connected to the Internet. In the ’80s, we had limited connectivity; today, we are highly connected. In the ’80s, most users were technically savvy; today, they’re not.
My hypothesis is that software vulnerabilities, or bugs, were more prevalent in the ’80s than they are today. Yet the security threat then was limited compared with today, when attacks are growing more frequent and more complex by the day. Another key difference: In the ’80s one needed a PhD to understand the inner workings of an operating system and to exploit its bugs. Today, it is possible for a high school student to mount an attack by just downloading automated scripts from the Internet.
While we have made a start in responding to our vulnerability, I believe we are nowhere near solving the problem. Of course, we will never solve it entirely: the notion of a 100 percent secure system is a utopian one. But while we can never hope to stop attacks completely, we can attempt to stay one step ahead of the enemy.
The current method of securing our networks and computers is best characterized as “security through patches,” in which users download security patches and updates from the Web sites of software and tool vendors. This approach is very costly for enterprises with complex IT environments, and it addresses security at the component level instead of the system level. Further, when attacks do take place, the whole system often goes down, resulting in significant economic losses.
We need a holistic approach to system survivability and security. Under this approach, we would focus on developing technologies for systems that do not die under an attack and those that continue to operate through attacks.
Our IT infrastructure will not become resilient to attacks without a significant investment. But neither government nor industry alone will be able to provide the required level of investment in research and development and in physical deployment. We will need a partnership that involves industry, government and academia to help define and execute a research agenda for protecting the global information infrastructure. This partnership will help create a new era of sustainable computing and communications systems that are measurable, available, secure and trustworthy, or MAST. To share these innovations with industry quickly, we will need a forward-looking strategy that allows industry partners to rapidly turn new intellectual property into products and services.
Achieving the goal of developing MAST computing and communications systems demands a new way of doing research, requiring computer scientists, engineers, business and policy researchers to interact. In the security area, technology, policy and business issues are greatly intertwined. A well-defined research agenda would address problems and develop technologies across several dimensions, such as next-generation prediction and response, and resilient and self-healing networks and computing. Right now, when a system is knocked out, packets of data are lost because routers on the receiving end don’t have the intelligence to learn what was in the process of being sent to them. That can and should be fixed.
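To make the “operate through attacks” idea concrete, here is a minimal sketch in Python of one ingredient: a sender that buffers every unacknowledged packet and replays it over a backup path when the primary path is knocked out. The route names and the transmit() hook are hypothetical illustrations, not any particular router's interface.

```python
# A minimal sketch of "continue operating through attacks" at the transport
# level: unacknowledged data is buffered and replayed over a backup path when
# the primary path dies. Route names and transmit() are hypothetical.

class ResilientSender:
    def __init__(self, transmit, routes=("primary", "backup")):
        self.transmit = transmit      # callable(route, seq, payload)
        self.routes = list(routes)
        self.route = self.routes[0]
        self.unacked = {}             # seq -> payload, held until acknowledged

    def send(self, seq, payload):
        self.unacked[seq] = payload
        self.transmit(self.route, seq, payload)

    def on_ack(self, seq):
        self.unacked.pop(seq, None)   # receiver has the data; forget it

    def on_route_failure(self):
        # Self-healing step: fail over, then replay everything still in
        # flight, so nothing "in the process of being sent" is lost.
        nxt = (self.routes.index(self.route) + 1) % len(self.routes)
        self.route = self.routes[nxt]
        for seq, payload in sorted(self.unacked.items()):
            self.transmit(self.route, seq, payload)

# Example: the primary path dies before packet 2 is acknowledged, so the
# sender replays it on the backup path rather than losing it.
log = []
s = ResilientSender(lambda route, seq, data: log.append((route, seq, data)))
s.send(1, b"hello"); s.send(2, b"world")
s.on_ack(1)
s.on_route_failure()
print(log)  # [('primary', 1, b'hello'), ('primary', 2, b'world'), ('backup', 2, b'world')]
```

A genuinely self-healing network would push this intelligence into the routers themselves, but the principle is the same: data in flight is remembered until its delivery is confirmed.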
We also need technologies that offer secure access to devices and spaces; we need technologies for software measurement that can effectively reduce the number of bugs; we need technologies for software assurance that help detect malicious code; and we need technologies that guarantee security without compromising privacy. The last issue is particularly important because many people fear that to be secure we must give up some of our rights and our personal privacy.
In addition to technology development, we also need to develop tools for business risk analysis and for determining the economic implications of, and return on investments in, information technology. Such an effort should involve industry, which owns the necessary information, should be spearheaded by a neutral and trusted third party (or a consortium), and should involve academics who can perform unbiased research. Security is often considered just “overhead,” but if one were able to quantify the economic impact and the ROI of information technology, there would be far more justification for building security into products, services and systems from the very beginning, rather than adding it later.
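One widely cited way to frame such an analysis is return on security investment (ROSI), which compares the annualized loss a safeguard avoids with what the safeguard costs. A rough illustration follows; all the figures are invented for the example.

```python
# A back-of-the-envelope ROSI calculation: annualized loss expectancy (ALE)
# before a safeguard, versus the safeguard's cost. All numbers are invented.

def ale(single_loss, annual_rate):
    """Annualized loss expectancy = loss per incident x incidents per year."""
    return single_loss * annual_rate

def rosi(ale_before, mitigation_ratio, safeguard_cost):
    """Return on security investment: risk avoided, net of what it cost."""
    risk_avoided = ale_before * mitigation_ratio
    return (risk_avoided - safeguard_cost) / safeguard_cost

before = ale(single_loss=200_000, annual_rate=2.0)   # $400,000/yr exposure
print(rosi(before, mitigation_ratio=0.75, safeguard_cost=100_000))  # 2.0, i.e. 200%
```

The hard part, of course, is not the arithmetic but obtaining credible loss and incident-rate data, which is precisely why industry, which owns that information, must be at the table.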
Developing and deploying the right technologies will clearly result in better law enforcement. During the past two decades, few people have been caught for attacking our infrastructure and fewer still have been convicted. The main reason has been that we cannot pinpoint the machines that originate the attack and, even if we could, we are unable to identify the user with certainty. But imagine that we had the technology to trace packets to the original source. Also imagine that we had strong biometrics-based user authentication, which was 99.99 percent accurate, on all computers and that every user was being authenticated all the time in real time. In this scenario, it would be possible not only to trace an attack to a specific machine but also to pinpoint the user at the time of the attack.
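Packet traceback is not pure fantasy; schemes such as probabilistic packet marking have been proposed in the research literature (Savage et al., 2000). In a toy simulation with an invented four-router path, the victim can recover the path order from nothing more than the frequency of the marks it collects:

```python
# A toy simulation of probabilistic packet marking: each router on the path
# overwrites a single marking field with probability p, and the victim
# reconstructs the attack path from mark frequencies. The topology is invented.

import random
from collections import Counter

def mark_packet(path, p=0.3):
    """Each router in order may overwrite the mark; the last writer wins."""
    mark = None
    for router in path:
        if random.random() < p:
            mark = router
    return mark

path = ["attacker-edge", "core-1", "core-2", "victim-edge"]  # hypothetical
marks = Counter(mark_packet(path) for _ in range(100_000))
marks.pop(None, None)  # some packets arrive unmarked

# Routers near the victim overwrite marks from routers upstream of them, so
# the rarest marks come from routers nearest the attack's origin: sorting by
# ascending count recovers the path from attacker to victim.
print([r for r, _ in sorted(marks.items(), key=lambda kv: kv[1])])
# ['attacker-edge', 'core-1', 'core-2', 'victim-edge']
```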
Implementing this would require that our whole networking infrastructure be overhauled and that all sorts of information be tagged, stored and analyzed, which would raise serious privacy concerns. It would also require the government to mandate strong biometrics-based authentication technologies on all computers, a policy that would obviously spark controversy.
Biometrics technologies will become mainstream because the cost of sensors (such as cameras) is falling and many laptops are already equipped with them. Several other biometrics technologies are on the commercial market, and as their performance improves, they may become the standard means of access to our computing environments.
Lastly, we should keep in mind that the number of users of the IT infrastructure who are not tech savvy is increasing and so is the complexity of working with these systems. We should invest more in cyberawareness for citizens, equipping them with knowledge about threats and ways to protect their computer systems, their personal data and their privacy. This should be accomplished through a partnership of academia, industry, government, the K-12 school system and community-based organizations.
Increasing cyberawareness will not solve the cybersecurity problem, but it will certainly make a dent in the ever-increasing velocity of propagation of such attacks: worms and viruses can propagate to millions of computers in minutes. This is the least we can do while we are developing the next generation of technologies.
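As a closing illustration of that velocity, consider the classic logistic model of a random-scanning worm, in which each infected host keeps finding new victims until the vulnerable population saturates. The contact rate and population below are invented round numbers, but the resulting curve is the familiar one.

```python
# Logistic worm growth: i'(t) = beta * i * (1 - i/pop). The parameters are
# invented round numbers chosen only to illustrate the shape of the curve.

import math

def infected(t, beta=2.0, pop=1_000_000, i0=1):
    """Closed-form logistic growth, starting from i0 infected hosts."""
    x0 = i0 / pop
    x = x0 * math.exp(beta * t) / (1 - x0 + x0 * math.exp(beta * t))
    return x * pop

for minutes in (5, 10, 15, 20):
    print(minutes, "min:", round(infected(minutes)))
# Growth looks negligible at first, then the whole million-host population
# is overrun within roughly a quarter of an hour.
```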
Pradeep K. Khosla is founder of the Carnegie Mellon CyLab and dean of engineering at Carnegie Mellon University.