As information technology and the internet pervade ever more of daily life, the power of private industry and governments to control information and access the details of our private lives raises a host of ethical issues.
Take the recent controversy surrounding Cambridge Analytica after it used data harvested from unsuspecting Facebook users to target specific segments of the population for political purposes. While Facebook CEO Mark Zuckerberg has apologized, the company faced sharp Congressional criticism for moving too slowly to make privacy changes.
Facebook may be the current lightning rod for criticism, but calls to treat the vast amounts of user data with more foresight and consideration were already growing. Indeed, as digitization continues apace, ethical challenges will become increasingly important, according to Sean Brooks, a research fellow at the University of California, Berkeley's Center for Long-Term Cybersecurity.
"The fact is that the stakes of information security have grown and changed, and that is creating a lot of different types of ethical considerations," he said. "We used to talk about a line between the Internet and real life — that line is essentially gone. The things you want to do online have meaningful impact on your real life."
Increasingly, security practitioners are being forced to wrestle with a host of new issues. For instance, should network defenders aggressively investigate attackers to gain more information on them, even if it requires taking actions on systems they do not own? Or should vulnerability researchers publicly disclose an uncooperative company's vulnerabilities, even if doing so puts systems at greater risk?
"I think we recognize that there are a lot of pitfalls that threat intelligence researchers can run into," said John O'Keefe, senior corporate counsel and cyber security intelligence legal adviser at Symantec. "Many of the risks are legal, but even more are ethical."
At the recent RSA Security Conference, UC Berkeley's Brooks discussed the potential pitfalls of unrestrained use of technology for security professionals. While citizens and consumers are increasingly cognizant of the dangers, ethical dilemmas pose threats to companies as well. Many companies have been content to operate in gray areas and yet-to-be-defined ethical landscapes. Part of the problem is that individuals can weigh ethical questions as they arise, but companies need a policy, and that requires foresight.
"At a functional level, a person can respond to a set of ethical questions, but a company is going to sort of take in a set of stimuli and figure out what is best for it in any given situation," Brooks said. "That policy or process will come in the form of regulation."
Here are some of the most sensitive areas.
1. To disclose or not disclose?
In the late 1990s, the discussion of when and how to disclose vulnerabilities was fairly straightforward, pitting the researcher who found the issue against the company that needed to fix the problem.
However, as technology has become more ingrained in business operations and personal lives, the impact of vulnerabilities has changed dramatically, said UC Berkeley's Brooks. Rather than merely crashing thousands of systems, as worms once did, exploited vulnerabilities now fuel ransomware attacks such as WannaCry and NotPetya that have inflicted major financial losses.
"The discussion over vulnerabilities, breaches and security incidents has changed in tenor, because in the past, it has been more about the state of a company being jeopardized, and now you see a lot more societal impacts as the stakes of incidents are disclosed," Brooks said. "So no longer do companies have to talk about the number of records lost in a breach, but now we are talking about the impact on democracy of a lack of information security."
2. Testing security without permission
The search for vulnerabilities poses its own ethical dilemmas as well. In the past, finding a vulnerability in a production system could lead to legal threats or criminal charges. Both researchers and the companies that are the focus of research have faced ethical questions. Is it appropriate to test a system without permission, even if that system is being used as part of malicious infrastructure? Is it ethical to threaten legal action or pursue a researcher who is performing altruistic vulnerability research?
Even though many companies now give researchers explicit permission to test their services through bug bounty programs, these questions remain at the center of the debate around security research.
"I've seen cyber security researchers at other companies get close to some legal lines based on the ways they’ve accessed an attacker's infrastructure," Symantec's O'Keefe said.
3. Ethical hacking back
A step beyond unsanctioned penetration and vulnerability testing is hacking back—often euphemistically called active defense—where defenders investigate and pursue attackers to identify them and possibly take further actions against them.
In one case, for example, a potential hacker or researcher was impersonating a Symantec employee to gather information that could be used in spear phishing attacks, according to Charles Kafami, senior threat intel analyst for Symantec DeepSight Adversary Intelligence.
"I had to ask how far I could go to the edge of the envelope to get information on that individual," he said.
Last year, legislators introduced a bill in the U.S. House of Representatives to carve out some exemptions for companies to conduct limited operations against attackers. The idea is fraught with ethical issues, said UC Berkeley's Brooks. What happens, for example, if a company accidentally shuts down a foreign government's systems? Such a mistake could turn an online conflict between companies into a real-world conflict between nations, he said.
"There is a reasonable argument to add this capability to a private actor's menu of options, but there is an ethical question to what degree do we want to enable private actors," Brooks said. "These are not just conflicts between companies, but conflicts between large institutions, government agencies and other groups."
4. Data collection the right way
Cambridge Analytica highlighted the wrong way to do data collection and targeting. Yet there are arguments for allowing unauthorized collection of information online, such as scraping websites for housing prices or bank loan terms to determine whether companies are discriminating against certain customers because of race or class.
In a case brought by the American Civil Liberties Union in federal court, researchers are trying to assert their right to scrape information from websites without permission to study whether online algorithms discriminate. These types of ethical and legal questions, coming so soon after the uproar over Cambridge Analytica, underscore the fact that these issues are not easily solved.
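Research of this kind typically reduces to parsing scraped pages and aggregating a field of interest across groups. As a rough sketch of the idea only: the HTML structure, attribute names, neighborhoods, and prices below are invented for illustration, and a real study would fetch live pages and control for many more variables.

```python
# Hypothetical sketch: parse scraped listing markup and compare median
# prices across neighborhoods to look for systematic differences.
# All data and tag/attribute names here are invented placeholders.
from collections import defaultdict
from html.parser import HTMLParser
from statistics import median

SAMPLE_PAGE = """
<div class="listing" data-neighborhood="Northside" data-price="950">...</div>
<div class="listing" data-neighborhood="Northside" data-price="1050">...</div>
<div class="listing" data-neighborhood="Southside" data-price="1400">...</div>
<div class="listing" data-neighborhood="Southside" data-price="1600">...</div>
"""

class ListingParser(HTMLParser):
    """Collects (neighborhood, price) pairs from listing <div> tags."""
    def __init__(self):
        super().__init__()
        self.listings = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "div" and a.get("class") == "listing":
            self.listings.append((a["data-neighborhood"], int(a["data-price"])))

parser = ListingParser()
parser.feed(SAMPLE_PAGE)

# Group prices by neighborhood and report the median for each.
by_area = defaultdict(list)
for area, price in parser.listings:
    by_area[area].append(price)

for area, prices in sorted(by_area.items()):
    print(f"{area}: median listed price {median(prices)}")
```

The technical step is trivial; the contested part, as the ACLU case shows, is whether gathering the underlying pages without the site's permission is lawful and ethical in the first place.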
"Researchers need good legal support that is focused on risk and business," said Symantec's O'Keefe. "Whether you are a threat intel company or just the security team at a particular company, you should have legal support."