The insider threat: Cyber security
By Michael Nurse, Katherine Jones, Morgan Lane and Chloe Lawrence-Hartcher
In brief
Organisations around the world are breathing a sigh of relief. Linux, the most widely used operating system (OS) in the world, was recently saved from a potentially catastrophic cyber attack.
This is a timely reminder that open source development (which includes Linux) presents unique risks, particularly when deploying software into your system environment.
Why does this matter?
It is more likely than not that Linux is the OS behind something in your life. While this lesser-known OS is used by only approximately 6% of personal computer users, it is relied on (in various forms and distributions) for a vast number of other applications, including over 40% of web servers, 65% of mobile devices and most (if not all of the most powerful) supercomputers.
Household names including Google, NASA, Pixar, Tesla and Amazon all rely on Linux. However, its applications are not limited to tech companies: sectors including insurance, healthcare, government and financial services all make use of Linux.
What happened
A Microsoft developer discovered that a backdoor had been implanted in a ubiquitous Linux utility (the XZ Utils compression tool), which could have exposed infected distributions of Linux to arbitrary code execution on a wide scale.
It is alleged that the backdoor was implemented over a substantial period, with the attacker socially engineering their way into a position of trust within the utility's development community. The Microsoft developer, Andres Freund, has been hailed as a hero amongst tech leaders and cybersecurity researchers.
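For many organisations, the immediate practical step was simply checking whether an affected build was installed. The following is a minimal sketch only, assuming Python is available, that the xz binary is on the system PATH, and using the version numbers publicly reported as compromised:

```python
import re
import subprocess

COMPROMISED = {"5.6.0", "5.6.1"}  # releases publicly reported as backdoored

def xz_version() -> str | None:
    """Return the installed xz version string, or None if xz is not found."""
    try:
        out = subprocess.run(
            ["xz", "--version"], capture_output=True, text=True, check=True
        )
    except (OSError, subprocess.CalledProcessError):
        return None
    match = re.search(r"\d+\.\d+\.\d+", out.stdout)
    return match.group(0) if match else None

if __name__ == "__main__":
    version = xz_version()
    if version in COMPROMISED:
        print(f"WARNING: xz {version} is a known-backdoored release")
    else:
        print(f"Installed xz version: {version or 'not found'}")
```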
Open source vulnerabilities
When assessing and managing cyber risks, discussions often focus on external threat actors: someone trying to 'brute force' their way into the system.
The risks of a threat actor operating inside an organisation, or worse still, integrated into the development cycle of critical software itself, are less often discussed, despite the potentially catastrophic consequences of these modes of attack.
Many Linux tools and utilities are developed by the Linux development community. While relationships of trust develop over time, the nature of these development groups means that they are inevitably vulnerable to infiltration by motivated threat actors.
What is most interesting about this threat is the patience, social engineering and premeditation required to infiltrate the development group as a trusted member. Detecting the threat within the community would have been extremely challenging, and in this instance was ineffective. However, the open source nature of the development cycle meant that the hidden threat could be detected (albeit with difficulty).
In a closed development environment, where there may be no (or only very limited) access to source code and external auditing, the potential for a threat to remain buried may be even higher.
Engineers often depend on a network of open source code when developing proprietary work. The recently thwarted attack on Linux highlights that careful consideration should be given to these supply chain threats, whereby code has been contributed to by many sources before being integrated into proprietary work.
Given that significant IT infrastructure runs on open source platforms, the potential for deliberately buried vulnerabilities to compromise those systems is a risk that ought to be front of mind.
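One common control against this class of supply chain risk is to pin and verify the integrity of third-party code before it enters a build. The sketch below is illustrative only; the file name and digest are hypothetical placeholders, not real values:

```python
import hashlib
from pathlib import Path

# Digests recorded when each dependency was last reviewed; the entry below
# is a hypothetical placeholder.
PINNED_DIGESTS = {
    "somelib-1.2.3.tar.gz": "<sha256 digest recorded at review time>",
}

def verify_archive(archive: Path) -> bool:
    """Return True only if the archive matches the digest pinned at review."""
    expected = PINNED_DIGESTS.get(archive.name)
    if expected is None:
        return False  # unreviewed dependency: reject by default
    actual = hashlib.sha256(archive.read_bytes()).hexdigest()
    return actual == expected
```

A check like this does not prove the reviewed code was benign, but it does ensure that what enters the build is exactly what was reviewed.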
Adversarial AI: deeply buried threats
We have previously written about backdoor vulnerabilities in the context of training AI models, where we suggested that such methods could potentially be used akin to a nihilartikel (an intentional falsity inserted into a work) to provide notice of copyright infringement (read more about this in our previous article here). Of course, the same training techniques can be deployed to produce vulnerabilities in AI models.
One of the biggest challenges with such AI backdoor threats is the ability (or, more likely, the inability) to detect the presence of the threat, in circumstances where at some point the model becomes an impenetrable black box. Detecting a backdoor in an AI context, in a similar fashion to what occurred in the Linux development community, may be unrealistic without extensive audit information (in particular, about the training process).
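One modest step towards such auditability is simply recording what a model was trained on. The following sketch is illustrative only, assuming Python and a hypothetical training_data directory; it writes a manifest of training inputs and their hashes that can later be checked against approved sources:

```python
import hashlib
import json
from pathlib import Path

def build_manifest(data_dir: Path) -> dict[str, str]:
    """Map each training file (by relative path) to its SHA-256 digest."""
    manifest = {}
    for path in sorted(data_dir.rglob("*")):
        if path.is_file():
            manifest[str(path.relative_to(data_dir))] = hashlib.sha256(
                path.read_bytes()
            ).hexdigest()
    return manifest

if __name__ == "__main__":
    # 'training_data' is a hypothetical directory of training inputs
    manifest = build_manifest(Path("training_data"))
    Path("training_manifest.json").write_text(json.dumps(manifest, indent=2))
```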
Regulatory and commercial mitigations
Detecting such threats is no easy task. Ultimately, it was the transparency of the development community in this instance, as well as arguably some luck in noticing unusual resource usage, that led to the vulnerability being uncovered.
Transparency regarding the development process, and the ability to audit software development are therefore important components of any mitigation strategy.
The legal frameworks around development environments, and in particular concerning access to information, should therefore be a focus. Some of the considerations that should be included are:
- the ability to access and audit development code, and the activities of potential threat actors;
- contractual rights to information (for example, from trusted third-party suppliers), particularly when deploying bespoke software into your system environment; and
- clear, robust and documented processes throughout the development cycle (such as authorisation of software commits) and verification of relationships of trust within the development environment, as illustrated in the sketch below.
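On the last point, a simple first-pass audit, sketched below on the assumptions that Python and git are available and that commit signing is expected on the repository in question, could list recent commits whose signatures are missing or cannot be verified:

```python
import subprocess

def unverified_commits(repo: str, limit: int = 100) -> list[str]:
    """Return '<hash> <author>' for recent commits lacking a good signature."""
    out = subprocess.run(
        ["git", "-C", repo, "log", f"-{limit}", "--pretty=format:%H %G? %an"],
        capture_output=True, text=True, check=True,
    )
    flagged = []
    for line in out.stdout.splitlines():
        parts = line.split(" ", 2)
        if len(parts) < 3:
            continue
        commit, status, author = parts
        if status != "G":  # 'G' marks a valid signature in git's %G? field
            flagged.append(f"{commit} {author}")
    return flagged

if __name__ == "__main__":
    for entry in unverified_commits("."):
        print(entry)
```

Such a check does not itself establish trust, but it makes deviations from a documented signing process visible.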
Even with access to the relevant information, detecting the presence of these types of threats might add prohibitively to the cost of development, or might be ineffective at revealing a complex and cleverly buried threat.
At least in the context of AI, the ability to audit the training of AI models has become a key element of government responses to emerging AI technologies in high-risk settings, and both Australia and the United States are developing protocols that emphasise transparency and accountability in AI technologies.