Senior Lecturer in Computing & Communications Engineering Dr Mahdi Aiash describes what Internet shutdowns ordered by repressive regimes entail, and how they can be bypassed
Protesters in Iran last month, where the authorities have cut off mobile internet, WhatsApp and Instagram. Credit: AFP/Getty Images
A report recently published by the UN Human Rights Office highlights that Internet shutdowns are increasingly used by governments around the world in times of crisis to suppress protest and to hide deadly crackdowns or even military operations against civilians. Most recently, Iranian authorities cut off mobile Internet, WhatsApp and Instagram amid protests against the killing of Mahsa Amini.
What are Internet shutdowns and how do they happen?
Internet shutdowns are measures taken by governments, or by entities acting on their behalf, to intentionally disrupt access to and use of online information and communications systems. Shutdowns exist on a spectrum: from complete blackouts (where online connectivity is fully severed) and disruptions of mobile service, to throttling (deliberately slowing down connections), to selectively blocking certain platforms. Some Internet shutdowns last a few days or weeks, while others persist for months or even years.
To explain how this might happen, we need to know that the Internet (as a network) is made up of a number of Internet exchange points (IXPs): physical locations through which Internet infrastructure companies, such as Internet Service Providers (ISPs), connect with each other.
These locations sit at the “edge” of different networks and allow network providers to exchange transit traffic outside their own networks. Governments might order local ISPs to fully disconnect online access for a particular geographic region or throughout a country. Unfortunately, ISPs may comply with government orders out of fear of retribution or legal action.
The good news is that if a government does not own and control the whole Internet infrastructure, it might need other parties (such as IXP operators) to collaborate, which makes a complete Internet blackout more challenging to achieve. This is why countries like China, Russia and Iran are also developing their own “closed-off” national internets, which would allow governments to cut the country off from the rest of the world wide web.
Can people bypass the shutdowns?
Depending on the scale of the shutdown (and the country), there may be tools and techniques to bypass it:
Virtual private networks (VPNs): these allow users to reach many blocked sites by routing traffic through a proxy server based outside the censored country. A caveat is that because VPN services are publicly known, governments can block them too.
Also worth mentioning is that encryption is not enabled by default in all VPN services, and even with encryption enabled, not all of your Internet traffic will be encrypted. Domain Name System (DNS) traffic, which translates domain names like google.com or mdx.ac.uk into Internet Protocol addresses so browsers can load Internet resources, is typically not encrypted, meaning that Internet service providers (and the government) can still see which websites you are visiting even if you are using a VPN.
The good news is that there is a way to encrypt DNS traffic: configuring the browser to use the DNS over TLS (DoT) or DNS over HTTPS (DoH) protocols.
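To give a feel for what DoH does, here is a minimal sketch of a DoH lookup in Python, using Cloudflare's public resolver and its JSON wire format (the choice of resolver and the mdx.ac.uk test name are merely illustrative assumptions; browsers do the equivalent for you once DoH is switched on). Because the lookup travels inside ordinary HTTPS, an observer on the path sees a connection to the resolver but not which domain was asked for.

```python
# A minimal DNS-over-HTTPS lookup against Cloudflare's public JSON resolver.
# The DNS question and answer travel inside an ordinary HTTPS request, so the
# ISP sees encrypted traffic to the resolver rather than the queried domain.
import json
import urllib.request

def doh_lookup(domain: str, record_type: str = "A") -> list:
    """Resolve `domain` via DNS-over-HTTPS and return the answer records."""
    url = f"https://cloudflare-dns.com/dns-query?name={domain}&type={record_type}"
    request = urllib.request.Request(url, headers={"accept": "application/dns-json"})
    with urllib.request.urlopen(request, timeout=10) as response:
        reply = json.load(response)
    return [answer["data"] for answer in reply.get("Answer", [])]

if __name__ == "__main__":
    print(doh_lookup("mdx.ac.uk"))   # the university's public address records
```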
Another concern related to the use of VPNs is trust, since VPN services handle, and may keep, your data.
A good alternative to VPNs is serverless tunnels such as Ngrok-tunnel, an open-source tool that does not route traffic through, or rely upon, third-party servers, meaning governments have a much harder time blocking it.
Deep Packet Inspection (DPI) circumvention utilities such as GoodbyeDPI or Green Tunnel are another option: they bypass the Deep Packet Inspection systems that many Internet service providers use to block access to certain websites.
Why is this important?
The KeepItOn coalition, which monitors shutdown episodes across the world, documented 931 shutdowns between 2016 and 2021 in 74 countries, with some countries blocking communications repeatedly and over long periods of time. Not only do Internet shutdowns represent violations of human rights and freedoms, they also inflict social and economic damage on citizens and limit their ability to access much-needed services such as hospitals, educational institutions and public transport, which in turn deepens inequality.
Dr Mahdi Aiash, Senior Lecturer and Researcher in Cyber Security, gives his take on the recent discovery of vulnerabilities in Intel processor chips. The security flaws potentially leave millions of computers open to cyber attacks. Dr Aiash explains what this means for computer users, and shares some advice for protecting your devices from this threat.
A large team of researchers discovered two hardware-related security flaws (now known as Meltdown and Spectre) that enable attackers to gain privileged access to your system and steal sensitive data, including passwords and banking information, from your device. Initially, the flaws were thought to be relevant only to Intel processor chips. However, Intel has issued a statement indicating that the issue is not specifically a bug in Intel CPUs but rather an exploit that also applies to systems with AMD and ARM processors. The issue relates to how programs access memory, specifically information that should only be accessible to the part of the operating system (known as the kernel) that maintains the highest level of privileges. The exploits allow malicious programs to access the protected kernel memory space and “see” information that should be locked away.
The exploits in more detail
The kernel is the core of the operating system on your device (PC, desktop, mobile phone, etc.). It controls the interaction between applications and the file system (the structure that enables you to view and edit files), allowing a program to read and write files. It also manages memory and peripherals, such as your keyboard and your camera. In other words, the kernel can do everything on your device by design, so you clearly don't want it to be compromised. At the same time, for performance reasons, interactions between users' (least privileged) processes and the kernel have been made as efficient as possible through various hardware and software optimisations, and it is these optimisations that the new attacks abuse.
Generally speaking, the kernel resides in a protected part of memory, while users' processes and applications are stored in other parts of memory. Operating systems use a data structure known as a “page table” to identify and access processes in different parts of memory. Any attempt by a user process to access (read or write) the kernel part of memory should be denied by the operating system. Unfortunately, the current attacks exploit a design flaw which enables user programs with low privileges to access protected kernel memory when it is represented by the same page table. If an attacker can find a way to install an ordinary program on your computer, they could then read passwords stored in kernel memory, private encryption keys, files cached from the hard drive and more.
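To make the page-table idea concrete, here is a deliberately simplified toy model (illustrative Python, not how an operating system or the memory management unit is actually implemented): each virtual page records which physical frame it maps to and whether user-mode code is allowed to touch it, and any user-mode access to a kernel page must fault. Meltdown matters precisely because speculative execution can transiently sidestep this architectural check before the fault is raised.

```python
# A toy model of a page table with a user/supervisor protection bit. Real
# hardware walks a multi-level table inside the MMU; this sketch only shows
# the architectural rule that Meltdown-style attacks transiently bypass.

class PageFault(Exception):
    """Raised when an access violates a page's protection bits."""

# virtual page number -> (physical frame, accessible from user mode?)
PAGE_TABLE = {
    0x00: (0x1A, True),    # ordinary user page
    0x01: (0x2B, True),    # ordinary user page
    0x7F: (0x3C, False),   # kernel page mapped into the same address space
}

def translate(virtual_page: int, user_mode: bool) -> int:
    """Return the physical frame for a page, enforcing the supervisor bit."""
    frame, user_ok = PAGE_TABLE[virtual_page]
    if user_mode and not user_ok:
        raise PageFault(f"user access to kernel page {virtual_page:#x} denied")
    return frame

print(translate(0x00, user_mode=True))    # allowed: a user page
print(translate(0x7F, user_mode=False))   # allowed: the kernel reading its own page
try:
    translate(0x7F, user_mode=True)        # architecturally this must fault...
except PageFault as fault:
    print(fault)                           # ...and that is the check Meltdown evades
```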
How about multi-user systems?
All modern operating systems provide multi-user environments. One of the most basic premises of computer security in such environments is isolation: if you run somebody else's sketchy code as an untrusted process on your machine, you should restrict it to its own tightly sealed playpen; otherwise, it might peer into other processes or snoop around the computer as a whole. The newly discovered attacks break some of the most fundamental protections computers promise: they could enable users to access the information of other users sharing the same memory.
Are these attacks relevant in the Cloud?
Multi-tenancy is the key common attribute of both public and private clouds, and it applies to all three layers of a cloud: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). A tenant is any application that needs its own secure and exclusive virtual computing environment. This environment can encompass all or some select layers of enterprise architecture, from storage to user interface. All interactive applications (or tenants) have to be multiuser in nature.
Considering the multi-tenant nature of the Cloud, these attacks might seem like a disaster for Cloud providers. This is not quite true: virtualisation, one of the major technologies that makes the Cloud possible, enables providers to create isolated virtual instances of their resources for different users and could potentially mitigate the new attacks.
Recall that the Meltdown bug enables reading memory from an address space represented by the same page table. Without getting into the details of address translation inside operating systems or the mechanisms of virtualisation, you just need to understand that virtualisation in the Cloud comes (mainly) in two types:
Full virtualisation: in this type a virtual machine (VM) is created. Inside the VM, both the OS (known as the guest OS) and the hardware are virtualised by the host operating system. This means the guest can issue commands to what it thinks is actual hardware, but which is really just simulated hardware created by the host.
Containerisation: also called container-based virtualisation or application containerisation, this is an OS-level virtualisation method for deploying and running distributed applications without launching an entire VM for each application. Instead, multiple isolated systems, called containers, run on a single control host and access a single kernel.
The nature of the new exploits means that different customer VMs on the same fully-virtualised hypervisor cannot access each other's data, because those VMs do not share the same page table. But different users on the same guest instance can access each other's data, since they share the same page table of the guest OS. The same holds true for non-virtualised hardware: users under the same OS kernel can access each other's data. A quick fix for this flaw would be to use a virtual page table between the virtual tables of the different VMs, a solution that could be deployed on fully-virtualised platforms. Therefore, fully virtualised technologies are not affected, in the sense that guests cannot access host (hypervisor) memory, while container-based technologies are affected by Meltdown across container boundaries.
What can you do to protect your device?
With this type of processor-level flaw, your best bet, as usual, is to keep your PC updated with any new patches and drivers that become available. Microsoft's fix was released late on January 3; you will likely see it if you check Windows Update.
Apple issued a new support document highlighting how the recently unearthed chip vulnerabilities involving Intel, ARM and AMD processors impact nearly the entirety of Apple's product line. “All Mac systems and iOS devices are affected,” the support document reads, “but there are no known exploits impacting customers at this time. Since exploiting many of these issues requires a malicious app to be loaded on your Mac or iOS device, we recommend downloading software only from trusted sources such as the App Store.” With respect to the Spectre vulnerability, which Apple notes is “extremely difficult to exploit,” Apple says that iOS and Mac users can expect a patch relatively soon.
Google says that Android smartphones and tablets that have the latest security updates are protected from the flaws. To check for available updates, go to Settings, System and System Update. Unfortunately, a significant portion of Android users are stuck on older, unsupported versions of the operating system, and could therefore remain vulnerable. Google, however, has moved to reassure concerned users by saying, “On the Android platform, exploitation has been shown to be difficult and limited on the majority of Android devices.”
ARM said that patches had already been shared with the company's partners.
AMD said it believes there “is near zero risk to AMD products at this time.”
The tip of an iceberg
The bad news is that the Kernel Page Table Isolation fix (potentially using a virtual page table) makes everything run slower on Intel x86 processors, so if your computer appears slower than it should be, it's because it is. Furthermore, Microsoft's testing revealed that a “small number” of antivirus programs make unsupported calls into Windows kernel memory, which result in blue screen of death (BSOD) errors. To avoid causing widespread BSOD problems, Microsoft opted to push its January 3 security updates only to devices running antivirus software from firms that have confirmed their products are compatible. “If you have not been offered the security update, you may be running incompatible antivirus software and you should follow up with your software vendor,” the company explains. “Microsoft has been working closely with antivirus software partners to ensure all customers receive the January Windows security updates as soon as possible.”

Unlike recent cyber incidents, these attacks exploit a processor-level flaw, which makes it more challenging for software security solutions such as antivirus to detect them. Ironically, attacks on the lowest components of computing systems could also have a devastating impact on the latest technologies such as the Cloud, Smart Cities and the Internet of Things. These two observations highlight that the attack surface is wider than traditional security solutions can cover, and stress the need for a more comprehensive “defence in depth” approach to security.
The collision of two black holes, a tremendously powerful event detected for the first time ever by the Laser Interferometer Gravitational-Wave Observatory, or LIGO, is seen in this still from a computer simulation. Photo by the SXS (Simulating eXtreme Spacetimes) Project.
Dr Giuseppe Primiero (pictured right), Senior Lecturer in Computing Science and a member of the Foundations of Computing research group at Middlesex University, and Professor Viola Schaffonati, of the Politecnico di Milano, Italy, are working on a philosophical analysis of the methodological aspects of computer science.
In February 2016 science hit the news again: the merger of a binary black hole system had been detected by the Advanced LIGO twin instruments, one in Hanford, Washington, and the other 3,000 km away in Livingston, Louisiana, USA. The signal, detected in September 2015, confirmed the gravitational waves famously predicted by Einstein's general theory of relativity. The phenomenon had also been numerically modelled on super-computers since at least 2005, a typical example of a computational experiment.
Computational experiments
The term ‘computational experiment’ is used to refer to a computer simulation of a real scientific experiment. An easy example: to test some macroscopic property of a liquid that is hard to obtain, or where equipment is too expensive to purchase (e.g. in an educational setting), a simulation is a more feasible solution than the real experiment. Computational experiments are widely used in disciplines such as chemistry, biology and the social sciences. As experiments are the essence of scientific methodology, computer simulations indirectly raise interesting questions: how do computational experiments affect results in the other sciences? And what kind of scientific method do computational experiments support?
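Before turning to those questions, here is a deliberately tiny computational experiment (a toy of my own, not one of the examples above, and with an arbitrary decay probability): rather than observing a real radioactive sample, we simulate the decay of a virtual one and “measure” its half-life.

```python
# A deliberately small computational experiment: instead of measuring the decay
# of a real radioactive sample, we simulate one and 'observe' the simulated
# population. The decay probability per step is an arbitrary illustrative value.
import random

def simulate_decay(atoms: int = 10_000, p_decay: float = 0.05,
                   steps: int = 60, seed: int = 0) -> list:
    """Return the number of surviving atoms after each time step."""
    rng = random.Random(seed)      # a fixed seed keeps this 'experiment' repeatable
    survivors = atoms
    history = []
    for _ in range(steps):
        survivors -= sum(1 for _ in range(survivors) if rng.random() < p_decay)
        history.append(survivors)
    return history

if __name__ == "__main__":
    counts = simulate_decay()
    # The simulated half-life is the first step at which half the atoms are gone;
    # theory predicts roughly 13-14 steps for a 5% decay chance per step.
    half_life = next(t for t, n in enumerate(counts, start=1) if n <= 5_000)
    print("simulated half-life (steps):", half_life)
```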
The questions above highlight the much older problem of the status and methodology of computer science (CS) itself. Today we are acquainted with CS as a well-established discipline. Given the pervasiveness of computational artefacts in everyday life, we can even consider computing a major actor in academic, scientific and social contexts. But the status enjoyed today by CS has not always been granted. CS, since its early days, has been a minor god. At the beginning, computers were instruments for the ‘real sciences’: physics, mathematics and astronomy needed to perform calculations that had reached levels of complexity unfeasible for human agents.
Computers were also instruments for social and political aims: the US army used them to compute ballistic tables and, notoriously, mechanical and semi-computational methods were at work in solving cryptographic codes during the Second World War.
The UK and the US were pioneers in the transformation that brought CS into the higher education system: the first degree in CS was established at the University of Cambridge Computer Laboratory in 1953, set up by the mathematics faculty to meet the demand for competencies in mechanical computation applied to scientific research. It was followed by Purdue University in 1962. The academic birth of CS is thus the result of creating technical support for other sciences, rather than the acknowledgement of a new science. Subsequent decades brought a quest for the scientific status of the discipline. The role of computer experiments as they are used to support results in other sciences, a topic which has been investigated at length, seems to perpetuate this ancillary role of computing.
A science?
But what, then, is the scientific value of computational experiments? Can they be used to assert that computing is a scientific discipline in its own right? The natural sciences have a codified method of investigation: a problem is identified; a predictive and testable hypothesis is formulated; a study to test the hypothesis is devised; analyses are performed and the results of the test are evaluated; on their basis, the hypothesis and the tests are modified and repeated; finally, a theory that confirms or refutes the hypothesis is formulated. One important consideration is therefore the applicability of the so-called hypothetico-deductive method to CS. This, in turn, hides several smaller issues.
The first concerns which ‘computational problems’ would fit such a method. Intuitively, when one refers to the use of computational techniques to address some scientific problem, the latter can come from a variety of backgrounds. We might be interested in computing the value of some equations to test the stability of a bridge. Or we might be interested in knowing the best-fit curve for the increase of some disease, economic behaviour or demographic factor in a given social group. Or we might be interested in investigating a biological entity. These cases highlight the old role of computing as a technique to facilitate and speed up the process of extracting data, and possibly to suggest correlations, within a well-specified scientific context: computational physics, chemistry, econometrics, biology.
But besides the understanding of a ‘computational experiment’ as the computational study of a non-computational phenomenon, the computational sciences themselves offer problems that can be addressed computationally: how stable is your internet connection? How safe is your installation process when external libraries are required? How consistent are the data extracted from some sample? To name just a few. These problems (or their formal models) are investigated through computational experiments, but they are less easily identified with scientific problems.
The second issue: how do we formulate a good hypothesis for a computational experiment? Scientific hypotheses depend on the system of reference and, when translating them to a computational setting, we have to be careful that the relevant properties of the system under observation are preserved. An additional complication arises when the observation itself concerns a computational system, which might include a formal system, a piece of software, or implemented artefacts. Each of the levels of abstraction pertaining to computing reveals a specific understanding of the system, and they can all be taken as essential in the definition of a computing system. Is a hypothesis on such systems then admissible if formulated at only one such level of abstraction, e.g. considering a piece of code but not its running instances? And is such a hypothesis still well formulated if it tries instead to account for all the different aspects that a computational system presents?
Finally, an essential characteristic of scientific experiments is their repeatability. In computing, this criterion can be understood and interpreted in different ways: should an experiment be repeatable under exactly the same circumstances for exactly the same computational system? Should it be repeatable for a whole class of systems of the same type? How do we characterise typability in the case of software? And in the case of hardware?
Irregularities
All the above questions underpin our understanding of what a computational experiment is. Although we are used to expecting some scientific uniformity in the notion of experiment, the case of CS evades such strict criteria. First of all, several sub-disciplines categorise experiments in very specific ways, each not easily applicable by the research group next-door: testing a piece of software for requirements satisfaction is essentially very different from testing a robotic arm for identifying its own positioning.
Experiments in the computational domain do not exhibit the same regularities that can be observed in the physical, biological and even social sciences. The notion of an experiment is often confounded with the more basic and domain-related activity of performing tests. For example, model-based testing is a well-defined formal and theoretical method that differs from computer simulation in its admissible techniques, recognised methodology, assumptions and verifiability of results. Accordingly, the process of checking a hypothesis that characterises the scientific method described above is often intended simply as testing or checking some functionality of the system at hand, while in other cases it carries a much stronger theoretical meaning. Here the notion of repeatability (of an experiment) merges with that of replicability (of an artefact), a distinction that has already appeared in the literature (Drummond).
Finally, benchmarking is understood as the objective performance evaluation of computer systems under controlled conditions: does it in some sense characterise the quality of computational experiments, or does it simply identify the computational artefacts that can validly be subject to experimental practices?
A philosophical analysis
The philosophical analysis of the methodological aspects of CS, of which the above is an example, is a growing research area. The set of research questions that need to be approached is large and diversified. Among them, the analysis of the role of computational experiments in the sciences is not new; less well understood is the methodological role of computer simulations within CS itself, rather than as a support method for testing hypotheses in other sciences.
The Department of Computer Science at Middlesex University is leading both research and teaching activities in this area, in collaboration with several European partners, including the Dipartimento di Elettronica, Informazione e Bioingegneria at Politecnico di Milano in Italy, which offers similar activities and has a partnership with Middlesex through the Erasmus+ network.
In an intense one-week visit, we drafted initial research questions and planned future activities. The following questions represent a starting point for our analysis:
Do experiments on computational artefacts (e.g. a simulation of a piece of software) differ in any significant way from experiments performed on engineering artefacts (like a bridge), social phenomena (a migration) or physical phenomena (fluid dynamics)?
Does the nature of computational artefacts influence the definition of a computational experiment? In other words, is running an experiment on a computer significantly different from running it in a possibly smaller-scale but real-world scenario?
Does the way in which a computational experiment is implemented influence the validity and generality of its results? In what way do the coding, the language used and the choice of algorithms affect the results? (A small sketch after these questions illustrates the point.)
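As a tiny illustration of the last question (a toy example of mine, not one of the cases studied here), even the order in which a program adds up the same numbers can change the result it reports, because floating-point arithmetic is not associative:

```python
# The same mathematical sum computed three ways. In exact arithmetic all three
# equal 100000.0; in floating point the algorithm chosen changes the answer,
# which is exactly the kind of implementation detail the last question targets.
import math

values = [1e16, 1.0, -1e16] * 100_000    # sums to exactly 100000 on paper

naive       = sum(values)                # left-to-right accumulation
reordered   = sum(sorted(values))        # same numbers, summed in a different order
compensated = math.fsum(values)          # compensated summation tracks the lost bits

print(naive, reordered, compensated)     # the three answers need not agree
```

The point is not the numerics themselves, but that an apparently neutral coding choice becomes part of the experimental apparatus.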
These questions require considering the different types of computer simulations, as well as other types of computational experiments, along with the specificities of the problems treated. For example, an agent-based simulation of a messaging system poses problems and offers results that are inherently different from testing a privacy-monitoring system for social networks with real users. The philosophical analysis of the methodological aspects of CS has an impact not only on the discussion within the discipline, but also on how its disciplinary status is acknowledged by a larger audience.
Nowadays we are getting used to reading about the role of computational experiments in scientific research and how computer-based results affect the progress of science. It is about time that we become clear about their underlying methodology, so that we might say with some degree of confidence what their real meaning is.