In: Computer Science
You have been named the network administrator for a large insurance firm. Your responsibility is to come up with a plan for the network infrastructure, which includes the number of servers, network security devices, desktops, laptops, and handheld devices. The plan should be carefully crafted to show all the necessary details for deploying the infrastructure (LANs, firewalls, routers, switches, IDS and sensors, servers, database servers, and the addressing scheme), including specifics about the configurations. An understanding of the security issues the company faces in conducting its business must be reflected in the security aspects of the infrastructure (you should justify why each security measure is taken and how it will affect the enterprise business).

The insurance firm has ten offices (satellite sites) located in ten different cities around the United States. The headquarters is located close to two of the other offices. The enterprise network consists of a main site, a backup site, and ten satellite sites. All the satellite sites are connected to the main site as individual networks. The satellite sites deal with different specialized types of insurance, such as house insurance (3 sites), auto insurance (4 sites), and commercial insurance (3 sites). The sites need access to the central database of the firm in addition to their own databases. Each of the sites has roughly one hundred employees, except for the headquarters, which has 350. Ten percent of the employees are agents who actively go out into the field for claims or client recruitment. Given the nature of the business, all employees require Internet access. Email services are required as well. The company uses customized software to interface with the databases and has a web interface for customers to interact with the company. Each side of the business has its own database; however, the management, which is located at the headquarters, must have access to all the information. The Human Resources department has its own database, and so does the payroll department. Each of these departments has a staff of 15 people.

Your plans should contain diagrams that describe the networks at the satellite sites and at the headquarters, and how they are interconnected, including the backup site. The CTO of the firm has also decided to use virtualization and has asked you to look into it. You are supposed to give a detailed plan regarding which portions of the network should be targeted for virtualization and why. You are also supposed to lay out a plan for how to proceed with this technology and show how it would affect the infrastructure that you previously designed without virtualization.
Answer:-
Although system virtualization is not a new paradigm, the way in which it is used in modern system architectures provides a powerful platform for system building, the advantages of which have only been realized in recent years as a result of the rapid deployment of commodity hardware and software systems. In principle, virtualization involves the use of an encapsulating software layer (a Hypervisor or Virtual Machine Monitor) that surrounds or underlies an operating system and provides the same inputs, outputs, and behavior that would be expected from an actual physical device. This abstraction means that an ideal Virtual Machine Monitor provides the software with an environment equivalent to the host system, but decoupled from the hardware state. Because a virtual machine is not dependent on the state of the physical hardware, multiple virtual machines may be installed on a single set of hardware. The decoupling of physical and logical states gives virtualization inherent security benefits. However, the design, implementation, and deployment of virtualization technology have also opened up novel threats and security issues which, while not particular to system virtualization, take on new forms in relation to it. Reverse engineering becomes easier due to introspection capabilities, as encryption keys, security algorithms, low-level protection, intrusion detection, or antidebugging measures can be more easily compromised. Furthermore, associated technologies such as virtual routing and networking can create challenging issues for security, intrusion control, and associated forensic processes. We explain the security considerations, describe some of the methods by which security breaches can occur, and offer recommendations for how virtualized environments can best be protected. Finally, we offer a set of generalized recommendations that can be applied to achieve secure virtualized implementations.
Recent improvements in commodity hardware and software have spurred an ever-increasing set of uses. In essence, system virtualization is the use of an encapsulating software layer that surrounds or underlies an operating system and provides the same inputs, outputs, and behavior that would be expected from physical hardware. The software that performs this is called a Hypervisor, or Virtual Machine Monitor (VMM). This abstraction means that an ideal VMM provides an environment to the software that appears equivalent to the host system, but is decoupled from the hardware state. For system virtualization, these virtual environments are called Virtual Machines (VMs), within (or upon) which operating systems may be installed. Since a VM is not dependent on the state of the physical hardware, multiple VMs may be installed on a single set of hardware.
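As an illustrative sketch of this one-host, many-VMs relationship, the short Python snippet below lists the guest VMs defined on a single physical machine. It assumes a KVM/QEMU host with the libvirt-python bindings installed; the connection URI qemu:///system is an assumption about the local setup, not something specified in the text.

```python
# Illustrative sketch only: enumerate guest VMs co-resident on one physical host.
# Assumes a KVM/QEMU hypervisor and the libvirt-python bindings (pip install libvirt-python).
import libvirt

# Connect to the local system hypervisor; the URI is an assumption about the setup.
conn = libvirt.open("qemu:///system")

try:
    # Each returned domain is a guest OS decoupled from the physical hardware state.
    for dom in conn.listAllDomains():
        state, _ = dom.state()
        running = "running" if state == libvirt.VIR_DOMAIN_RUNNING else "not running"
        print(f"{dom.name()}: {running}")
finally:
    conn.close()
```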
Motivations for System Virtualization

System virtualization is widely used for a variety of applications, such as the consolidation of physical servers [Scott et al. 2010], isolation of guest OSs, and software debugging [Bratus et al. 2008]. There are many other uses to which system virtualization lends itself, and many different motivators for adopting system virtualization technologies. System virtualization has attracted a great deal of attention in recent years because of several technological trends. These trends include increasing commodity operating system complexity, the rising cost of supporting hardware and software systems, and the availability of inexpensive, powerful, and flexible commodity hardware [Ivanov and Gueorguiev 2008; Wlodarz 2007]. A modern commodity OS such as Windows or Linux is very complex (tens of millions of lines of code (LOC) in the latest desktop OSs), and this results in a much larger vulnerability surface than can be easily or provably secured [Franklin et al. 2008a; Seshadri et al. 2007]. Furthermore, an OS is a single point of failure for everything (processes and data) running on it. The difficulty of securing a single complicated point of failure represents a security risk for the data and processes on the system. Consequently, with ever-decreasing commodity hardware costs, most organizations achieve significant savings by consolidating several underutilized physical systems as VMs onto fewer hardware platforms.
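To make the consolidation motivation concrete, here is a minimal back-of-the-envelope sketch. The utilization figures and the host-capacity headroom are assumptions invented for illustration, not data from the text; the point is only to show how underutilized servers map onto a smaller number of virtualization hosts.

```python
# Minimal sketch: estimate hosts needed after consolidating underutilized servers as VMs.
# All numbers below are illustrative assumptions, not figures from the text.
import math

# Average CPU utilization (fraction of one physical server) of each existing server.
server_utilizations = [0.10, 0.15, 0.08, 0.20, 0.12, 0.05, 0.18, 0.10]

# Target: pack VMs so each consolidation host stays below 70% utilization (headroom assumption).
host_capacity = 0.70

total_load = sum(server_utilizations)
hosts_needed = math.ceil(total_load / host_capacity)

print(f"Before: {len(server_utilizations)} physical servers")
print(f"After (rough estimate): {hosts_needed} virtualization host(s)")
```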
Implications of Virtualization

By removing the dependency of operating systems on a system’s physical state, system virtualization allows multiple operating systems to be installed on a VMM, and thus multiple operating system VMs (called guest operating systems) can be installed on each physical system. Allowing multiple VMs on the same hardware offers many advantages. Near-complete isolation between guest operating systems on the same hardware protects against an OS becoming a single point of failure. It also allows OSs to be consolidated from different machines, which is necessary to reduce system underutilization and maintain efficiency of operation. This abstraction from the hardware state allows not only multiple operating systems to coexist on the same hardware, but also one VMM to run on multiple different networked physical systems concurrently. By utilizing a VMM to mediate between the OS and the hardware, virtualization changes the one-to-one mapping of OSs to hardware into a many-to-many mapping. Although many real-world systems implement this model only loosely, as a VM does not usually run on multiple systems concurrently, allowing a VM to be migrated seamlessly across multiple physical systems while running has improved the offerings for high-performance and high-availability systems and cloud computing.
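As a hedged sketch of the live-migration capability mentioned above, a running VM could be moved between two physical hosts roughly as follows. This assumes a KVM/QEMU environment with the libvirt-python bindings and shared storage between the hosts; the connection URIs and the domain name "guest-vm-01" are hypothetical.

```python
# Illustrative sketch of live VM migration between two physical hosts.
# Assumes KVM/QEMU with libvirt-python, shared storage, and reachable hosts;
# the URIs and the domain name "guest-vm-01" are hypothetical.
import libvirt

src = libvirt.open("qemu+ssh://source-host/system")
dst = libvirt.open("qemu+ssh://destination-host/system")

try:
    dom = src.lookupByName("guest-vm-01")
    # VIR_MIGRATE_LIVE keeps the guest running while its state moves to the new
    # host, realizing the many-to-many mapping of OSs to hardware described above.
    dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
finally:
    src.close()
    dst.close()
```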
Requirements for System Virtualization

The requirements for system virtualization are defined and discussed in detail in Popek and Goldberg’s Formal Requirements for Virtualizable Third-Generation Architectures [Popek and Goldberg 1974]. The requirements and definitions given there are still used to define virtualization, and their criteria are used to assess VMMs, although the criteria have become broader (as we discuss when we describe hybrid virtualization strategies later). Virtualization as we describe it in this section is classical virtualization as defined by Popek and Goldberg [1974] and used in Adams and Agesen [2006]. In a later section of this work, we describe methods of virtualization that do not strictly fit these requirements. To explain the requirements for a classically virtualizable CPU architecture, we need to define two properties an instruction can have, and three properties of a virtualized architecture. A more in-depth discussion of the virtualizability of CPU architectures can be found in Adams and Agesen [2006].
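A compact way to state the Popek and Goldberg condition is that an architecture is classically virtualizable with trap-and-emulate when every sensitive instruction is also privileged, i.e., traps when executed in user mode. The toy Python sketch below expresses that set-containment check; the instruction names are illustrative, not a listing of any real ISA.

```python
# Toy sketch of the Popek-Goldberg criterion: an architecture is classically
# virtualizable if its sensitive instructions are a subset of its privileged
# (trapping) instructions. Instruction names here are illustrative only.

def classically_virtualizable(sensitive: set[str], privileged: set[str]) -> bool:
    # Every sensitive instruction must trap so the VMM can intercept and emulate it.
    return sensitive.issubset(privileged)

# Hypothetical architecture A: all sensitive instructions trap -> virtualizable.
arch_a = classically_virtualizable(
    sensitive={"LOAD_CR", "HALT"},
    privileged={"LOAD_CR", "HALT", "IO_OUT"},
)

# Hypothetical architecture B: a sensitive instruction executes silently in user
# mode without trapping -> not classically virtualizable.
arch_b = classically_virtualizable(
    sensitive={"READ_FLAGS"},
    privileged={"HALT"},
)

print(arch_a, arch_b)  # True False
```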
The x86 architecture is not fully virtualizable [Adams and Agesen 2006; Garfinkel et al. 2007; Rose 2004; Rosenblum and Garfinkel 2005], so various methods have been developed to achieve the widespread virtualization of these systems now in use. The most important of these are paravirtualization, binary translation, and hardware-assisted virtualization [Advanced Micro Devices 2008]. For a more in-depth discussion of each of these methods and their relative performance, refer to Adams and Agesen [2006]. Binary translation is very similar to emulation, and involves running guest code (both OS and application code) on an interpreter that handles any sensitive instructions correctly. However, this method can carry a heavy performance overhead, which optimizations are used to overcome. Examples of such optimizations include switching between virtualized and translated instructions depending on the privilege level of the code as seen by the guest VM, and adaptive binary translation, which changes the code being translated in an effort to improve performance. In some cases adaptive binary translation can outperform a classical VMM, because intercepting (trapping) instructions causes a context switch that consumes many clock cycles on current hardware, a cost that binary translation can minimize [Adams and Agesen 2006]. Paravirtualization involves porting guest operating systems so that they do not use nonprivileged sensitive instructions, but instead use instructions that cooperate better with the VMM [Barham et al. 2003; Rose 2004]. The necessary guest OS modifications have been publicly released for the Linux kernel [Yamahata 2008], and the process has been discussed (but no working port released, for intellectual property reasons) for Windows XP [Barham et al. 2003]. There are, however, various paravirtualized device drivers available for Windows XP that come with some commercial and open-source products. Paravirtualized device drivers are designed to operate using nonsensitive instructions from the guest OS (in which they are installed), and to interface with the VMM in a way that minimizes context switching while remaining transparent to the guest OS.
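To make the binary-translation idea concrete, here is a toy Python sketch in which guest instructions are scanned and sensitive ones are redirected to a VMM handler instead of executing directly. The instruction set, handler names, and the VMM model are all invented for illustration; real binary translators such as the one described by Adams and Agesen operate on actual x86 machine code and shadow hardware state.

```python
# Toy model of binary translation: sensitive guest instructions are rewritten so
# they invoke VMM handlers instead of touching privileged hardware state directly.
# The instruction names and handlers are invented for illustration only.

SENSITIVE = {"READ_CR3", "WRITE_CR3", "HALT"}  # instructions the VMM must intercept

class ToyVMM:
    def __init__(self):
        self.shadow_cr3 = 0  # virtualized copy of privileged state

    def emulate(self, instr: str) -> str:
        # Emulate the effect of a sensitive instruction on virtual (shadow) state.
        if instr == "READ_CR3":
            return f"value={self.shadow_cr3}"
        if instr == "WRITE_CR3":
            self.shadow_cr3 += 1  # stand-in for updating shadow page tables
            return "shadow CR3 updated"
        return "guest halted (virtual CPU paused)"

def translate_and_run(guest_code: list[str], vmm: ToyVMM) -> None:
    for instr in guest_code:
        if instr in SENSITIVE:
            # Translated path: redirect to the VMM instead of executing directly.
            print(f"{instr:>10} -> VMM: {vmm.emulate(instr)}")
        else:
            # Nonsensitive instructions run unmodified at native speed.
            print(f"{instr:>10} -> executed directly")

translate_and_run(["ADD", "READ_CR3", "MOV", "WRITE_CR3", "HALT"], ToyVMM())
```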