Although "it is impossible to prepare a response to something that has not been considered in advance," we still work on resilience engineering (RE), because a system is resilient to the degree to which it rapidly and effectively protects its critical capabilities from disruption caused by adverse events and conditions. RE can provide society and its critical infrastructure with means, methods, and technologies to overcome unprecedented events with as little harm as possible and to come out stronger and better prepared afterwards. The six questions below are answered in turn.
Q1. There are 4 R's that are critical to resilience engineering. Explain them with some examples.
The four R's of resilience engineering are Recognition, Resistance, Recovery, and Reinstatement. Recognition is the ability to detect indications of an adverse event, for example an intrusion detection system flagging unusual login activity. Resistance is the ability to resist the event and keep critical services running, for example firewalls and redundant servers absorbing a denial-of-service attack. Recovery is the ability to restore critical services quickly after a disruption, for example restoring a database from backups. Reinstatement is the ability to return the entire system to normal operation, for example bringing all services back online after the attack has been analyzed and the vulnerability patched.
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
Q2. There are 4 strategies to increase system resilience given in the book. What strategy is most applicable to you as a software programmer and why?
The resilience of a system is a judgment of how well that system can maintain the continuity of its critical services in the presence of disruptive events, such as equipment failure and cyberattacks. This view encompasses three ideas: the system has critical services, the continuity of those services must be maintained, and disruptive events will inevitably occur. Resilience engineering places more emphasis on limiting the number of system failures that arise from external events such as operator errors or cyberattacks; it assumes that failures cannot be avoided entirely and that good reliability engineering practice has already been used to minimize failures arising from internal faults.
As a software programmer, the strategy most applicable to me is reducing the probability of the occurrence of an external event that might trigger system failures. Programmers rarely control the environment a system runs in, but they do control how the software responds to hostile or malformed input: secure coding, input validation, and defenses such as rate limiting make attacks and input-related failures much less likely to occur in the first place. A minimal sketch of this idea follows.
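As a rough illustration of this strategy (my own sketch, not taken from the book; the thresholds, the in-memory store, and the function names are assumptions), a simple login rate limiter makes brute-force attacks far less likely to succeed:

```python
import time
from collections import defaultdict

MAX_ATTEMPTS = 5       # allowed failed logins per window (assumed policy)
WINDOW_SECONDS = 300   # 5-minute window (assumed policy)

_failures = defaultdict(list)  # username -> timestamps of recent failures

def attempt_allowed(username: str) -> bool:
    """Refuse further login attempts once the failure budget is exhausted."""
    now = time.time()
    recent = [t for t in _failures[username] if now - t < WINDOW_SECONDS]
    _failures[username] = recent
    return len(recent) < MAX_ATTEMPTS

def record_failure(username: str) -> None:
    """Call this after every failed login so the limiter can track attacks."""
    _failures[username].append(time.time())
```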
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
Q3. How do you think resilience will be different for technical systems vs sociotechnical systems?
Resilience in Sociotechnical Systems
The importance of a holistic approach to resilience is evident in the ecological and socio-ecological literature. Here we make the case that the same is true of sociotechnical systems. At a low level, technical systems should be predictable, reliable, and robust. For example, a car is designed to perform under a set of environmental conditions (temperature, road surface, and impact force, for example), each of which has a predetermined range of expected values that the car can accommodate. A car is designed to be efficient and cost effective, not to be resilient. However, when a car is combined with a driver, the combined "car-and-driver" system shows resilience in the face of unexpected external events. In this combined system, the car contributes resistance, and the driver contributes an ability to change to accommodate influences. Engineers are generally adept at designing systems that resist or recover in response to influences, whereas designing systems that change to accommodate influences presents the greatest challenge. Some researchers have tried to address the challenge of designing technical systems that can accommodate changing conditions and found it necessary to take a sociotechnical approach.
Some researchers insist that technical systems designers and engineers have a moral obligation to consider the wider social systems that they design for or within. More generally, if we expand the boundaries of the technical systems we consider, most designed or engineered systems either contain or interact with a variety of people, organizations, economies, and other entities that are often best understood on a sociotechnical basis.
The sociotechnical systems that stakeholders must analyze, understand, and improve are often partially designed and partially evolved. This requires stakeholders to grapple with system complexity that they only partly understand and to interpret emergent behavior that was not anticipated. The function and structure of such systems is perspective dependent; in other words, two stakeholders might each view the function and structure of the system from different perspectives. The perspectives that stakeholders adopt can be determined by where they draw the system boundary, what entities they attend to within and beyond that boundary, the details they perceive in those entities, and the scales that they are considering (e.g., timescales and spatial scales). We refer to all this as the stakeholders' "level of abstraction," as their view of any given part of the system (both its structure and its function) can be more or less abstract depending on a range of different factors, including their domain knowledge and their roles and responsibilities with respect to the system.
In sociotechnical systems theory, systems are grouped into three types: primary work systems, for example subsystems of an organization; whole organization systems; and macrosocial systems, such as national institutions. A similar approach can be used to understand the resilience of a sociotechnical system, combining individual stakeholder perspectives on different types of system at different levels of abstraction.
Resilience in Technical Systems
Resilience is a system's ability to recover from a fault and maintain the persistence of service dependability in the face of faults. Resilience engineering, then, starts from accepting the reality that failures happen and, through engineering, builds a way for the system to continue despite those failures.
Basically, a system is resilient if it continues to carry out its mission in the face of adversity (i.e., if it provides required capabilities despite excessive stresses that can cause disruptions). Being resilient is important because no matter how well a system is engineered, reality will sooner or later conspire to disrupt the system. Residual defects in the software or hardware will eventually cause the system to fail to correctly perform a required function or cause it to fail to meet one or more of its quality requirements (e.g., availability, capacity, interoperability, performance, reliability, robustness, safety, security, and usability). The lack or failure of a safeguard will enable an accident to occur. An unknown or uncorrected security vulnerability will enable an attacker to compromise the system. An external environmental condition (e.g., loss of electrical supply or excessive temperature) will disrupt service.
Due to these inevitable disruptions, availability and reliability by themselves are insufficient, and thus a system must also be resilient. It must resist adversity and provide continuity of service, possibly under a degraded mode of operation, despite disturbances due to adverse events and conditions. It must also recover rapidly from any harm that those disruptions might have caused. As in the old Timex commercial, a resilient system "can take a licking and keep on ticking."
However, system resilience is more complex than the preceding explanation implies. System resilience is not a simple Boolean function (i.e., a system is not merely resilient or not resilient). No system is 100 percent resilient to all adverse events or conditions. Resilience is always a matter of degree. System resilience is typically not measurable on a single ordinal scale. In other words, it might not make sense to say that system A is more resilient than system B.
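To make the idea of continuity under a degraded mode of operation concrete, here is a minimal sketch (my own illustration, not from any book): the system retries a failing dependency with backoff, then falls back to last-known-good data rather than failing outright. The function fetch_price, the cached values, and the backoff policy are all assumptions.

```python
import time

CACHED_PRICES = {"ACME": 101.5}  # stale but usable data for degraded mode

def fetch_price(symbol: str) -> float:
    """Stand-in for a real service call that may fail under adversity."""
    raise ConnectionError("primary service unavailable")

def resilient_fetch_price(symbol: str, retries: int = 3) -> float:
    """Retry the primary service, then degrade gracefully to cached data."""
    for attempt in range(retries):
        try:
            return fetch_price(symbol)
        except ConnectionError:
            time.sleep(2 ** attempt)  # exponential backoff before retrying
    return CACHED_PRICES[symbol]      # degraded mode: serve last-known value

print(resilient_fetch_price("ACME"))  # prints 101.5 despite the outage
```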
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
Q4. Out of the three levels of security, i.e. infrastructure security, application security, and operational security, we can only deal with application security. Although it is the area we work on, the overall security of even our application depends upon the organizational security policies. Do you agree? Comment using an example.
Yes, I agree.
An organizational security policy is a set of rules or procedures that is imposed by an organization on its operations to protect its sensitive data.
A key element of any organization's security planning is an effective security policy. A security policy must answer three questions: who can access which resources in what manner?
A security policy is a high-level management document to inform all users of the goals of and constraints on using a system. A policy document is written in broad enough terms that it does not change frequently. The information security policy is the foundation upon which all protection efforts are built. It should be a visible representation of priorities of the entire organization, definitively stating underlying assumptions that drive security activities. The policy should articulate senior management's decisions regarding security as well as asserting management's commitment to security. To be effective, the policy must be understood by everyone as the product of a directive from an authoritative and influential person at the top of the organization.
People sometimes issue other documents, called procedures or guidelines, to define how the policy translates into specific actions and controls. Below, we examine how to write a useful and effective security policy.
Purpose
Security policies are used for several purposes, including the following:
recognizing sensitive information assets
clarifying security responsibilities
promoting awareness for existing employees
guiding new employees
Audience
A security policy addresses several different audiences with different expectations. That is, each group (users, owners, and beneficiaries) uses the security policy in important but different ways.
Users
Users legitimately expect a certain degree of confidentiality, integrity, and continuous availability in the computing resources provided to them. Although the degree varies with the situation, a security policy should reaffirm a commitment to this requirement for service.
Users also need to know and appreciate what is considered acceptable use of their computers, data, and programs. For users, a security policy should define acceptable use.
Owners
Each piece of computing equipment is owned by someone, and the owner may not be a system user. An owner provides the equipment to users for a purpose, such as to further education, support commerce, or enhance productivity. A security policy should also reflect the expectations and needs of owners.
Beneficiaries
A business has paying customers or clients; they are beneficiaries of the products and services offered by that business. At the same time, the general public may benefit in several ways: as a source of employment or by provision of infrastructure. For example, you may not be a client of BellSouth, but when you place a telephone call from London to Atlanta, you benefit from BellSouth's telecommunications infrastructure. In the same way, the government has customers: the citizens of its country, and "guests" who have visas enabling entry for various purposes and times. A university's customers include its students and faculty; other beneficiaries include the immediate community (which can take advantage of lectures and concerts on campus) and often the world population (enriched by the results of research and service).
To varying degrees, these beneficiaries depend, directly or indirectly, on the existence of or access to computers, their data and programs, and their computational power. For this set of beneficiaries, continuity and integrity of computing are very important. In addition, beneficiaries value confidentiality and correctness of the data involved. Thus, the interests of beneficiaries of a system must be reflected in the system's security policy.
Balance Among All Parties
A security policy must relate to the needs of users, owners, and beneficiaries. Unfortunately, the needs of these groups may conflict. A beneficiary might require immediate access to data, but owners or users might not want to bear the expense or inconvenience of providing access at all hours. Continuous availability may be a goal for users, but that goal is inconsistent with a need to perform preventive or emergency maintenance. Thus, the security policy must balance the priorities of all affected communities.
Contents
A security policy must identify its audiences: the beneficiaries, users, and owners. The policy should describe the nature of each audience and their security goals. Several other sections are required, including the purpose of the computing system, the resources needing protection, and the nature of the protection to be supplied. We discuss each one in turn.
Purpose
The policy should state the purpose of the organization's security functions, reflecting the requirements of beneficiaries, users, and owners. For example, the policy may state that the system will "protect customers' confidentiality or preserve a trust relationship," "ensure continual usability," or "maintain profitability." There are typically three to five goals, such as:
Promote efficient business operation.
Facilitate sharing of information throughout the organization.
Safeguard business and personal information.
Ensure that accurate information is available to support business processes.
Ensure a safe and productive place to work.
Comply with applicable laws and regulations.
The security goals should be related to the overall goal or nature of the organization. It is important that the system's purpose be stated clearly and completely because subsequent sections of the policy will relate back to these goals, making the policy a goal-driven product.
Protected Resources
A risk analysis will have identified the assets that are to be protected. These assets should be listed in the policy, in the sense that the policy lays out which items it addresses. For example, will the policy apply to all computers or only to those on the network? Will it apply to all data or only to client or management data? Will security be provided to all programs or only the ones that interact with customers? If the degree of protection varies from one service, product, or data type to another, the policy should state the differences. For example, data that uniquely identify clients may be protected more carefully than the names of cities in which clients reside.
Nature of the Protection
The asset list tells us what should be protected. The policy should also indicate who should have access to the protected items. It may also indicate how that access will be ensured and how unauthorized people will be denied access. All the standard security mechanisms are at your disposal in deciding which controls should protect which objects. In particular, the security policy should state what degree of protection should be provided to which kinds of resources.
Characteristics of a Good Security Policy
If a security policy is written poorly, it cannot guide the developers and users in providing appropriate security mechanisms to protect important assets. Certain characteristics make a security policy a good one.
Coverage
A security policy must be comprehensive: It must either apply to or explicitly exclude all possible situations. Furthermore, a security policy may not be updated as each new situation arises, so it must be general enough to apply naturally to new cases that occur as the system is used in unusual or unexpected ways.
Durability
A security policy must grow and adapt well. In large measure, it will survive the system's growth and expansion without change. If written in a flexible way, the existing policy will be applicable to new situations. However, there are times when the policy must change (such as when government regulations mandate new security constraints), so the policy must be changeable when it needs to be.
An important key to durability is keeping the policy free from ties to specific data or protection mechanisms that almost certainly will change. For example, an initial version of a security policy might require a ten-character password for anyone needing access to data on the Sun workstation in room 110. But when that workstation is replaced or moved, the policy's guidance becomes useless. It is preferable to describe assets needing protection in terms of their function and characteristics, rather than in terms of specific implementation. For example, the policy on Sun workstations could be reworded to mandate strong authentication for access to sensitive student grades or customers' proprietary data. Better still, we can separate the elements of the policy, having one policy statement for student grades and another for customers' proprietary data. Similarly, we may want to define one policy that applies to preserving the confidentiality of relationships, and another protecting the use of the system through strong authentication.
Realism
The policy must be realistic. That is, it must be possible to implement the stated security requirements with existing technology. Moreover, the implementation must be beneficial in terms of time, cost, and convenience; the policy should not recommend a control that works but prevents the system or its users from performing their activities and functions. Policy writers are sometimes seduced by what is fashionable in security at the time of writing, so it is important to make economically worthwhile investments in security, just as for any other careful business investment.
Usefulness
An obscure or incomplete security policy will not be implemented properly, if at all. The policy must be written in language that can be read, understood, and followed by anyone who must implement it or is affected by it. For this reason, the policy should be succinct, clear, and direct.
As an example, the organizational security policies required by an evaluated system configuration might include the following:
Enforcing the access rules prevents a user from accessing information that is of higher sensitivity than the user is operating at and prevents a user from causing information to be downgraded to a lower sensitivity.
The method for classification of information is made based on criteria that are defined by the organization. This classification is usually based on relative value to the organization and its interest to limit dissemination of that information. The determination of classification of information is outside the scope of the IT system; the IT system is expected only to enforce the classification rules, not to determine classification. The method for determining clearances is also outside the scope of the IT system and is based on the trust that the organization places in individual users and to some extent on the individual's role within the organization.
Example
Internet Security Policy
The Internet does not have a governing security policy per se, because it is a federation of users. Nevertheless, the Internet Society drafted a security policy for its members [PET91]. The policy contains the following interesting portions.
Users are individually responsible for understanding and respecting the security policies of the systems (computers and networks) they are using. Users are individually accountable for their own behavior.
Users have a responsibility to employ available security mechanisms and procedures for protecting their own data. They also have a responsibility for assisting in the protection of the systems they use.
Computer and network service providers are responsible for maintaining the security of the systems they operate. They are further responsible for notifying users of their security policies and any changes to these policies.
Vendors and system developers are responsible for providing systems which are sound and which embody adequate security controls.
Users, service providers, and hardware and software vendors are responsible for cooperating to provide security.
Technical improvements in Internet security protocols should be sought on a continuing basis. At the same time, personnel developing new protocols, hardware or software for the Internet are expected to include security considerations as part of the design and development process.
These statements clearly state to whom they apply and for what each party is responsible.
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
Q5. There are 4 types of security threats, namely interception, interruption, modification, and fabrication. As a software programmer, which threat is the one you may work to avoid and how?
A threat to a computing system is a set of circumstances that has the potential to cause loss or harm. To see the difference between a threat and a vulnerability, picture a wall holding water back. The water to the left of the wall is a threat to the man on the right of the wall: the water could rise, overflowing onto the man, or it could stay beneath the height of the wall, causing the wall to collapse. So the threat of harm is the potential for the man to get wet, get hurt, or be drowned. For now, the wall is intact, so the threat to the man is unrealized.
However, we can see a small crack in the wall—a vulnerability that threatens the man's security. If the water rises to or beyond the level of the crack, it will exploit the vulnerability and harm the man.
There are many threats to a computer system, including human-initiated and computer-initiated ones. We have all experienced the results of inadvertent human errors, hardware design flaws, and software failures. But natural disasters are threats, too; they can bring a system down when the computer room is flooded or the data center collapses from an earthquake, for example.
A human who exploits a vulnerability perpetrates an attack on the system. An attack can also be launched by another system, as when one system sends an overwhelming set of messages to another, virtually shutting down the second system's ability to function. Unfortunately, we have seen this type of attack frequently, as denial-of-service attacks flood servers with more messages than they can handle.
How do we address these problems? We use a control as a protective measure. That is, a control is an action, device, procedure, or technique that removes or reduces a vulnerability. In the wall example, the man is placing his finger in the hole, controlling the threat of water leaks until he finds a more permanent solution to the problem. In general, we can describe the relationship among threats, controls, and vulnerabilities in this way: a threat is blocked by control of a vulnerability.
Much of security engineering is devoted to describing a variety of controls and understanding the degree to which they enhance a system's security.
To devise controls, we must know as much about threats as possible. We can view any threat as being one of four kinds: interception, interruption, modification, and fabrication. Each threat exploits vulnerabilities of the assets in computing systems.
Together, these four classes describe the kinds of problems we might encounter.
As a software programmer, the threat I would work to avoid is interception.
Protection Against Interception
To intercept confidential information on the internet, criminals use any means available: viruses, Trojans, keyloggers, spyware, and other malware. Not all antivirus software is able to detect a threat in time, and even an advanced user can hardly recognize such a threat promptly. Meanwhile, data keeps leaking to criminals, who can use it as they wish, even against you. The following measures help defend against interception.
Secure server connection
Server connections are secured using TLS. Authorization uses hashing algorithms, which means your password is never stored or transferred over the network in the open.
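A minimal sketch of what salted password hashing can look like (my own illustration using Python's standard library, not any vendor's actual implementation; the iteration count and salt length are assumed values):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Store only a salt and a derived hash, never the plaintext password."""
    salt = os.urandom(16)                       # random per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
```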
Encryption key protection
Data encryption keys are protected by a secret phrase that provides an additional level of security. Only if you know the secret phrase can you decrypt the encryption keys and restore data to its original state. The secret phrase is known only to the user and is not stored or transferred anywhere.
Data protection when typing
To keep your password, secret phrase and other data safe from interception when typing, VIPole offers an on-screen keyboard with an additional privacy mode.
Instant messaging security
Every message is encrypted on the user's computer before it is sent over the network. Messages are stored on the server only in encrypted form.
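As a sketch of encrypt-before-send (my own illustration using the third-party cryptography package, not VIPole's actual protocol; key management here is deliberately simplified):

```python
from cryptography.fernet import Fernet  # third-party 'cryptography' package

key = Fernet.generate_key()  # in a real system, negotiated per conversation
cipher = Fernet(key)

# Encrypt on the sender's machine; only ciphertext crosses the network
# and is stored on the server.
token = cipher.encrypt(b"meet at 10:00")

# A recipient holding the same key recovers the plaintext.
assert cipher.decrypt(token) == b"meet at 10:00"
```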
Secure voice and video calls
Voice and video calls are put through dedicated secure channels that are opened at the beginning of each call. Data is exchanged via these channels in an encrypted form.
Secure file transfer
Each file is broken down into packets before sending. Each packet is encrypted with an individual encryption key and saved in secure storage on the user's computer. Access to received and sent files is only available via encrypted virtual drives.
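A sketch of per-packet encryption (again my own illustration, not VIPole's implementation; the chunk size is arbitrary, and returning keys alongside ciphertext is a simplification since real systems keep keys in protected storage):

```python
from cryptography.fernet import Fernet  # third-party 'cryptography' package

CHUNK_SIZE = 64 * 1024  # illustrative packet size

def encrypt_in_chunks(data: bytes) -> list[tuple[bytes, bytes]]:
    """Split data into packets and encrypt each with its own key."""
    packets = []
    for i in range(0, len(data), CHUNK_SIZE):
        key = Fernet.generate_key()              # individual key per packet
        token = Fernet(key).encrypt(data[i:i + CHUNK_SIZE])
        packets.append((key, token))             # keys belong in secure storage
    return packets

def decrypt_chunks(packets: list[tuple[bytes, bytes]]) -> bytes:
    """Reassemble the original data from per-packet keys and ciphertexts."""
    return b"".join(Fernet(key).decrypt(token) for key, token in packets)

assert decrypt_chunks(encrypt_in_chunks(b"x" * 200_000)) == b"x" * 200_000
```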
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
Q6. There are 10 design guidelines for security engineering given in the book. Which do you think is the most important and should be used in your everyday software development practices?
Design guidelines for security engineering
Design guidelines encapsulate good practice in secure systems design. Design guidelines serve two purposes: they raise awareness of security issues in a software engineering team, and they can be used as the basis of a review checklist that is applied during the system validation process. Design guidelines here are applicable during software specification and design.
Base decisions on an explicit security policy
Define a security policy for the organization that sets out the fundamental security requirements that should apply to all organizational systems.
Avoid a single point of failure
Ensure that a security failure can only result when there is more than one failure in security procedures. For example, use both password-based and question-based authentication.
Fail securely
When systems fail, for whatever reason, ensure that sensitive information cannot be accessed by unauthorized users even though normal security procedures are unavailable.
Balance security and usability
Try to avoid security procedures that make the system difficult to use. Sometimes you have to accept weaker security to make the system more usable.
Log user actions
Maintain a log of user actions that can be analyzed to discover who did what. If users know about such a log, they are less likely to behave in an irresponsible way.
Use redundancy and diversity to reduce risk
Keep multiple copies of data and use diverse infrastructure so that an infrastructure vulnerability cannot be the single point of failure.
Specify the format of all system inputs
If input formats are known, then you can check that all inputs are within range so that unexpected inputs don't cause problems.
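For instance, a minimal input-validation sketch (the order-ID format and the quantity limit are made-up assumptions; the point is that every input is checked against a known format before use):

```python
import re

ORDER_ID_PATTERN = re.compile(r"^ORD-\d{4}-\d{5}$")  # assumed format

def validate_order_id(raw: str) -> str:
    """Accept only inputs that match the specified format."""
    if not ORDER_ID_PATTERN.match(raw):
        raise ValueError(f"malformed order id: {raw!r}")
    return raw

def parse_quantity(raw: str) -> int:
    """Reject malformed or out-of-range quantities instead of passing them on."""
    if not raw.isdigit():
        raise ValueError(f"quantity must be a non-negative integer: {raw!r}")
    value = int(raw)
    if not 1 <= value <= 10_000:  # assumed business limit
        raise ValueError(f"quantity out of range: {value}")
    return value

print(validate_order_id("ORD-2024-00042"), parse_quantity("3"))
```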
Compartmentalize your assets
Organize the system so that assets are in separate areas and users only have access to the information that they need rather than all system information.
Design for deployment
Design the system to avoid deployment problems.
Design for recoverability
Design the system to simplify recoverability after a successful attack.
I think avoiding a single point of failure is the most important guideline and the one that should be used in everyday software development practice. If access depends on a single security mechanism, one failure in that mechanism is enough to compromise the whole system; requiring more than one independent failure, for example by combining password-based and question-based authentication, greatly reduces that risk. A minimal sketch of this idea is given below.
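In this sketch (my own illustration; the user record layout and the use of PBKDF2 are assumptions), access is granted only when both independent checks pass, so a single compromised factor is not a single point of failure:

```python
import hashlib
import hmac
import os

def _derive(secret: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 200_000)

def make_user(password: str, answer: str) -> dict:
    """Create a user record storing only salted hashes of both factors."""
    salt = os.urandom(16)
    return {"salt": salt,
            "password_hash": _derive(password, salt),
            "answer_hash": _derive(answer.strip().lower(), salt)}

def authenticate(user: dict, password: str, answer: str) -> bool:
    """Require BOTH factors; one compromised credential is not enough."""
    password_ok = hmac.compare_digest(_derive(password, user["salt"]),
                                      user["password_hash"])
    answer_ok = hmac.compare_digest(_derive(answer.strip().lower(), user["salt"]),
                                    user["answer_hash"])
    return password_ok and answer_ok

user = make_user("s3cret", "Rex")
assert authenticate(user, "s3cret", "rex")        # both factors correct
assert not authenticate(user, "s3cret", "wrong")  # one factor alone fails
```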