Question

In: Computer Science

Describe the OSI Model?

Describe the TCP/IP (Internet Model)?

How do you determine the cost of a system?

What are some of the application areas where a LAN can be an effective tool?

What are some of the features offered by Windows 2003/2008?

What are the advantages and disadvantages of ATM?

What are the advantages of frame relay?

What are the functions performed by a network operating system?

What are the SDLC phases?


Expert Solution

1) OSI model

  • OSI stands for Open Systems Interconnection. It is a reference model that describes how information from a software application on one computer moves through a physical medium to a software application on another computer.
  • The OSI model consists of seven layers, and each layer performs a particular network function.
  • The OSI model was developed by the International Organization for Standardization (ISO) in 1984, and it is now considered an architectural model for inter-computer communication.
  • The OSI model divides the whole communication task into seven smaller, manageable tasks, each assigned to a particular layer.
  • Each layer is self-contained, so the task assigned to it can be performed independently.

2) TCP/IP

TCP/IP, or the Transmission Control Protocol/Internet Protocol, is a suite of communication protocols used to interconnect network devices on the internet. TCP/IP can also be used as a communications protocol in a private network (an intranet or an extranet).

The entire internet protocol suite -- a set of rules and procedures -- is commonly referred to as TCP/IP, although many other protocols are included in the suite.

TCP/IP specifies how data is exchanged over the internet by providing end-to-end communications that identify how it should be broken into packets, addressed, transmitted, routed and received at the destination. TCP/IP requires little central management, and it is designed to make networks reliable, with the ability to recover automatically from the failure of any device on the network.

The two main protocols in the internet protocol suite serve specific functions. TCP defines how applications can create channels of communication across a network. It also manages how a message is assembled into smaller packets before they are then transmitted over the internet and reassembled in the right order at the destination address.

IP defines how to address and route each packet to make sure it reaches the right destination. Each gateway computer on the network checks this IP address to determine where to forward the message.
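The forwarding decision described above can be sketched as a longest-prefix match against a routing table. This is a simplified illustration using Python's standard `ipaddress` module; the table entries and interface names (`eth0`, `eth1`, `eth2`) are hypothetical.

```python
# A sketch of how a gateway chooses where to forward a packet:
# longest-prefix match against a (made-up) routing table.
import ipaddress

ROUTES = {
    ipaddress.ip_network("10.0.0.0/8"): "eth1",
    ipaddress.ip_network("10.1.0.0/16"): "eth2",   # more specific prefix wins
    ipaddress.ip_network("0.0.0.0/0"): "eth0",     # default route
}

def next_hop(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    # Among all matching networks, pick the one with the longest prefix.
    best = max((net for net in ROUTES if addr in net),
               key=lambda net: net.prefixlen)
    return ROUTES[best]

print(next_hop("10.1.2.3"))   # eth2 (the /16 beats the /8)
print(next_hop("8.8.8.8"))    # eth0 (only the default route matches)
```

Real routers do the same comparison in hardware or with specialized data structures (tries), but the rule is the one shown here: the most specific matching prefix decides the outgoing interface.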

TCP/IP model layers

TCP/IP functionality is divided into four layers, each of which includes specific protocols.

  • The application layer provides applications with standardized data exchange. Its protocols include the Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), Post Office Protocol 3 (POP3), Simple Mail Transfer Protocol (SMTP) and Simple Network Management Protocol (SNMP).
  • The transport layer is responsible for maintaining end-to-end communications across the network. TCP handles communications between hosts and provides flow control, multiplexing and reliability. The transport protocols include TCP and User Datagram Protocol (UDP), which is sometimes used instead of TCP for special purposes.
  • The network layer, also called the internet layer, deals with packets and connects independent networks to transport the packets across network boundaries. The network layer protocols are the IP and the Internet Control Message Protocol (ICMP), which is used for error reporting.
  • The network access layer (often loosely called the physical layer) consists of protocols that operate only on a link -- the network component that interconnects nodes or hosts in the network. The protocols in this layer include Ethernet for local area networks (LANs) and the Address Resolution Protocol (ARP).
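The transport and application layers described above can be seen in a few lines of code. This is a minimal sketch using Python's standard `socket` API: TCP (transport layer) provides the reliable, ordered byte stream over loopback, and the echoed message plays the role of an application-layer payload.

```python
# Minimal TCP echo over loopback: the application hands bytes to TCP,
# which handles segmentation, ordering and reliable delivery; IP
# handles addressing (here the loopback address 127.0.0.1).
import socket
import threading

def echo_server(server_sock: socket.socket) -> None:
    conn, _addr = server_sock.accept()      # wait for one client
    with conn:
        data = conn.recv(1024)              # TCP delivers bytes in order
        conn.sendall(data)                  # echo the payload back

# Listening socket on an OS-chosen ephemeral port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=echo_server, args=(server,))
t.start()

# Client side: connect, send, receive.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello tcp")
    reply = client.recv(1024)

t.join()
server.close()
print(reply.decode())  # the echoed application-layer payload
```

Note that the code never touches packets, checksums or retransmission: those transport- and network-layer details are exactly what the layering hides from the application.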

3) How do you determine the cost of a system?

  • Involve Business Stakeholders in the Software Estimation Process

Involving stakeholders early in the software estimation process helps to define more accurately what is important in the software development cycle. This helps both the business leaders and the technology team gain a shared understanding of the project. It also helps to hold everyone involved accountable to the initial estimate.

Years ago I worked with a client who demanded a certain system functionality. Several people told the client that an alternative approach was better. The client refused to listen. What was missing was that no one explained the costs to the client. But once the costs for the function were understood, the client’s demands changed and the team could arrive at a reasonable solution. The give and take of estimating will help to drive more realistic expectations from the beginning.

  • Ask, “Why Do Most Software Estimation Projects Fail?”

The answer is poor requirements and weak leadership. The technology itself is rarely the cause of project failure; great technology and developers will fail without good requirements and leadership. Beware of this red flag: leadership that cannot explain the end goals or the business drivers behind the software during estimating sessions with the project team.

Developers can't develop quality code when requirements are not clear. When developers run into roadblocks or have questions, leaders, especially the product owner, need to make quick and informed decisions. Otherwise they put their project timeline and budget at stake.

When teams lack clear guidance on requirements, or if leadership is not available to answer questions or clear roadblocks, any software estimate must include contingency plans to accommodate the uncertainty in the software development cycle. It's better to uncover these issues during the estimation process than to allow them to derail progress down the road.

  • Break the Requirements Down to Increase Transparency in Software Estimation

Begin by breaking the requirements down far enough so that each requirement can be built in a short time by a single developer. Any requirement that cannot be broken down may not be understood well enough for accurate estimation (with some exceptions).

This process will help the stakeholders understand what it will take to develop the software. Demanding more details upfront may seem to add time to project estimation, but the transparency it creates usually shortens the software estimation, improves quality and shortens the approval time through better understanding amongst stakeholders. Breaking the project down will also allow for the team to set milestones and measure against them during the project, contributing to project success.

  • Tie the Estimate to Reality

Determine what you are going to measure against. This is a significant challenge. Find a piece of the project that everyone can agree is well defined and can be estimated to (ideally) about half of a day in development time. By establishing this baseline, it becomes easier to estimate the other work against this baseline. This will also allow the team to quickly re-estimate the work once the development starts and the team has worked through a portion of the project.

  • Build the Right Team

There's more to a team than just developers. They need a good supporting cast and good requirements that hold everyone accountable. In order for the project to provide business value, users need to be able to use the application effectively. This means the project must have good business analysts to write good requirements, which drive efficient development. The user experience is more than just how the application looks: include a designer on the project who can provide an interface that is not only pleasant but also facilitates meaningful user flow.

An area too often overlooked or skimped on when building a team is the resources necessary to build an effective QA testing plan. The team should be prepared to test early and often with a thorough and repeatable process to identify problems with the application while the code is fresh in the developer’s mind. The plan needs to ensure that the delivered code fulfills the requirements.

An approach that I have found to work in building the team is to use ratios of these roles to developers. A single BA can support only so many developers before the developers are coding faster than the BA can write requirements, an all-too-common problem that wastes developers' resources. The same is true of user experience and quality assurance. Developers who have a good supporting cast are better equipped to create good code that meets requirements, budget and scheduling needs.

  • Remember Why the Product Owner Matters

This is the single most important person on the project. The empowered product owner can focus on the project and make the important decisions. The product owner drives the requirements, arbitrates differences between the business and technology and prioritizes the work to allow the team to deliver business value as quickly as possible.

An overwhelmed product owner who splits time between too many duties imperils the project. This is another area where, if the product owner does not have clear authority and time to dedicate to the project, red flags should wave as expensive contingency plans add to estimated costs.

  • Good Software Estimation Metrics Should Reveal Problems Sooner

If the estimate is based on the developers' velocity, it is easier to determine whether a team is developing at the expected pace. Metrics allow you to identify team members who are not producing as expected. Rarely do all teams or developers produce at exactly the same speed; some will work faster, some more slowly. If they are too slow, you can begin to investigate where the friction occurs: requirements, design, architecture, scope creep or lacking technical skills. Comparing actual velocity to original estimates allows stakeholders to identify budget misalignment sooner and take corrective action when needed. It is far better to identify a problem early and have time to fix it than to explain why you missed your goals after the fact.
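The velocity comparison above is simple arithmetic. Here is a minimal sketch with hypothetical numbers (the point totals and sprint counts are invented for illustration):

```python
# Compare actual team velocity against the estimate to surface
# budget or schedule misalignment early. All figures are made up.
estimated_points_per_sprint = 40
completed = [34, 31, 36]          # actual points per finished sprint
remaining_points = 200

actual_velocity = sum(completed) / len(completed)
drift = (estimated_points_per_sprint - actual_velocity) / estimated_points_per_sprint

sprints_left_planned = remaining_points / estimated_points_per_sprint
sprints_left_actual = remaining_points / actual_velocity

print(f"actual velocity: {actual_velocity:.1f} pts/sprint ({drift:.0%} below plan)")
print(f"sprints remaining: planned {sprints_left_planned:.1f}, at current pace {sprints_left_actual:.1f}")
```

Run every sprint, a check like this turns "we feel behind" into a concrete number of extra sprints (and therefore extra cost) while there is still time to adjust scope or staffing.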

  • Small Things Matter

Once you have estimated the requirements and assigned them to developers and their supporting cast, there are still details that can affect the estimate.

When will the first release be ready?
How long after a developer joins a project will it take for them to be 100% productive?
How much development can a lead developer do when they are leading a team of two developers vs. a team of seven?
How will holidays and vacations affect the project?
If the project must be completed by a certain date, how many developers will it take?
What will it cost to develop a feature?

4) What are some of the application areas where a LAN can be an effective tool?

LAN stands for Local Area Network.

A LAN serves a limited geographical area. I assume you are asking about the applications of a LAN. "LAN" describes the scale of a network, ranging from a home network up to an enterprise network.

Typically, a LAN is based on private IP addresses.
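The private-addressing point can be checked directly with Python's standard `ipaddress` module: typical LAN addresses fall in the RFC 1918 private ranges (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16).

```python
# Classify a few addresses as private (usable inside a LAN) or public.
import ipaddress

for addr in ("192.168.1.10", "10.0.0.5", "172.16.4.2", "8.8.8.8"):
    ip = ipaddress.ip_address(addr)
    print(addr, "private" if ip.is_private else "public")
```

The first three addresses report as private, which is why the same ranges can be reused in every home or office LAN behind NAT, while 8.8.8.8 is a public internet address.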

Application/uses of LAN

  • Home networks (Wi-Fi)
  • Within shops
  • Within offices
  • Within enterprises
  • Within universities

5) What are some of the features offered by Windows 2003/2008?

1. Virtualization
Although it will not be available with the initial launch of Server 2008, Microsoft's Hyper-V hypervisor-based virtualization technology promises to be a star attraction of Server 2008 for many organisations.

Although some 75 percent of large businesses have started using virtualization, only an estimated 10 percent of servers are running virtual machines. This means the market is still immature. For Windows shops, virtualization using Server 2008 will be a relatively low-cost, low-risk way to dip a toe in the water.

At the moment, Hyper-V lacks the virtualized infrastructure support virtualization market leader VMware can provide. Roy Illsley, senior research analyst at U.K.-based Butler Group, noted that Microsoft is not as far behind as many people seem to think, however. "Don't forget Microsoft's System Center, which is a fully integrated management suite and which includes VM Manager. Obviously it only works in a Wintel environment, but if you have Server 2008 and System Center, you have a pretty compelling proposition.

"What Microsoft is doing by embedding virtualization technology in Server 2008 is a bit like embedding Internet Explorer into Windows," said Illsley. "This is an obvious attempt to get a foothold into the virtualization market."

At launch, Microsoft is unlikely to have a product similar to VMware's highly popular VMotion (which enables administrators to move virtual machines from one physical server to another while they are running), but such a product is bound to be available soon after.

2. Server Core
Many server administrators, especially those used to working in a Linux environment, instinctively dislike having to install a large, feature-packed operating system to run a particular specialized server. Server 2008 offers a Server Core installation, which provides the minimum installation required to carry out a specific server role, such as a DHCP, DNS or print server. From a security standpoint, this is attractive: fewer applications and services on the server make for a smaller attack surface. In theory, there should also be less maintenance and management, with fewer patches to install, and the whole server could take up as little as 3 GB of disk space, according to Microsoft. This comes at a price: there's no upgrade path back to a "normal" version of Server 2008 short of a reinstall. In fact, there is no GUI at all; everything is done from the command line.

3. IIS
IIS 7, the Web server bundled with Server 2008, is a big upgrade from the previous version. "There are significant changes in terms of security and the overall implementation which make this version very attractive," said Barb Goldworm, president and chief analyst at Boulder, Colorado-based Focus Consulting. One new feature getting a lot of attention is the ability to delegate administration of servers (and sites) to site admins while restricting their privileges.

6) What are the advantages and disadvantages of ATM?

I am assuming ATM refers to Asynchronous Transfer Mode.

Following are the benefits or advantages of Asynchronous Transfer Mode (ATM):
➨It is optimized to transport voice, data and video, i.e. a single network for everything. It handles mixed traffic: real-time and non-real-time traffic types.
➨It is easy to integrate with LAN, MAN and WAN network types, i.e. seamless integration.
➨It is QoS-oriented and high-speed-oriented.
➨It enables efficient use of network resources using the bandwidth-on-demand concept.
➨It uses a simplified network infrastructure.

Following are the disadvantages of Asynchronous Transfer Mode (ATM):
➨Overhead of the cell header (5 bytes per 53-byte cell).
➨Complex mechanisms are used to achieve QoS.
➨Congestion may cause cell losses.
➨An ATM switch is very expensive compared to LAN hardware. Moreover, an ATM NIC is more expensive than an Ethernet NIC.
➨As ATM is a connection-oriented technology, the time required for connection setup and teardown is large compared to the time the connection is in use.
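The cell-header disadvantage is easy to quantify: every 53-byte ATM cell carries a 5-byte header, so a fixed fraction of link capacity is lost before any higher-layer headers are counted. A quick calculation:

```python
# Fixed per-cell overhead of ATM: 5 header bytes per 53-byte cell.
CELL_SIZE = 53      # bytes per ATM cell
HEADER = 5          # header bytes per cell
PAYLOAD = CELL_SIZE - HEADER

overhead = HEADER / CELL_SIZE
print(f"payload per cell: {PAYLOAD} bytes")
print(f"header overhead: {overhead:.1%}")   # roughly 9.4% of raw bandwidth
```

This is sometimes called the "cell tax": about 9.4% of the raw line rate is consumed by headers regardless of traffic type, one reason ATM compares unfavorably with Ethernet on cost per delivered bit.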

7) What are the advantages of frame relay?

Following are the benefits or advantages of Frame Relay:
➨It offers higher speeds. Because no error detection is incorporated, the overhead is less, and it offers higher throughput compared to X.25.
➨The bandwidth can be allocated dynamically as needed.
➨The network overhead is low due to the incorporation of a congestion control mechanism.
➨It allows bursty data which does not have a fixed data rate.
➨It operates at layer 1 (physical) and layer 2 (data link). Hence it is easy to integrate with devices having layer 3 (network) functionality.

8) What are the functions performed by a network operating system?

(i) Ensuring the security of data, information and hardware components through the provision of passwords and access privileges.

(ii) Providing program-testing routines.

(iii) Passing control from one job (program) to another under a system priority when more than one application program occupies main storage.

(iv) Allocating memory and loading programs.

(v) Furnishing a complete record of all that happens during processing.

(vi) Reporting errors which occur during execution in the network.

(vii) Controlling the input and output devices as well as the server/client machines in the network.

(viii) Scheduling and loading programs or subprograms in order to provide a continuous sequence of processing.

(ix) Providing a quick means of communication between the computer operators and each program.

(x) Protecting hardware, software and data from misuse.

9) What are the SDLC phases?

#1) Requirement Gathering and Analysis

During this phase, all the relevant information is collected from the customer to develop a product as per their expectations. Any ambiguities must be resolved in this phase.

The business analyst and project manager set up a meeting with the customer to gather all the information: what the customer wants to build, who the end users will be, and what the purpose of the product is. A core understanding of the product is very important before building it.

For example, a customer wants an application that involves money transactions. In this case, the requirements have to be clear: what kinds of transactions will be done, how they will be done, in which currency, etc.

Once requirement gathering is done, an analysis is done to check the feasibility of developing the product. In case of any ambiguity, a call is set up for further discussion.

Once the requirements are clearly understood, the SRS (Software Requirement Specification) document is created. This document should be thoroughly understood by the developers and reviewed by the customer for future reference.

#2) Design

In this phase, the requirements gathered in the SRS document are used as input, and the software architecture that will be used to implement the system is derived from them.

#3) Implementation or Coding

Implementation/coding starts once the developer gets the design document. The software design is translated into source code. All the components of the software are implemented in this phase.

#4) Testing

Testing starts once the coding is complete and the modules are released for testing. In this phase, the developed software is tested thoroughly and any defects found are assigned to developers to get them fixed.

Retesting and regression testing are done until the software meets the customer's expectations. Testers refer to the SRS document to make sure that the software meets the customer's standards.

#5) Deployment

Once the product is tested, it is deployed in the production environment, or first UAT (User Acceptance Testing) is done, depending on the customer's expectations.

In the case of UAT, a replica of the production environment is created, and the customer, along with the developers, does the testing. If the customer finds that the application works as expected, the customer provides sign-off to go live.

#6) Maintenance

After the product is deployed in the production environment, the developers take care of maintenance: fixing any issues that come up and making any enhancements that are needed.

