In: Computer Science
What is packet switching?
Ans: Packet switching is a method of transferring data across a network in the form of packets. To move a file quickly and efficiently and to minimize transmission latency, the data is broken into small pieces of variable length, called packets. At the destination, all the packets belonging to the same file are reassembled. A packet consists of a payload plus various control information. No prior setup or reservation of resources is needed.
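The split-and-reassemble idea can be sketched in a few lines of Python (a toy illustration of the concept, not a real protocol):

```python
# Illustrative sketch: split a message into numbered packets, shuffle
# them to mimic independent routing, then reassemble at the destination.
import random

def to_packets(data: bytes, max_payload: int = 4):
    """Break data into (sequence_number, payload) packets."""
    return [(seq, data[i:i + max_payload])
            for seq, i in enumerate(range(0, len(data), max_payload))]

def reassemble(packets):
    """Sort by sequence number and join the payloads."""
    return b"".join(payload for _, payload in sorted(packets))

packets = to_packets(b"hello, packet switching")
random.shuffle(packets)          # packets may arrive out of order
assert reassemble(packets) == b"hello, packet switching"
```

The sequence number plays the role of the control information mentioned above: it is what lets the receiver put variable-length pieces back in order.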
What are the advantages and disadvantages of it?
Advantages of Packet Switching over Circuit Switching :
Network bandwidth is used efficiently, since links are shared by many flows rather than reserved for a single call; no connection setup is required before sending data; and packets can be routed around failed links, making the network more fault-tolerant.
Disadvantages of Packet Switching over Circuit Switching :
Packet switching does not guarantee in-order delivery, whereas circuit switching delivers packets in order because all the packets follow the same path. Each packet also carries header overhead, and packets can be delayed or lost under congestion.
Modes of Packet Switching :
Datagram (connectionless) mode, in which each packet is routed independently and carries the full destination address, and virtual-circuit (connection-oriented) mode, in which a logical path is established first and all packets of the flow follow it.
What are TCP/IP and OSI suites?
TCP/IP, or the Transmission Control Protocol/Internet Protocol, is a suite of communication protocols used to interconnect network devices on the internet. TCP/IP can also be used as a communications protocol in a private network (an intranet or an extranet).
The entire internet protocol suite -- a set of rules and procedures -- is commonly referred to as TCP/IP, though the suite also includes many other protocols.
TCP/IP specifies how data is exchanged over the internet by providing end-to-end communications that identify how it should be broken into packets, addressed, transmitted, routed and received at the destination. TCP/IP requires little central management, and it is designed to make networks reliable, with the ability to recover automatically from the failure of any device on the network.
The two main protocols in the internet protocol suite serve specific functions. TCP defines how applications can create channels of communication across a network. It also manages how a message is assembled into smaller packets before they are transmitted over the internet and reassembled in the right order at the destination address. IP, in turn, defines how each packet is addressed and routed so that it reaches the right destination.
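TCP's reliable, ordered channel is exposed directly by the standard socket API. The following sketch (using Python's built-in `socket` module, over loopback) shows a minimal client/server exchange; the port number is chosen by the OS and the echo logic is purely illustrative:

```python
# Minimal TCP loopback echo: the server thread accepts one connection
# and echoes back whatever it receives; TCP delivers the bytes in order.
import socket
import threading

def echo_once(server_sock):
    conn, _ = server_sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))    # echo the received bytes

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))            # port 0: let the OS pick a free port
server.listen(1)
threading.Thread(target=echo_once, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())
client.sendall(b"ping")
reply = client.recv(1024)                # arrives intact and in order
client.close()
server.close()
assert reply == b"ping"
```

Note that the application never sees the individual packets: segmentation, retransmission, and reordering all happen inside the TCP implementation.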
What are the major differences between the two?
Comparison Chart
BASIS FOR COMPARISON | TCP/IP MODEL | OSI MODEL |
---|---|---|
Expands To | Transmission Control Protocol/Internet Protocol | Open Systems Interconnection |
Meaning | A client-server model used for transmitting data over the internet. | A conceptual reference model used to describe network communication. |
No. of Layers | 4 layers | 7 layers |
Developed By | Department of Defense (DoD) | ISO (International Organization for Standardization) |
Tangible | Yes | No |
Usage | Widely used in practice | Used mainly as a reference and teaching model |
The TCP/IP model was developed before the OSI model, and hence their layers differ. The TCP/IP model has four layers: Network Interface, Internet, Transport, and Application. The Application layer of TCP/IP combines the Session, Presentation, and Application layers of the OSI model.
Definition of TCP/IP MODEL
TCP (Transmission Control Protocol)/IP (Internet Protocol) was developed by the Department of Defense (DoD) project agency. Unlike the OSI model, it consists of four layers, each with its own protocols. Internet protocols are the set of rules defined for communication over the network. TCP/IP is considered the standard protocol model for networking: TCP handles data transmission, and IP handles addressing.
The TCP/IP suite is a set of protocols that includes TCP, UDP, ARP, DNS, HTTP, ICMP, etc. It is robust, flexible, and mostly used for interconnecting computers over the internet.
The four layers of TCP/IP are: Network Interface, Internet, Transport, and Application.
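Layering works by encapsulation: each layer wraps the data it receives from the layer above with its own header. A toy sketch (the header strings here are hypothetical labels, not real packet formats):

```python
# Toy encapsulation through the TCP/IP layers: each layer prepends
# its own (hypothetical, human-readable) header to the data above it.
def encapsulate(app_data: str) -> str:
    segment = "[TCP|dst_port=80]" + app_data   # Transport layer
    packet  = "[IP|dst=10.0.0.2]" + segment    # Internet layer
    frame   = "[ETH|dst=aa:bb:cc]" + packet    # Network Interface layer
    return frame

frame = encapsulate("GET / HTTP/1.1")          # Application-layer data
assert frame.startswith("[ETH")                # outermost header sent first
```

On the receiving side the process runs in reverse: each layer strips its own header and passes the remaining payload up the stack.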
Definition of OSI Model
The OSI (Open Systems Interconnection) model was introduced by ISO (the International Organization for Standardization). It is not a protocol but a model based on the concept of layering: a vertical stack of layers, each with a distinct function. Data travels down the stack on the sending side and up the stack on the receiving side. The model is robust and flexible, but not tangible.
The seven layers of the model, from bottom to top, are: Physical, Data Link, Network, Transport, Session, Presentation, and Application.
Key Differences between TCP/IP and OSI Model
What organizations have a say in the future and trends of protocols and standards?
Organizations have a say in the future and trends of protocols because they must develop products that conform to a common set of standards. Protocols are the rules that the endpoints in a communication use in order to exchange data.
It’s important to note how standards get started, and why they’re necessary. For the most part, a new network or protocol standard comes about for one of two reasons. Either there is a new technology trend or an evolution in a preexisting technology that requires updates to an existing standard. IoT can be considered a new IT movement that is causing us to create new network standards that can handle the capacity and low-bandwidth communications of such devices. On the other hand, newly proposed WiFi standards are typically created to tear down limitations found in previous standards. Either way, standards are necessary so networking vendors can create hardware and software that are compatible with one another.
What are the trends in protocols going forward?
If there was one person that could be credited with all the change that is occurring in the network industry, it would be Martin Casado, who is currently a General Partner and Venture Capitalist at Andreessen Horowitz. Previously, Casado was a VMware Fellow, Senior Vice President, and General Manager in the Networking and Security Business Unit at VMware. He has had a profound impact on the industry, not just from his direct contributions (including OpenFlow and Nicira), but by opening the eyes of large network incumbents and showing that network operations, agility, and manageability must change. Let’s take a look at this in a little more detail.
OpenFlow
For better or for worse, OpenFlow served as the first major protocol of the Software Defined Networking (SDN) movement. OpenFlow is the protocol that Martin Casado worked on while earning his PhD at Stanford University under the supervision of Nick McKeown. OpenFlow is a protocol that allows the decoupling of a network device's control plane from its data plane (see Figure 1-1). In the simplest terms, the control plane can be thought of as the brains of a network device, and the data plane as the hardware or application-specific integrated circuits (ASICs) that actually perform packet forwarding.
Figure 1-1. Decoupling the control plane and data plane with OpenFlow
RUNNING OPENFLOW IN HYBRID MODE
Figure 1-1 depicts the network elements having no control plane. This represents a pure OpenFlow-only deployment. Many devices also support running OpenFlow in a hybrid mode, meaning OpenFlow can be deployed on a given port, virtual local area network (VLAN), or even within a normal packet-forwarding pipeline such that if there is not a match in the OpenFlow table, then the existing forwarding tables (MAC, Routing, etc.) are used, making it more analogous to Policy Based Routing (PBR).
What this means is OpenFlow is a low-level protocol that is used to directly interface with the hardware tables (e.g., Forwarding Information Base, or FIB) that instruct a network device how to forward traffic (for example, “traffic to destination 192.168.0.100 should egress port 48”).
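The flow-table idea can be sketched in plain Python. This is a simplified model of the concept, not the OpenFlow wire protocol: each entry matches on packet fields (absent fields act as wildcards) and maps to a forwarding action, with unmatched packets dropped in the absence of a table-miss entry:

```python
# Simplified flow table: ordered (match, action) entries; the first
# entry whose fields all match the packet decides the action.
flow_table = [
    ({"dst_ip": "192.168.0.100"},            "egress port 48"),
    ({"src_ip": "10.0.0.5", "dst_port": 80}, "egress port 12"),
]

def forward(packet: dict) -> str:
    """Return the action of the first matching entry, else drop."""
    for match, action in flow_table:
        if all(packet.get(k) == v for k, v in match.items()):
            return action
    return "drop"    # no table-miss entry in this sketch

assert forward({"dst_ip": "192.168.0.100"}) == "egress port 48"
assert forward({"dst_ip": "8.8.8.8"}) == "drop"
```

In a hybrid-mode deployment, the "drop" fallback would instead hand the packet to the device's normal forwarding tables (MAC, routing, etc.).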
NOTE
OpenFlow is a low-level protocol that manipulates flow tables, thus directly impacting packet forwarding. OpenFlow was not intended to interact with management plane attributes like authentication or SNMP parameters.
Because the tables OpenFlow uses can match on more fields than just the destination address, there is more granularity (matching fields in the packet) for determining the forwarding path than traditional routing protocols offer. This is not unlike the granularity offered by Policy Based Routing. Like OpenFlow would do many years later, PBR allows network administrators to forward traffic based on "non-traditional" attributes, such as a packet's source address. However, it took quite some time for network vendors to offer equivalent performance for traffic forwarded via PBR, and the result was still very vendor-specific. The advent of OpenFlow meant that the same granularity in traffic-forwarding decisions could be achieved in a vendor-neutral way. It became possible to enhance the capabilities of the network infrastructure without waiting for the next version of hardware from the manufacturer.