CCNA Study Guide – a collection of high-quality online CCNA tutorials.
These tutorials were prepared with a single slogan: "provide the best online CCNA training absolutely free." This comprehensive collection of CCNA study material is exactly what you need to prepare for the following exams: CCNA Routing and Switching, Cisco Certified Entry Networking Technician (CCENT), Interconnecting Cisco Networking Devices – Part 1 (ICND-1), and Interconnecting Cisco Networking Devices – Part 2 (ICND-2). The CCNA certificate is a milestone in your career journey, so we arranged this exclusive training program so that these tutorials benefit you both in the exam and on the job after certification.
File Transfer Protocol (FTP) is another standard Internet protocol, used for transmitting files between computers over TCP/IP connections. It is an application layer protocol, first created in 1971 to transfer data between a client and a server. To use this protocol, a computer needs an FTP client application to send and receive data from a server running an FTP daemon (FTPd). FTP is a client-server protocol that works on two channels between client and server:
· Command channel for controlling the conversation between host and server
· Data channel for transmitting and receiving files between client and server
Clients initiate a connection to the server on TCP port 21 to manage the session; this channel carries client commands and server replies. The client then establishes a second connection to the server to transfer the actual data, traditionally using TCP port 20. The data connection is established each time there is data to be transferred. The figure below illustrates the File Transfer Protocol (FTP) connection.
Depending on user rights, the FTP client can download, upload, delete, rename, move, and copy data on a server. A user typically needs to log on to the FTP server, although some servers make some or all of their content available anonymously, without login.
File Transfer Protocol sessions work in two modes: active and passive. In active mode, when a client opens a session via a command-channel request, the server opens a data connection back to the client and starts transferring data.
In passive mode, the server instead uses the command channel to send the client the information required to open a data channel. Because the client initiates all connections in passive mode, this mode works better across firewalls and NAT. An FTP client can work via a simple command-line interface or a graphical user interface (GUI), and web browsers can also serve as FTP clients.
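The command-channel conversation described above is a plain-text exchange of commands and numbered replies. The sketch below reproduces it with a fake local FTP server so it is self-contained; the greeting, credentials, and replies are invented, the data channel (port 20 or a passive port) is not modelled, and a real client would simply use Python's ftplib against a server on port 21.

```python
import socket
import threading

# Fake FTP control channel: the server answers USER/PASS/QUIT with the
# numbered reply codes a real FTP daemon would use (220, 331, 230, 221).
def fake_ftp(listener):
    conn, _ = listener.accept()
    conn.sendall(b"220 Fake FTP service ready\r\n")
    while True:
        cmd = conn.recv(1024).decode().strip()
        if cmd.startswith("USER"):
            conn.sendall(b"331 Password required\r\n")
        elif cmd.startswith("PASS"):
            conn.sendall(b"230 User logged in\r\n")
        elif cmd == "QUIT":
            conn.sendall(b"221 Goodbye\r\n")
            break
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # ephemeral port instead of the real port 21
listener.listen(1)
threading.Thread(target=fake_ftp, args=(listener,), daemon=True).start()

# Client side of the command channel: one command, one reply, in lockstep.
client = socket.create_connection(("127.0.0.1", listener.getsockname()[1]))
replies = [client.recv(1024).decode().strip()]     # 220 greeting
for cmd in ("USER anonymous", "PASS guest@example.com", "QUIT"):
    client.sendall((cmd + "\r\n").encode())
    replies.append(client.recv(1024).decode().strip())
client.close()
print(replies)
```

Running it prints the four replies in order, mirroring the anonymous-login dialogue described above.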
The Server Message Block (SMB) is a network protocol that allows hosts to share data within the same network. It shares directories, files, printers, and serial ports as if they were on the local computer. It is a request-response protocol that uses TCP port 445 for communication. All Server Message Block messages have a standard format: a fixed-size header followed by a variable-size parameter and data component.
The Server Message Block protocol suite is comparatively simple. It includes commands for resource operations you might perform on a local disk or printer, such as:
· Creating new files and directories
· Deleting files and directories
· Opening and closing files
· Searching for files and directories
· Reading, writing, and editing files
· Queuing and de-queuing files in a print spool
Server Message Block servers make their file systems and resources available to clients on the network. Clients issue SMB requests for the available resources using these commands, and servers reply with Server Message Block response messages. The following are the SMB message types:
· Initiate, authenticate, and terminate the sessions
· Control access to files and printers
· Allow an application to send and receive messages
File sharing and printer sharing are both primary services of Microsoft networking. With the release of Windows 2000, Microsoft changed the original structure for using SMB. Before Windows 2000, Server Message Block services used a non-TCP/IP protocol for name resolution; since Windows 2000, all Microsoft products use DNS naming, which allows TCP/IP protocols to support SMB resource sharing. The figure below illustrates the establishment of the SMB protocol connection.
With Server Message Block, once the connection is established, the client can reach resources on the remote end as if they were local to the client host.
Although the Server Message Block protocol was initially created for Windows, it can now also be used by Linux, Unix, and macOS through software called Samba. With Samba, Linux, Mac, Windows, and Unix computers can share the same files, folders, and printers.
Email is one of the primary services running on the Internet. So, what applications, protocols, and services does email require? An email server stores email messages in a database. Email uses the store-and-forward method for sending and storing messages. Email clients communicate with servers running mail services to send and receive email. The client's server then communicates with other mail servers to transport messages from one domain to another.
When sending an email, a client does not communicate directly with another email client; both mail clients rely on mail servers to transport messages. The email process uses three protocols: Simple Mail Transfer Protocol (SMTP), Post Office Protocol (POP), and Internet Message Access Protocol (IMAP). The application layer sends mail using SMTP, while a client retrieves email using POP or IMAP.
Simple Mail Transfer Protocol (SMTP) Operation
The SMTP message format requires a message header and a message body. The body can hold any amount of text; the header must contain a properly formatted recipient email address and a sender address.
When a client sends an email message, the client's SMTP process connects to a server SMTP process on port 25. Once the connection is set up, the client tries to send the email message to the server. When the server receives the message, it either places it in a local mailbox, if the recipient is local, or forwards it to another mail server for delivery.
If the destination email server is busy or offline, SMTP spools the message to be sent later. The server periodically checks the queue and attempts to send the message again. If the message has still not been delivered when its expiration time passes, it is returned to the sender as undeliverable.
The figure above illustrates the message-sending process. The client sends an email message to admin@fschub.com. SMTP/POP server-1 receives the message and checks its list of local recipients. If the recipient is found, the message is placed in the local account; if not, the message is forwarded to SMTP/POP server-2.
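The port-25 dialogue behind this process can be sketched as a command/reply exchange. The simulation below uses a fake local server, so the server name, addresses, and canned replies are hypothetical; a real client would use Python's smtplib against a mail server listening on port 25.

```python
import socket
import threading

# Fake SMTP server: answers each command verb with a canned reply code,
# following the 2xx success pattern of a real SMTP conversation.
def fake_smtp(listener):
    conn, _ = listener.accept()
    conn.sendall(b"220 mail.example.com ESMTP ready\r\n")
    canned = {
        "HELO": b"250 Hello\r\n",
        "MAIL": b"250 Sender OK\r\n",
        "RCPT": b"250 Recipient OK\r\n",
        "QUIT": b"221 Bye\r\n",
    }
    while True:
        verb = conn.recv(1024).decode().split()[0].upper()
        conn.sendall(canned[verb])
        if verb == "QUIT":
            break
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # ephemeral port instead of the real port 25
listener.listen(1)
threading.Thread(target=fake_smtp, args=(listener,), daemon=True).start()

# Client side: greet, name the sender and recipient, then quit.
client = socket.create_connection(("127.0.0.1", listener.getsockname()[1]))
replies = [client.recv(1024).decode().strip()]     # 220 greeting
for cmd in ("HELO client.example.com",
            "MAIL FROM:<user@example.com>",
            "RCPT TO:<admin@fschub.com>",
            "QUIT"):
    client.sendall((cmd + "\r\n").encode())
    replies.append(client.recv(1024).decode().strip())
client.close()
print(replies)
```

Each reply code the client collects corresponds to one step of the envelope exchange described above; the DATA phase, where the message body would follow, is omitted for brevity.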
Post Office Protocol (POP) Operation
The POP server passively listens on TCP port 110 for client connection requests. When a client needs the POP service, it requests a TCP connection with the server. Once the connection is established, the POP server greets the client.
The client and POP server then exchange commands and responses until the connection terminates. With POP, once clients download email messages, the server removes them.
The POP server is only a temporary holding area for mail until it is downloaded to the clients. Because there is no central location where email messages are kept, POP is not an attractive choice for a small business that needs centralized storage for backup.
Internet Messaging Access Protocol (IMAP) Operation
The Internet Message Access Protocol (commonly known as IMAP) is another protocol that describes a technique for retrieving email messages from a remote mail server. An IMAP server usually listens on port 143, while IMAP over SSL/TLS is assigned port 993. Unlike POP, when a user connects to an IMAP server, copies of the messages are downloaded to the client application.
The original messages are retained on the server until the user explicitly deletes them; users view copies of the messages in their email client software.
The server stores incoming email messages in the recipient's mailbox. The user retrieves the messages with an email client that uses one of several email retrieval protocols. Most clients support the standard protocols: SMTP for sending email, and POP and IMAP for retrieving it.
An IMAP client can create a folder hierarchy on the server to organize and store email. When a user deletes a message, the server synchronizes that action and deletes the message from the mail server.
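The storage semantics that separate POP from IMAP can be modelled in a few lines. This is not protocol code: the class and message names are hypothetical, purely to contrast download-and-delete (POP) with keep-on-server (IMAP); real clients would use Python's poplib and imaplib.

```python
# Toy mail server contrasting POP and IMAP retrieval semantics.
class MailServer:
    def __init__(self, messages):
        self.mailbox = list(messages)

    def imap_fetch(self):
        # IMAP: the client receives copies; originals stay on the server.
        return list(self.mailbox)

    def imap_delete(self, msg):
        # IMAP: deletion is explicit and synchronized to the server.
        self.mailbox.remove(msg)

    def pop_retrieve(self):
        # POP: messages are downloaded and then removed from the server.
        downloaded, self.mailbox = list(self.mailbox), []
        return downloaded

server = MailServer(["msg1", "msg2", "msg3"])
copies = server.imap_fetch()
print(copies, server.mailbox)      # IMAP copies made; mailbox unchanged
server.imap_delete("msg1")
print(server.mailbox)              # only the explicitly deleted message is gone
downloaded = server.pop_retrieve()
print(downloaded, server.mailbox)  # POP empties the mailbox on retrieval
```

After the IMAP fetch the mailbox is untouched, whereas the POP retrieval leaves the server with nothing to back up, which is exactly the trade-off noted above.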
Differences Between POP, SMTP, and IMAP
The following table summarizes the key differences between POP, SMTP, and IMAP in terms of their functions, port numbers, security, email storage, syncing capabilities, offline access, message management, server load, and common use cases.
| Feature | POP (Post Office Protocol) | SMTP (Simple Mail Transfer Protocol) | IMAP (Internet Message Access Protocol) |
| --- | --- | --- | --- |
| Purpose | Retrieve emails from a server | Send emails to a server for delivery | Access emails stored on a server |
| Port Number | 110 | 25 | 143 (without SSL/TLS), 993 (with SSL/TLS) |
| Security | Typically lacks encryption | Can use encryption (SMTPS) | Supports encryption (IMAPS) |
| Email Storage | Downloads emails to a local device | Does not store emails | Leaves emails on the server |
| Syncing | Generally does not sync emails across devices | Does not sync emails | Synchronizes emails across multiple devices |
| Offline Access | Provides limited offline access to downloaded emails | Does not provide offline access | Provides full offline access to emails |
| Message Management | Limited capabilities for organizing messages | Primarily focuses on sending messages | Offers extensive message management features |
| Server Load | Relatively low, as emails are typically removed from the server after retrieval | Moderate, as it involves transferring emails between servers | Relatively high, as emails are stored and managed on the server |
| Common Use Cases | Used when internet connectivity is limited or sporadic | Essential for sending emails from email clients | Preferred for accessing emails from multiple devices |
HTTP is an abbreviation for HyperText Transfer Protocol, whereas HTML stands for HyperText Markup Language. HTTP is the protocol, whereas HTML is the hypertext markup language in which documents are written. When an address is typed into a browser, the browser establishes a connection to the web service running on the server; the protocol used to establish that connection is HTTP.
HyperText Transfer Protocol (HTTP) is the primary protocol the World Wide Web uses. This protocol defines how messages are formatted and transmitted and what actions Web servers and browsers should take in response to various commands.
URL (Uniform Resource Locator) and URI (Uniform Resource Identifier) are the names most people use for web addresses. If we open the web address https://networkustad.com/home, we can examine how the browser interprets the address:
· Protocol – HTTPS
· Server name – networkustad.com
· Requested resource – home
As shown in the figure, when the URL above is entered into the browser, the browser first checks with a name server to convert networkustad.com into a numeric IP address, which it uses to connect to the server. The browser then sends a GET request to the HTTP server, asking for the /home.html file. The server responds by sending the HTML code of this particular page to the browser.
Finally, the browser reads the HTML code, formats the page for the browser window, and shows it to the user. HTML is the main standard governing how Web pages are formatted and displayed on the user's screen.
HTTP is a request/response protocol. When a client sends a request to a web server, the protocol that specifies the message type is HTTP. Three common message types are GET, POST, and PUT.
GET – A client request for data, generally a webpage request
POST – Uploads data files to the web server
PUT – Uploads resources or content, such as images, videos, and audio, to the web server
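The GET request/response cycle described above can be demonstrated end to end with Python's standard library: a tiny local web server plays the HTTP server, and http.client plays the browser. The page content and path are invented for the demo.

```python
import http.server
import threading
from http.client import HTTPConnection

# Minimal HTTP server that answers every GET with one small HTML page.
class Page(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>home</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # suppress per-request logging to keep the output clean

httpd = http.server.HTTPServer(("127.0.0.1", 0), Page)
threading.Thread(target=httpd.serve_forever, daemon=True).start()

# The "browser": send a GET message and read the HTML response.
conn = HTTPConnection("127.0.0.1", httpd.server_port)
conn.request("GET", "/home.html")
response = conn.getresponse()
status, html = response.status, response.read().decode()
print(status, html)
httpd.shutdown()
```

The 200 status line and the HTML body are exactly the two pieces a real browser would receive before rendering the page.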
Hypertext Transfer Protocol is an extraordinarily useful protocol, but it is not secure. HTTP sends request messages to the server in plain text, which can be intercepted and read anywhere along the path. The HTML pages sent in response are likewise unencrypted and unsecured.
The HTTPS protocol is used to secure communication across the Internet and is far more secure than plain HTTP. It uses authentication and encryption to protect data exchanged between client and server. HTTPS uses the same client request-server response process as HTTP, but the data travelling between client and server is encrypted with SSL/TLS (Secure Sockets Layer / Transport Layer Security).
Client-server and peer-to-peer are terms often used in computer networks. Both are network models that we use in our day-to-day lives. The client-server model focuses on information sharing, whereas the peer-to-peer model focuses on direct connectivity between computers. A detailed explanation of both models follows:
Client-Server Network Model
In the client-server network model, the device that requests information is called a client, and the device that responds to the request is called a server. Client-server processes work at the application layer. The client starts the exchange by requesting data from the server, which can either accept or reject the connection. If the connection is accepted, the server establishes and maintains it with the client over a specific protocol.
Application layer protocols define the data exchange format between clients and servers. The exchange may also require user authentication and identification of the data file to be transferred.
An email server is one of the best examples of the client-server model; it sends, receives, and stores email. A client at a remote location issues a request to the email server for any mail to read, and the server replies by sending the requested email to the client. The data stream from client to server is called an upload, and the data stream from server to client is called a download. The figure below illustrates the email client-server model.
Other examples of servers are web servers, FTP servers, TFTP servers, and online multiplayer gaming servers. Each of these servers provides resources to clients. Most servers have a one-to-many relationship with clients, meaning a single server can serve multiple clients simultaneously.
Peer-to-Peer Network Model
Unlike the client-server model, the peer-to-peer network model has no dedicated server; data is accessed directly from a peer device. The P2P model comprises two parts, P2P networks and P2P applications, which share the same features but differ slightly in practice.
In this model, two or more hosts are connected over a network and can share resources such as printers and files without a dedicated server. Each connected end device is known as a peer. A peer can act as both a server and a client: one host might assume the server role for one transaction while simultaneously serving as a client for another. In the P2P networking model, the client and server roles are set on a per-request basis.
Peer-to-Peer(P2P) Applications
In P2P applications, devices act as both clients and servers within the same communication: every client is a server and every server a client. P2P applications require each end device to provide a user interface and to run a background P2P service.
Many P2P applications use a hybrid system with decentralized resource sharing: an index database, stored on a centralized directory server, records the address of each resource's location. Each peer queries the index server to find the location of a resource on another peer.
Common Peer-to-Peer (P2P) Applications
Every computer in the network running a P2P application can act as both a client and a server for other computers running the same application. Common P2P networks include the following:
The Gnutella protocol is used in some P2P applications, where each user shares entire files with all other users. Many Gnutella client applications are available, such as gtk-gnutella, WireShare, Shareaza, and BearShare.
Many P2P applications permit users to share pieces of many files simultaneously. Clients use a small file called a torrent file to locate other users who have the pieces they need so they can connect to them directly. The torrent file also contains information about tracker computers, which keep track of which users have which files. Torrent clients request pieces from multiple users simultaneously, a group known as a swarm. This is BitTorrent technology, and many BitTorrent clients exist, such as BitTorrent, uTorrent, and FrostWire.
P2P software makes it easy to share any type of file between users, and many of the shared files are copyrighted. Using and distributing such files without the copyright holder's permission is against the law: copyright violation is an offence that can result in criminal charges and civil lawsuits.
The application layer is the topmost layer of the OSI model. As shown in the figure below, the upper three layers of the OSI model (application, presentation, and session) map to the functions of the single TCP/IP application layer. The application layer enables humans and software to access the network; it also serves as the source and destination of communications across data networks.
Application layer applications, services, and protocols enable humans to interact with the data network in a useful way. Applications are the computer software programs with which the user interacts; they start the data transfer process on request. Services are programs that run in the background and provide the link between the application layer and the lower layers.
Protocols provide a structure of rules that ensure services running on a particular device can send and receive data from a range of different network devices. A client requests the delivery of data from a server over the network. In a P2P network, the client/server relationship is established according to which device is the source and which is the destination at the time the connection is made. The conversations between the application layer services at both end devices follow the terms of the protocol to set up and use these relationships.
TCP/IP Application Layer Protocols
End devices usually require application layer protocols. For example, end devices receive web pages using HTTP (Hypertext Transfer Protocol), one of the most widely used application protocols.
HTTP is the foundation of the World Wide Web. When a browser requests a web page, the protocol sends the name of the required page to the server, and the server sends the requested page back to the client. Similarly, SMTP (Simple Mail Transfer Protocol), IMAP (Internet Message Access Protocol), and POP (Post Office Protocol) support sending and receiving email, while SMB (Server Message Block), FTP (File Transfer Protocol), and TFTP (Trivial File Transfer Protocol) allow clients to share files.
P2P applications make it easier to share media in a distributed fashion. DNS (Domain Name System) resolves between IP addresses and human-readable names. Clouds are remote locations that host applications and store data, so end users need fewer local resources and can effortlessly access content from different places.
TCP/IP application protocols specify the format and control information required for many common Internet communication functions. Both source and destination devices use application layer protocols during a communication session. The application layer also enables hosts to work and play over the Internet. The figure below illustrates the application layer in both the OSI and TCP/IP models.
User Datagram Protocol (UDP) is an alternative communications protocol for data transmission, used mostly for establishing low-latency, loss-tolerating connections between applications on the Internet. IP works with both TCP and UDP, and the combinations are sometimes referred to as TCP/IP and UDP/IP. UDP sends short packets of data called datagrams.
User Datagram Protocol (UDP) Low Overhead vs Reliability
User Datagram Protocol provides only basic transport layer functions. It sends packets with lower bandwidth overhead and latency than TCP. UDP is not a connection-oriented protocol, so it does not offer the sophisticated retransmission, flow control, and sequencing mechanisms that handle lost and out-of-order packets; UDP therefore does not provide TCP-like reliability. This does not mean that applications using UDP are always unreliable or substandard; it only means that these functions are not provided by the transport layer protocol and must be implemented elsewhere if required.
Because of its low overhead, UDP is the best protocol for network applications in which perceived latency is critical, such as gaming, voice, and video communications, which can tolerate some data loss without badly degrading perceived quality. Unlike TCP, UDP does not set up a connection before sending data; it simply starts sending when required.
UDP Datagram Reassembly
UDP datagrams may arrive at the destination via different routes and, as a result, in the wrong order. UDP does not use sequence numbers the way TCP does, so it has no mechanism to reorder datagrams into their transmission order. UDP simply reassembles the data in the order it was received and forwards it to the application. If sequence is important, the application must identify the proper sequence itself and decide how the data should be processed.
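Application-level reordering can be sketched in a few lines: the application attaches its own sequence numbers to the payloads and sorts on them after arrival. The sequence numbers and payloads here are hypothetical.

```python
# Datagrams as (app-level sequence number, payload), listed in arrival order.
arrived = [(2, b"out"), (1, b"reordered"), (3, b"of order")]

# UDP would hand these to the application as-is; the application reorders.
in_order = [payload for seq, payload in sorted(arrived)]
message = b" ".join(in_order).decode()
print(message)
```

Sorting on the application's own sequence numbers restores the transmission order that UDP itself never tracks.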
UDP Server Processes and Requests
UDP-based server applications are assigned well-known or registered port numbers, just like TCP applications. When these applications and processes run on a server, they accept data matched to the assigned port number. When UDP receives a datagram destined for one of these ports, it forwards the application data to the proper application based on its port number.
UDP Client Processes
The client application asks a server process to start communication between server and client. The UDP client process randomly selects a port number from the ephemeral range as its source port. The destination port is generally the well-known or registered port number assigned to the server process. Once the client has selected the source and destination ports, the headers of all its datagrams use this pair of ports. For data returning from the server to the client, the source and destination port numbers in the datagram header are reversed.
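This client/server port behaviour can be shown with a minimal UDP exchange on localhost. The server's port is ephemeral here rather than well-known, purely so the demo is self-contained; note there is no connection setup, and the server replies to whatever ephemeral address the client sent from.

```python
import socket

# "Server" side: bind a UDP socket and wait for datagrams.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))          # stand-in for a well-known port
server_port = server.getsockname()[1]

# Client side: the OS picks an ephemeral source port; just send, no handshake.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", ("127.0.0.1", server_port))

# The server learns the client's (address, ephemeral port) from the datagram
# and uses the reversed port pair for the return trip.
data, client_addr = server.recvfrom(1024)
server.sendto(b"reply:" + data, client_addr)

reply = client.recvfrom(1024)[0].decode()
print(reply)
```

The single sendto/recvfrom round trip, with no connect or accept calls, is the whole exchange, which is exactly the low-overhead behaviour described above.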
Transmission Control Protocol accepts data from a stream, divides it into small chunks, and adds a TCP header to create a TCP segment. Each TCP segment is encapsulated into an Internet Protocol (IP) datagram and exchanged with the peer. TCP reliability and TCP flow control are important for ensuring that data is received completely and in the correct order.
TCP Reliability
TCP segments may arrive at their destination out of order. So that the receiver can understand the original message, the data in these out-of-order segments is reassembled into the correct order. To achieve this, each segment header carries a sequence number, which represents the first data byte of that TCP segment.
During session establishment, an initial sequence number (ISN) is set. The ISN represents the starting value of the bytes that will be transmitted to the receiving application in this session. As data is transmitted during the session, the sequence number increases by the number of bytes transmitted. This byte tracking enables every segment to be uniquely identified and acknowledged.
Missing segments can therefore be identified and reported. The ISN is effectively a random number, which helps avoid certain types of malicious attacks; for simplicity, we will use an ISN of 1 in the examples. Sequence numbers also show how to reassemble and reorder received segments, as shown in the figure.
The receiving TCP process places the data from each segment into a receiving buffer. Segments in the proper sequence order are reassembled and passed to the application layer. Segments that arrive with out-of-order sequence numbers are held for later processing; when the segments with the missing bytes arrive, the whole run is processed in the proper order.
TCP Flow Control
TCP also guarantees a reliable communication channel over an unreliable network. When one host sends data to another, the receiving host can receive packets out of order, packets can be lost, the network can be congested, or the receiver can be overloaded. When sending application data, we usually don't need to deal with this complexity; we just write data to a socket, and TCP makes sure the packets are delivered correctly to the receiver. Flow control is one of the services TCP provides.
Window Size and Acknowledgment
TCP flow control checks the quantity of data that the destination host can receive and process reliably. It is the service that maintains the reliability of TCP transmission by adjusting the rate of data flow between source and destination for an established session. To achieve this, the TCP header includes a 16-bit field called the window size.
The figure below illustrates an example of window size and acknowledgements, which together make up the process of flow control. The window size is the number of bytes that the destination device of a TCP session can accept and process at one time. In this example, host-B's initial window size for the TCP session is 4,500 bytes.
Starting with the first byte (byte number 1), the final byte host-A can send without receiving an acknowledgement is therefore byte 4,500; this is host-A's send window. The window size is included in every TCP segment, so the receiver can adjust the window size at any time depending on buffer availability.
As the figure illustrates, the source transmits 1,500 bytes of data within each TCP segment; this is the MSS (Maximum Segment Size). The initial window size is agreed upon during the three-way handshake when the TCP session is established. The source host must limit the number of bytes sent to the destination based on the destination's window size.
Only after the source host receives an acknowledgement that the bytes have been received can it continue sending more data for the session. Typically, the destination will not wait for all the bytes of its window size before replying with an acknowledgement; as the bytes are received and processed, the destination sends acknowledgements to inform the source that it can continue to send additional bytes.
Alternatively, the server may wait until it has received all 4,500 bytes before sending a single acknowledgement. Either way, the host adjusts its send window as it receives acknowledgements from the server. As shown in the figure, when host-A receives an acknowledgement with acknowledgement number 3,001, its send window advances by another 4,500 bytes (the size of host-B's current window) to 7,500. Host-A can now send up to another 4,500 bytes to host-B, as long as it does not send past byte 7,500, its new send window boundary.
The process of the destination sending acknowledgements as it processes received bytes, and the continual adjustment of the source's send window, is known as sliding windows. If the availability of the destination's buffer space decreases, it may reduce its window size, informing the source to reduce the number of bytes it sends without receiving an acknowledgement. The window size determines the number of bytes that can be sent before an acknowledgement is expected; the acknowledgement number is the number of the next expected byte.
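The sliding-window arithmetic above can be walked through numerically. The figures (a 4,500-byte window, a 1,500-byte MSS, an ACK for byte 3,001) follow the example in this section and are otherwise hypothetical.

```python
# Toy sliding-window walk-through: fill the window, then slide it on an ACK.
window = 4500          # bytes the receiver advertises it can accept
mss = 1500             # bytes carried per segment
send_base = 1          # first unacknowledged byte
next_seq = 1           # next byte to send
sent = []

# Send segments until another full segment would exceed the window.
while next_seq + mss - 1 <= send_base + window - 1:
    sent.append((next_seq, next_seq + mss - 1))
    next_seq += mss
print(sent)            # three segments: bytes 1-1500, 1501-3000, 3001-4500

# The ACK number names the next expected byte; an ACK of 3001 means bytes
# 1-3000 arrived, so the window slides forward to cover bytes 3001-7500.
ack = 3001
send_base = ack
window_end = send_base + window - 1
print(send_base, window_end)
```

The final window boundary of 7,500 matches the figure's description of host-A's send window after the acknowledgement.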
Congestion Avoidance
When congestion occurs on a network, the overloaded router discards packets. When packets containing TCP segments do not reach their destination, they go unacknowledged. By monitoring the rate at which TCP segments are sent but not acknowledged, the source host can infer a certain level of network congestion.
One of the main principles of congestion control is avoidance. TCP tries to detect signs of congestion before it happens and to reduce or increase the load on the network accordingly. Waiting for congestion and then reacting is worse, because once a network saturates, it does so at an exponential growth rate and overall throughput drops enormously.
It takes a long time for the queues to drain, and then all the sending hosts repeat the cycle. By taking a proactive congestion avoidance approach, the pipe is kept as full as possible without the threat of network saturation. The key is for the sender to recognize the state of the network and the receiver and to control the amount of traffic injected into the system.
Whenever there is congestion, the source retransmits the lost segments. If retransmission is not controlled properly, the extra retransmitted TCP segments can make the congestion even worse: not only are new packets with TCP segments introduced into the network, but the feedback effect of the retransmitted segments adds to the congestion. To avoid and control congestion, TCP employs several congestion management mechanisms, timers, and algorithms.
If the source host does not receive an acknowledgement, or does not receive it in time, it can reduce the number of bytes it sends before requiring an acknowledgement. Note that it is the source host that is reducing the number of unacknowledged bytes it sends, not the window size advertised by the destination. The figure above illustrates TCP congestion control. The acknowledgement number refers to the next expected byte, not to the segment.
The TCP 3-way handshake, also known as the TCP handshake, consists of three messages: SYN, SYN-ACK, and ACK. It is the method for establishing a TCP connection over an IP-based network. The 3-way handshake is often called the SYN, SYN-ACK, ACK technique because these are the three messages TCP transmits to negotiate and start a TCP session between two hosts.
Hosts on the network track all data segments within a session and exchange information about what data has been received using the information in the TCP header. TCP is a full-duplex protocol, where each logical connection represents two one-way communication streams, or sessions. To set up the connection, the hosts perform a TCP 3-way handshake. Control bits in the TCP header indicate the progress and status of the connection.
The TCP handshake is designed so that hosts attempting to communicate can negotiate the parameters of the TCP socket connection before transmitting data. The 3-way handshake is also designed so that both ends can initiate and negotiate separate TCP socket connections at the same time.
It is being able to negotiate multiple TCP socket connections in both directions at the same time allows a single physical network interface, such as Ethernet, to be multiplexed to transfer multiple streams of TCP data simultaneously. The figure below illustrates the TCP 3-way handshake.
The steps of the TCP 3-way handshake are as follows:
The host sends a TCP SYNchronize packet to the server.
The server receives the host's SYN.
The server sends a SYNchronize-ACKnowledgement to the host.
The host receives the server's SYN-ACK.
The host sends an ACKnowledge to the server.
The server receives the ACK.
Both hosts can now send data.
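The steps above are carried out by the operating system, not by the application: calling `connect()` on a client triggers the whole SYN, SYN-ACK, ACK exchange, and the server's `accept()` returns only once the handshake has completed. A minimal sketch on the loopback interface (the port is chosen by the OS; all names are our own):

```python
import socket
import threading

# The kernel performs the SYN / SYN-ACK / ACK exchange for us:
# connect() on the client triggers it, and accept() on the server
# returns only once the handshake has completed.

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def accept_one():
    conn, _ = server.accept()          # handshake already finished here
    conn.close()

t = threading.Thread(target=accept_one)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))    # SYN ->, <- SYN-ACK, ACK ->
handshake_ok = True                    # connect() returned: session is open
client.close()
t.join()
server.close()
```

Capturing this exchange with a packet analyzer such as Wireshark shows the three segments explicitly, even though the application code never handles them.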
When data transfer is complete, the session is closed. The connection and session mechanisms are part of what enables TCP's reliability. To terminate the connection, another handshake is performed to tear down the TCP socket, as described in the session-termination section below.
This setup and teardown of a TCP socket connection is part of what qualifies TCP as a reliable protocol. TCP also acknowledges the successful receipt of data and guarantees that the data is reassembled in the correct order.
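In-order reassembly works because every byte carries a sequence number. A small illustrative sketch (the function and numbers are our own, chosen only to show the idea):

```python
# Illustrative sketch: sequence numbers let the receiver put segments
# back in order even when they arrive out of order on the network.

def reassemble(segments):
    """segments: list of (sequence_number, payload_bytes) in arrival order."""
    return b"".join(payload for _, payload in sorted(segments))

# Segments arriving out of order...
arrived = [(2000, b"world"), (1000, b"hello "), (3000, b"!")]
print(reassemble(arrived))  # ...are delivered to the application in order
```

Real TCP also detects gaps (a missing sequence range) and withholds acknowledgement until the gap is filled, which is what triggers the retransmissions discussed earlier.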
Each application process on the server uses a different port number. The network administrator can use the default ports or configure ports manually for these applications. The same port number cannot be used on the same server for different applications; for example, the FTP server and the web server cannot both be configured to use port 80 or port 21.
When an active server application requires an open port, the transport layer accepts and processes segments addressed to that port number. Each incoming client request is accepted at the correct socket address, and the data is passed to the server application. Thus, many ports can be open simultaneously on the same server, one for each active server application.
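The one-application-per-port rule can be seen directly from the sockets API: a second attempt to bind an already-bound port is refused by the operating system. A small sketch on the loopback interface (the helper name is our own, and the port is chosen by the OS):

```python
import socket

# Two server applications cannot listen on the same TCP port of the
# same address: the second bind() is refused, which is why FTP and a
# web server must use different ports (e.g. 21 and 80).

def second_bind_fails():
    first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    first.bind(("127.0.0.1", 0))        # OS assigns a free port
    first.listen(1)
    port = first.getsockname()[1]

    second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        second.bind(("127.0.0.1", port))
        return False                    # unexpectedly succeeded
    except OSError:                     # "address already in use"
        return True
    finally:
        second.close()
        first.close()

print(second_bind_fails())
```

Binding the second listener to a *different* port would succeed, which is exactly how one server machine runs FTP, web, and other services side by side.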
Connection Establishment
When two people meet, they often greet each other by shaking hands. Establishing a connection in networking is similar to this handshake and greeting between friends. A TCP connection can be set up between a host and a server, or between two hosts. The client requests a client-to-server communication session with the server.
When the server receives the request, it acknowledges the client-to-server communication session and requests a server-to-client communication session. The initiating client then acknowledges the server-to-client communication session. The figure below illustrates the establishment of the TCP connection.
Session Termination
After the connection is established and the job is completed, the connection is terminated. For connection termination, the FIN control flag must be set in the segment header. Each one-way TCP session is ended with a two-way handshake consisting of a FIN segment and an acknowledgement (ACK) segment, so ending a single TCP conversation requires four exchanges to close both sessions. The figure below illustrates the session termination process.
1 – When the host has sent all its data and no more data remains in the stream, it sends a segment with the FIN flag set to the server.
2 – The server sends an ACK to acknowledge the receipt of the FIN, ending the host-to-server session.
3 – The server sends a FIN to the host to finish the server-to-host session.
4 – The host responds with an ACK to acknowledge the FIN from the server.
When this final acknowledgement is received, the session is terminated.
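The four steps above can be observed from ordinary socket code: `shutdown(SHUT_WR)` sends the first FIN, the peer sees end-of-stream as an empty read, and the peer's own `close()` sends the FIN going back the other way. A minimal sketch on the loopback interface (the helper names are our own):

```python
import socket
import threading

# Orderly teardown in miniature: shutdown(SHUT_WR) sends the client's
# FIN; the server sees end-of-stream as an empty read and closes its
# side, sending the second FIN back. The ACKs are handled by the kernel.

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

received = []

def serve():
    conn, _ = server.accept()
    while True:
        data = conn.recv(1024)
        if not data:                   # empty read: the client's FIN arrived
            break
        received.append(data)
    conn.close()                       # ends the server-to-host session (FIN)

t = threading.Thread(target=serve)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"last data")
client.shutdown(socket.SHUT_WR)        # step 1: client's FIN, no more sends
fin_seen = client.recv(1024) == b""    # step 3: server's FIN seen as EOF
client.close()
t.join()
server.close()
```

Note that the client can still *receive* after `shutdown(SHUT_WR)`: each direction is closed independently, which is exactly why two FIN/ACK exchanges are needed.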