In computer networks, bandwidth is the amount of data that can be carried from one point to another in a given time period, generally a second. Network bandwidth is usually expressed in bits per second (bps), kilobits per second (Kb/s), megabits per second (Mb/s), or gigabits per second (Gb/s). It is sometimes thought of as the speed at which bits travel; however, this is not accurate.
For example, in both 100 Mb/s and 1000 Mb/s Ethernet, the bits are sent at the speed of electricity. The difference is the number of bits that are transmitted per second.
A combination of factors determines the practical bandwidth of a network:
- The properties of the physical media
- The technologies chosen for signalling and detecting network signals
Physical media properties, current technologies, and the laws of physics all play a role in determining the available bandwidth.
Network connections can be symmetrical, meaning the data capacity is the same in both directions for uploading and downloading data, or asymmetrical, meaning download and upload capacity are not equal. In asymmetrical connections, upload capacity is typically smaller than download capacity.
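As a rough illustration of asymmetry, the sketch below compares upload and download times for the same payload on a hypothetical asymmetric link (the 50 Mbps / 10 Mbps rates are assumed for illustration, not taken from any real service):

```python
def transfer_seconds(size_megabits, rate_mbps):
    """Ideal transfer time for a payload on a link of the given rate."""
    return size_megabits / rate_mbps

# Hypothetical asymmetric connection: 50 Mbps down, 10 Mbps up.
# The same 500-megabit file takes 5x longer to upload than to download:
print(transfer_seconds(500, 50))  # 10.0 seconds to download
print(transfer_seconds(500, 10))  # 50.0 seconds to upload
```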
Modern networks support the transfer of huge numbers of bits per second. Instead of quoting speeds of 10,000 or 100,000 bps, networks normally express per-second performance in terms such as:
- 1 Kbps = 1,000 bits per second
- 1 Mbps = 1,000 Kbps
- 1 Gbps = 1,000 Mbps
So, a network rated in Mbps is much faster than one rated in Kbps, but slower than one rated in Gbps.
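The unit relationships above can be sketched as a simple conversion table. Note that network rates use decimal (powers of 1,000) multipliers:

```python
# Decimal (SI) multipliers used for network data rates:
# 1 Kbps = 1,000 bps; 1 Mbps = 1,000 Kbps; 1 Gbps = 1,000 Mbps.
UNITS = {"bps": 1, "Kbps": 1_000, "Mbps": 1_000_000, "Gbps": 1_000_000_000}

def to_bps(value, unit):
    """Convert a rate in any of the units above to bits per second."""
    return value * UNITS[unit]

# A 100 Mbps link carries 100,000x the bits per second of a 1 Kbps link:
print(to_bps(100, "Mbps"))  # 100000000
print(to_bps(1, "Gbps"))    # 1000000000
```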
Examples of Performance Measurements
The standard bandwidth examples are the following:
- 56 kbit/s: Modem / dialup
- 1.5 Mbit/s: ADSL Lite
- 1.544 Mbit/s: T1/DS1
- 10 Mbit/s: Ethernet
- 11 Mbit/s: Wireless 802.11b
- 44.736 Mbit/s: T3/DS3
- 54 Mbit/s: Wireless 802.11g
- 100 Mbit/s: Fast Ethernet
- 155 Mbit/s: OC3
- 600 Mbit/s: Wireless 802.11n
- 622 Mbit/s: OC12
- 1 Gbit/s: Gigabit Ethernet
- 2.5 Gbit/s: OC48
- 9.6 Gbit/s: OC192
- 10 Gbit/s: 10 Gigabit Ethernet
- 100 Gbit/s: 100 Gigabit Ethernet
Bits and Bytes
Storage capacity, such as that of hard disks and USB drives, is normally measured in units of kilobytes, megabytes, and gigabytes. In this type of usage, K represents a multiplier of 1,024 units of capacity:
- 1 KB = 1,024 bytes
- 1 MB = 1,024 KB
- 1 GB = 1,024 MB
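Because storage is measured in bytes with binary multipliers while network rates are measured in bits with decimal multipliers, estimating how long a file takes to transfer requires converting between the two. A minimal sketch, ignoring all protocol overhead:

```python
def transfer_time_seconds(size_megabytes, rate_mbps):
    """Ideal transfer time for a file.
    Storage uses binary megabytes (1 MB = 1,024 * 1,024 bytes);
    network rates use decimal megabits (1 Mbps = 1,000,000 bits/s)."""
    bits = size_megabytes * 1024 * 1024 * 8   # bytes -> bits
    return bits / (rate_mbps * 1_000_000)

# A 100 MB file over a 100 Mbps link takes a bit more than 8 seconds,
# even before any protocol overhead is counted:
print(round(transfer_time_seconds(100, 100), 2))  # 8.39
```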
The measurement of the transfer of bits across the media over a given period is called throughput. It is a measure of how many units of information a system can process in a given amount of time. Throughput generally does not match the specified bandwidth in physical layer implementations. Many factors influence it, including the following:
- The type of traffic
- The amount of traffic
- The number of network devices between source and destination
- The error rate
Latency is the amount of time, including delays, for data to travel from one given point to another. In networks with multiple segments, throughput cannot be faster than the slowest link in the path from source to destination. Even if all or most of the segments have high bandwidth, it takes only one segment with low throughput to create a bottleneck for the entire network.
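The "slowest link" rule above amounts to taking the minimum rate across all segments on the path. A sketch with hypothetical segment rates:

```python
def path_throughput(segment_rates_mbps):
    """End-to-end throughput cannot exceed the slowest segment on the path."""
    return min(segment_rates_mbps)

# Hypothetical path: two gigabit segments around one 10 Mbps segment.
# The single slow segment caps the throughput of the whole path:
print(path_throughput([1000, 10, 1000]))  # 10
```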
The average transfer speed over a medium is often described as throughput. This measurement includes all protocol overhead, such as packet headers and other data included in the transfer process. It also includes packets that are retransmitted because of network collisions or errors.
Another measurement, goodput, evaluates the transfer of usable data. Goodput is the measure of usable data transferred over a given period of time: throughput minus the traffic overhead for establishing sessions, acknowledgements, and encapsulation. It measures only the original data.
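The relationship between throughput and goodput can be sketched as follows; the 94 Mbps throughput and 5% overhead figures below are assumed for illustration only:

```python
def goodput_mbps(throughput_mbps, overhead_fraction):
    """Goodput is throughput minus protocol overhead (headers,
    acknowledgements, session setup, retransmissions)."""
    return throughput_mbps * (1 - overhead_fraction)

# Hypothetical link sustaining 94 Mbps of total traffic, of which 5%
# is overhead: the user data actually delivered is about 89.3 Mbps.
print(goodput_mbps(94, 0.05))
```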
Throughput vs Bandwidth
Bandwidth and throughput are both related to speed, but what's the difference? Bandwidth is the theoretical speed of data on the network, whereas throughput is the actual speed of data on the network. Throughput can only transfer as much data as the bandwidth will allow, and it is usually less than the bandwidth. Latency, irregularities in the signal (jitter), and the error rate all reduce overall throughput.