
Dr John Naylon, Chief Technology Officer, CBNL

One of the most common questions we’re asked by new customers is,

“How do I decide when to deploy point-to-multipoint (PMP) and when to deploy point-to-point (PTP) for my backhaul?”

This is a great question, because the two technologies are complementary.

An engineering perspective

From an engineering perspective, in making the choice between PMP and PTP for a given link, we are seeking to maximise efficiency and utilisation of the equipment and the RF channel while satisfying a set of requirements for throughput, latency and link availability. 

In economic terms, this translates into choosing the technology which gives the lowest total cost of ownership while satisfying those requirements.

Traffic characteristics

An excellent way to make the choice between PMP and PTP is to look at the characteristics of the data traffic we want to carry. 

I’m going to consider mobile broadband backhaul traffic here, because that’s what the majority of our customers use our technology for today. 

In a future post I will talk about small cell backhaul traffic.

As ever, the NGMN has some useful information we can use, in the whitepaper Guidelines for LTE Backhaul Traffic Estimation.

This paper describes (§2.2) the initially counter-intuitive result that the peak throughput for an eNodeB actually occurs, not during busy hour, but during quiet time. This is because:

“During busy times, there are many UEs being served by each cell. The UEs have a range of spectrum efficiencies, depending on the quality of their radio links. Since there are many UEs, it is unlikely that they will all be good or all be bad, so the cell average spectral efficiency (and hence cell throughput) will be somewhere in the middle.

During quiet times however, there may only be one UE served by the cell. The cell spectrum efficiency (and throughput) will depend entirely on that of the served UE, and there may be significant variations … the scenario under which the highest UE and cell throughputs occur [is]: One UE with a good link has the entire cell’s spectrum to itself. This is the condition which represents the ‘headline’ figures for peak data rate”.

Figure 4, reproduced here, illustrates this point:

Illustration of cell throughput during busy and quiet times

The paper goes on to give the following peak and mean traffic figures for a number of LTE configurations:

Figure 5: Mean and peak (95%-ile) user plane traffic per cell for different LTE configurations

Understanding the peak-to-mean ratio

What we can immediately see from this figure is that the peak-to-mean ratios of the traffic in the dominant (downlink) direction are very large, ranging from about 4:1 to almost 6:1.

This agrees well with measurements we see from real networks. 

For example, the following measurements from a very busy HSPA+ network show the peak-to-mean ratio of the backhaul traffic for each node B (on the y-axis) plotted against the peak backhaul demand for that node B (on the x-axis).

[Chart: backhaul peak-to-mean ratio per node B plotted against peak backhaul demand (Mbps)]

When traffic has a high peak-to-mean ratio like this, we call it “bursty”, as opposed to “smooth” when the peak-to-mean ratio is close to 1. 

Data traffic in general, for example on LANs and residential internet access connections as well as mobile networks, is bursty; and this presents a difficulty in carrying it efficiently on PTP links, as shown here:

Bursty traffic is hard to carry efficiently

The problem here is that a PTP link with a single traffic source (a ‘tail link’ in the backhaul network) needs to be dimensioned to carry the peak traffic, but there is only a single source of offered load. 

Therefore the utilisation of the link (or efficiency) is equal to the mean offered load divided by the capacity, or in other words the reciprocal of the peak-to-mean ratio of the traffic. 

So if my traffic has a peak-to-mean ratio of 4:1, the maximum utilisation of a PTP link carrying that traffic is ¼, or 25%. 

In the chart above, you can visualise this as all the white space below the red line being wasted bandwidth, which is provisioned but unused.

It’s important to say that this is not a failing in PTP systems in any way – it is simply that the characteristics of the traffic are not well suited to the static bandwidth provisioning that PTP provides.
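The arithmetic above can be sketched in a few lines of Python. The function name and figures are illustrative, not from any vendor datasheet: a PTP link sized for the traffic peak can never be utilised above the reciprocal of the peak-to-mean ratio.

```python
# Sketch: best-case utilisation of a PTP tail link carrying bursty traffic.
# The link must be dimensioned for the peak, so utilisation is capped at
# the reciprocal of the traffic's peak-to-mean ratio.

def max_ptp_utilisation(peak_mbps: float, mean_mbps: float) -> float:
    """Best-case utilisation of a PTP link sized for the traffic peak."""
    if peak_mbps <= 0 or mean_mbps <= 0:
        raise ValueError("rates must be positive")
    return mean_mbps / peak_mbps

# A 4:1 peak-to-mean ratio caps utilisation at 1/4, i.e. 25%.
print(max_ptp_utilisation(peak_mbps=100.0, mean_mbps=25.0))  # 0.25
```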

The advantage of a PMP system is that it can serve multiple sources of offered load simultaneously. 

The bandwidth of the shared RF channel is dynamically allocated to different sources as required. 

Conceptually, then, the peaks and troughs from different traffic sources ‘cancel out’ to some extent, as we illustrate in the following live network example showing eight nodeBs being backhauled by a single VectaStar Gigabit sector.

Multipoint backhaul: packet switched, not circuit switched

Here we are also relying on another property of the traffic, namely that peak demands for different nodeBs do not occur at exactly the same time. 

We discuss this at greater length in The Effect of System Architecture on Net Spectral Efficiency for Fixed Services.
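The 'cancelling out' of peaks and troughs can be illustrated with a toy simulation. This is a sketch under a simplifying assumption (independent, exponentially distributed offered load per node B, which is not a measured traffic model), but it shows the statistical multiplexing effect: the aggregate of several bursty sources has a much lower peak-to-mean ratio than any one source.

```python
# Sketch: statistical multiplexing gain from aggregating bursty sources.
# Assumption for illustration: each node B's offered load is an independent
# exponentially distributed random series, not a measured traffic model.
import random

random.seed(42)  # make the illustration repeatable

def peak_to_mean(samples):
    return max(samples) / (sum(samples) / len(samples))

def bursty_source(n=1000):
    # One bursty source: occasional peaks far above the mean.
    return [random.expovariate(1.0) for _ in range(n)]

single = bursty_source()

# Aggregate of eight independent sources, as in the eight-node-B
# VectaStar sector example above.
aggregate = [sum(s) for s in zip(*(bursty_source() for _ in range(8)))]

# Peaks and troughs partially cancel, so the aggregate is smoother:
print(f"single source peak-to-mean:   {peak_to_mean(single):.1f}")
print(f"aggregate of 8, peak-to-mean: {peak_to_mean(aggregate):.1f}")
```

Running this, the aggregate peak-to-mean ratio comes out well below that of a single source, which is exactly the headroom a PMP sector exploits.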

Liberating spectrum to meet growing capacity demands

A useful analogy here is to think about a bank with deposit accounts.

Banks operate a fractional reserve system, meaning that they are only able to repay a defined fraction of the total of deposits at any given time.

This therefore relies on the observation that, statistically, not everybody goes to the bank and withdraws all their savings at the same time.

When this assumption breaks down, there is a ‘run on the bank’.

In a similar way, we rely on the observation that, statistically, not every node B requires its theoretical peak backhaul throughput at the same time.

When this assumption breaks down, things are rather less dramatic, however: we simply discard some low-priority traffic.

This is perceived (if at all) by users as a temporary reduction in internet browsing speed.

Crucially, we can dimension the system in such a way as to set the probability that this occurs to a value of our choosing.
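To make that dimensioning concrete, here is a minimal sketch under an illustrative assumption (each node B independently demands its peak rate with some probability at any instant, a binomial model rather than anything from the NGMN paper). It shows how provisioning for fewer than all simultaneous peaks still keeps the overload probability as low as we choose.

```python
# Sketch: dimensioning a shared channel so the probability of overload is a
# value of our choosing. Illustrative model: each of n_nodes node Bs
# independently demands its peak rate with probability p_peak at any instant.
from math import comb

def overload_probability(n_nodes: int, p_peak: float,
                         provisioned_peaks: int) -> float:
    """P(more than provisioned_peaks nodes peak at once), binomial model."""
    return sum(
        comb(n_nodes, k) * p_peak**k * (1 - p_peak)**(n_nodes - k)
        for k in range(provisioned_peaks + 1, n_nodes + 1)
    )

# Eight node Bs, each at peak 5% of the time: provisioning the sector for
# 3 simultaneous peaks (rather than all 8) keeps the probability of having
# to discard traffic below 0.1%.
print(f"{overload_probability(8, 0.05, 3):.5f}")
```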

The advantage of fractional reserve banking is that it liberates dormant capital for further investment and lending.

Likewise, the more efficient use of RF channels in PMP systems liberates dormant electromagnetic spectrum (provisioned but unused, as in the example above) for use addressing the ever-growing capacity demands of modern mobile networks.

Conclusion

In conclusion, then, some brief rules of thumb for when to deploy PTP and when to deploy PMP are as follows:

Deploy PTP…
… when traffic is smooth (voice dominated)
… when traffic has already been aggregated
… in the middle mile of backhaul
… for long distance links
… when spectrum is uncongested or inexpensive
Deploy PMP…
… when traffic is bursty (data dominated)
… to create an on-air traffic aggregation
… for tail links (last mile)
… for dense deployments
… when spectrum is congested or expensive

 

Published 20 March 2013 in Backhaul, Small cells
Tags: Small cells, Small Cell Forum, Latency, NGMN
Julius Robson, Wireless Technology Specialist, CBNL

The Small Cell Forum has just published Release One, the first in a series which aims to provide “all you need to know” packages to help operators deploy small cells, and to help solution providers understand technology trends and requirements.

Although the main focus of this first release is the ‘Home’ environment (i.e. residential femtocells), it does also cover other types of small cells, namely enterprise, rural and most importantly for us, metro. 

As Vice Chair of the SCF’s backhaul SIG, CBNL has been heavily involved in compiling the Forum’s Backhaul white paper, which covers use cases, requirements and an in-depth discussion of the different types of solutions.

Backhaul being such a hot topic, there was intense interest and contribution from a wide range of viewpoints, so the paper has ended up as a weighty 80-page reference rather than a brief overview.

That said, the key message is quite clear: backhaul is not a barrier to small cell deployment. 

By looking in detail at the characteristics of the different wired and wireless solutions, the paper finds that, used together, they can meet the needs of the different use cases envisaged.

The work draws on the NGMN’s operator view of ‘Small Cell Backhaul Requirements’, and adds to this detailed descriptions of the different solution categories. 

Highlights for me would have to be a new discussion on latency requirements, and the section on how we split the wireless solutions into different categories – based largely on the different combinations of carrier frequency and spectrum licensing arrangement.  

A wealth of information is then available on each type of solution, but summary tables are included to concisely show how each one matches up to key requirements.

This paper should help operators select appropriate tools from the famous backhaul toolbox.

 

See below for some links to this and other essential references for your small cell backhaul reading list.

[1] “NGMN Alliance Small Cell Backhaul Requirements”, NGMN Alliance, Jun 2012

The operator consensus view of requirements for small cell backhaul.

[2] “Backhaul Technologies for Small Cells, use cases, requirements and solutions”, Small Cell Forum, Feb 2013

SCF’s new reference paper; builds on the NGMN requirements, adding detailed descriptions of the different solutions’ characteristics.

[3] “Five ways to deploy small cells and the implications for backhaul”, CBNL, Aug 2012

Describes how different operator motivations for deploying small cells (capacity, QoE, coverage) lead to different deployment styles, which in turn point to different choices from the backhaul toolbox.

[4] “Small Cell Forum release structure and roadmap”, Small Cell Forum, Feb 2013

Describes the thinking behind the SCF’s release programme, the topics addressed and future plans.

Published 11 March 2013 in Backhaul
Tags: Latency, FDD, TDD, duplexing scheme, Capacity

Dr John Naylon, Chief Technology Officer, CBNL


In a recent blog I talked about the differences between TDD and FDD systems and how to compare system capacities correctly.  

The other big difference between TDD and FDD systems is in the overall system latency.

Before going into the detailed differences, what do we mean by latency, and why is it important? 

What is “good” latency, “ok” latency and “poor” latency in mobile backhaul terms?

Latency measures how long it takes a packet of data to travel from one point in the network to another. 

It’s very common in mobile networks to talk about the round-trip latency between a node B or eNodeB (at the edge of the operator’s RAN) and the packet core.

As ‘round-trip’ suggests, this is the time taken for a packet to transit from the node B to the core and for the response to come back, not including any time spent processing the packet and generating the response. This is the same as the ‘ping time’ you often hear gamers talking about.

Round-trip latency is an important design parameter for modern mobile networks because it has a very large effect on the end user’s perceived quality of experience (QoE).

We’ve all experienced the ‘lag’ when our smartphone first tries to access the data network. 

Reducing bearer access latency on the handset to network interface (the Uu interface) in order to improve QoE was a major design goal of LTE. 

This has been so successful that backhaul latency is now under the spotlight.

The NGMN Alliance recommendation in its document NGMN Optimised Backhaul Requirements is that the total round-trip latency budget for the network between a node B and the packet core must be 10ms or less, and should be less than 5ms. 

The portion of this budget allocated to the tail link backhaul therefore has to be small.

The recent 'Backhaul technologies for small cells' study from the Small Cell Forum classifies backhaul system latency as follows:

Latency (ms)   Category
< 1            Good
1-5            Ok
> 5            Poor

 

So what does this have to do with FDD and TDD systems?

A TDD system uses the same frequency for upstream and downstream transmissions. 

So at either end of the link, a radio is essentially in “send” mode or “receive” mode. 

What happens when a packet we want to send arrives at the radio link, but the radio is in “receive” mode? Simply enough, it has to wait until the radio is back in “send” mode.

In a round-trip, the packet will have to wait for the radio to be in “send” mode twice!

In contrast, in an FDD system, we are simultaneously in “send” mode on one frequency and “receive” mode on another. So when a packet arrives at the radio link we can send it immediately.

For this reason, FDD systems in general have lower latency than TDD systems. 
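The waiting penalty can be estimated with a very simple model. This is a sketch under stated assumptions (a fixed-length frame split into one send portion and one receive portion, with packets arriving uniformly over the frame and transmitting immediately if the radio is already in send mode); real TDD schedulers are more elaborate, but the shape of the result is the same.

```python
# Sketch: extra delay a TDD link adds while the radio is in "receive" mode.
# Illustrative model: a frame of frame_ms alternates a send window of
# send_ms followed by a receive window; a packet arriving in the receive
# window must wait for the next send window. Arrivals are uniform in time.

def mean_tdd_wait_ms(frame_ms: float, send_ms: float) -> float:
    """Mean wait before the radio is next in send mode, per direction."""
    recv_ms = frame_ms - send_ms
    # Packets arriving in the send window (probability send_ms / frame_ms)
    # wait 0; those arriving in the receive window wait recv_ms / 2 on average.
    return (recv_ms / frame_ms) * (recv_ms / 2)

# A 10 ms frame split 50/50 adds 1.25 ms per direction on average,
# and a round trip pays this penalty twice.
one_way = mean_tdd_wait_ms(frame_ms=10.0, send_ms=5.0)
print(one_way, 2 * one_way)  # 1.25 2.5
```

An FDD link, always in send mode on its own frequency, sets this term to zero in both directions.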

VectaStar, the market leader in multipoint microwave, has an average round-trip latency of 0.7ms. 

In comparison, TDD systems quote figures from 4ms to 12ms one-way.

Equally importantly, the amount of delay variation introduced by FDD backhaul is lower. 

This is important when we use packet timing techniques for synchronisation, but that’s a topic for another day.

Published 04 December 2010 in Backhaul, Research
Tags: LTE, X2, White Paper, Bandwidth, Latency

The CBNL team

The rapid adoption of data services has forced the wireless industry to rethink the way that mobile services are delivered.
Compared to voice, data generates significantly more traffic and must be delivered at a much lower cost per bit in order to be profitable for operators.

Standards addressing these evolving network requirements are constantly being developed, and CBNL’s experts have put together a white paper in which they look inside the new X2 interface in further detail.

The paper analyses the tasks it needs to perform in today’s networks as well as future LTE-Advanced networks.

For these tasks we consider the implications for bandwidth and latency, and develop requirements for the backhauling of X2.

Read the white paper.