The push for mission-critical LTE interoperability

The shift to mission-critical LTE will only be a success if it is underpinned by a fully interoperable ecosystem. Sam Fenwick looks at the efforts that are being expended to make this a reality

Change is the only constant. In the rapidly evolving world of telecommunications, nothing lasts forever, with the past decade filled with stories of mergers, acquisitions and the like. As generations of technology come and go, brash new upstarts pop up to challenge the established order and the strength of industry giants can wane. After all, it can be argued that it is much easier to develop and launch a disruptive new solution if it doesn’t threaten your own well-established business models and product lines. At the same time, politics can unexpectedly crash into business, as demonstrated by the recent swirl of controversy over the use of equipment from certain Chinese telecoms vendors in critical infrastructure in the US and Australia.

Given this and the fact that public safety operators make investment decisions that will have consequences for decades, they need the infrastructure and devices they buy to work with those from other vendors. The widespread use of interoperable equipment also discourages incumbent providers from raising their prices without good cause, given that doing so might push their established customers to look elsewhere.

While interoperability is a vital part of today’s mission-critical ecosystem, it hasn’t sprung up by magic. Much hard work goes into making it a reality, first through the creation of the necessary open standards, then testing to make sure that vendors’ interpretations of the standards don’t create issues, and then finally through the certification of the devices and equipment (this article will focus on those last two points). This work is always ongoing, but has recently had to step up a gear, given the community’s focus on eventually switching from PMR to LTE.

Gotta test ’em all

One of the key activities in this regard is the series of MCPTT (soon to be referred to as MCX, given the inclusion of MCData and MCVideo) plugtests being run by ETSI. The second MCPTT Plugtests Event took place in June in Texas, with the support of TCCA, NIST/PSCR, PSTA and the European Commission, while observers included AT&T, Verizon Wireless, the UK Home Office, the US Department of Homeland Security and the French Ministry of Interior. The event consisted of 100 test cases (resulting in more than 300 combinations), and more than 2,000 tests were executed with a 92 per cent success rate (up from the 85 per cent seen with the first event in 2017).

“The roughly 47 test cases that were included in the first plugtest were also used in the second plugtest, and this really helped to increase the success rate,” said Saurav Arora, project manager (MCPTT plugtests & 3GPP WG CT3) at ETSI. “For those companies that were already passing these test cases, it [the second plugtest] was a kind of regression testing for them. The companies that attended both plugtests spent the majority of their time during the second plugtest on the test cases that they were failing in the first plugtest.”

Arora adds that this, coupled with the vendors’ greater experience with MCPTT and the fact that those failures that occurred in the first plugtest due to issues with the 3GPP test specification had been subsequently addressed by 3GPP, were the main factors responsible for the second plugtest’s higher success rate. He explains that a similar procedure is taking place in the wake of the second plugtest – “some clarifications are still needed from 3GPP, so it will be the same [procedure]; we will send a list to 3GPP Working Group CT1 and SA6, and they will let us know if it’s an issue that’s due to our [interpretation of the standard] or it’s an issue in the test specification”.

“Everyone is always fond of high success rates. For me, the value in the plugtest is in the failures. The more failures we have, the more bugs we discover. It’s better to fail now during the plugtests than in the field,” says Harald Ludwig, chair of TCCA Technical Forum. “From the first and from the second plugtest, we can say that the quality of the 3GPP standard is very high. If I remember back to when we first did the interoperability testing for TETRA, we had more issues than we have now with MCPTT, MCData and MCVideo. The quality of the standard is really good.”

Ludwig adds that TCCA is very satisfied with the overall level of participation from industry in the plugtests, though he notes that while 31 vendors participated (up from the 14 that participated in the first event), there are more that could attend, such as those based in Korea, and he says “we’re confident the others will join as well”.

Arora notes that the popularity of the plugtests has created one issue – as the number of vendors rises, the number of possible vendor-to-vendor testing combinations grows steeply (n vendors yield n(n–1)/2 possible pairings). While this is of course a good thing in that it means the testing more closely resembles the real world, it creates a headache for the organisers in that it becomes harder for every vendor to test with all the other vendors in the time available.
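The scaling problem Arora describes can be made concrete with a short sketch. This is illustrative only – the vendor counts of 14 and 31 are the participation figures quoted earlier in the article, and the function name is our own:

```python
from itertools import combinations

def pairwise_sessions(vendors):
    """Return every distinct vendor-to-vendor test pairing.

    With n vendors there are n*(n-1)/2 pairings, so the testing
    burden grows quadratically as participation rises.
    """
    return list(combinations(vendors, 2))

# 14 vendors (first plugtest) vs 31 vendors (second plugtest)
print(len(pairwise_sessions(range(14))))  # 91 pairings
print(len(pairwise_sessions(range(31))))  # 465 pairings
```

More than doubling the number of participants roughly quintuples the number of pairings to schedule, which is why fitting every combination into a single week-long event becomes impractical.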

This, he says, is why ETSI has opted to follow the second plugtest with a remote-testing-only plugtest (which will focus on those test cases that don’t require the testers to be face-to-face and can be done over a VPN), followed by a fourth plugtest where the vendors will meet to work on the test cases that can’t be done remotely; ie, those that require LTE infrastructure or user equipment. Ludwig adds that this approach also addresses a request from some of the vendors for more frequent testing opportunities (as opposed to having to wait a year between testing sessions).

Arora says preparation for the third plugtest event will take place in October and November, with the remote sessions taking place in December and January. Registration for the fourth plugtest will open in January 2019, preparation for it will take place between February and May and the event is planned to take place in June, with a venue (likely to be somewhere in Asia) to be confirmed by the end of 2018.

One of the areas that will be explored in greater detail in future plugtests is MCData and MCVideo functionality. Arora says that little testing in this regard took place in the second plugtest, with much of it focusing on file distribution and the Short Data Service (SDS) mechanism. “However, as the second plugtest was based on the Release 14 test specification, this was an improvement on the first, which was based on Release 13.”

He adds that the resilience of the MCVideo service in terms of accommodating packet loss is planned to be tested in the fourth plugtest event. Part of the reason why the test specifications for MCData and MCVideo are still at an early stage is because “there are still a lot of things that are pending from a 3GPP point of view. We have recorded quite a few issues, which some of the vendors who participate in 3GPP have raised, particularly with regard to MCVideo transmission control, and we expect that once these issues are resolved we will cover MCData and MCVideo in more detail.”

Another area in which the interoperability testing will have to wait for the standards work in 3GPP to be completed is PMR/LTE interoperability. “This has experienced some delays in 3GPP and we expect that it might be completed in December this year or March next year,” says Arora. He adds that writing the specification has been challenging for the companies involved and expects that once it is complete, the nature of the plugtest events will change – “it’s almost a different plugtest when you bring in all the TETRA and P25 vendors”. Given the above, he expects that the MCX plugtests programme will run for quite some time. To make it easier for the participating vendors, the NDA that they sign for the MCX PlugTests Programme (as the event series will be called from the third event onwards) will remain in effect across all subsequent events.

There is also the question of whether the plugtests will assess the interoperability of ProSe, the feature that is intended to be a substitute for TETRA’s Direct Mode (DMO), which allows for back-to-back communications in the absence of network infrastructure. While Ludwig notes that the ProSe standard has been ready for some time, he says that he has only heard rumours of one device manufacturer supporting it, though even if that is the case, interoperability testing cannot take place until there is more than one device available that supports ProSe.

Device certification

As we have discussed, the objective of the MCPTT plugtests is to identify areas where vendors might be interpreting the standards in a different way or areas of ambiguity in the standards, so that these can be resolved, thereby removing any barriers to interoperability.

However, there is also a need for device certification, as Chris Hogg, Global Certification Forum (GCF) programme manager, explains. “Certification from an industry organisation provides a benchmark against what has been agreed by the industry as a whole, rather than just competing products. So in the case of MCPTT, a plugtest will only look to verify the feature and not the whole device. Moving a product or service from development to production or deployment with industry certification does reduce costs for everyone involved and provides testing against independently validated test equipment.”

The GCF is an independent body whose device certification process is used by the wider telecommunications industry. Back in January, it announced that it would extend its certification scheme to LTE-based critical communications devices and that a new Work Item covering MCPTT over LTE was approved by GCF’s Steering Group in December. Hogg says this work item is currently in development and that a prioritised conformance test case list has been approved, with the next stage being for test equipment vendors to get these tests validated on their test equipment. Once a suitable level of validated test cases is available on commercial test equipment, GCF will include this feature in its certification.

He adds that “a key challenge is how to encourage the test industry to invest in developing and implementing these MCPTT tests in their products”, and says the timeframe in which the GCF’s MCPTT certification scheme will become available “will depend on suitable test and equipment manufacturers coming forward to validate their test equipment”.

TCCA’s Ludwig provides more detail: “Test equipment manufacturers don’t seem to be very interested in implementing the MCPTT scripts on their machines.” He adds that these scripts are required for the GCF device testing and certification process. The alternative to automated testing with these scripts would be manual testing, which could also provide the required confidence that a device’s implementation is MCPTT-compliant, but would be labour-intensive and time-consuming.

Hogg says the GCF is addressing this challenge by working with TCCA to raise awareness of the importance of using GCF-certified devices in mission-critical networks, thereby aiding the business case for the test industry to invest in this new feature. “We are currently very involved with TCCA and are looking at how we can raise awareness throughout the mission-critical industry regarding the importance of using GCF-certified devices in mission-critical networks and services,” Hogg adds. The two organisations have updated a memorandum of understanding (MoU), which commits them to work together in the interest of their respective members in areas related to 3GPP mission-critical services.

Hogg explains: “GCF is a self-certification process so the manufacturer can assess the conformity of the product to our certification criteria. Once involved, the scope of what’s required for certification is very clear. MCPTT is to be part of regular GCF certification, meaning that a device must meet all certification requirements for the technologies, bands and features it supports.”

He adds: “GCF is market-driven based upon inputs from its members. When test specification work has been completed in 3GPP, it is anticipated that members will add the MCData and MCVideo features to GCF certification as these are important features for critical communications. Exchanging video and data between emergency scenes to control rooms means that human life can depend on the reliability of the equipment and network connection. So again, by bringing this into scope, the entire system is covered (eg, radio, mobility management, LTE protocols, etc).”

In its announcement in January, the GCF highlighted the way that mobile virtual network operator (MVNO) structures “are being considered for several national, regional and municipal critical communications services that are being deployed on dedicated public safety and/or commercial LTE networks” – ASTRID’s Blue Light Mobile service being one of the better-known examples. The GCF has extended its membership categories to include MVNOs, which it says “will make certification accessible to MVNOs including public safety network operators”.

Hogg says GCF membership provides MVNOs with access “to a plethora of information on devices that are certified and suitable for mission-critical applications. We have ample facts and figures that prove our members’ products have up to 20 per cent fewer dropped calls than non-certified products, and in an emergency situation, where human life depends on it, communication simply cannot fail.

“GCF membership will enable mission-critical operators to ensure that relevant bands and features are included in the certification programme and that everything possible has been done to guarantee the delivery of the highest quality service for users.”

What can the wider industry do to support the work of the GCF? “The more organisations that participate and help to define certification for the mission-critical ecosystem, the more awareness we will generate,” says Hogg. “We need to demonstrate the advantages in terms of quality and service reliability of ensuring the interoperability of mission-critical devices with the networks they use.

“Given that often life is at stake, it is paramount to ensure mission-critical users can rely on the network and service. A core part of ensuring this experience is to check the device correctly interoperates with the network/service. GCF certification – already trusted for almost 20 years by the mobile industry – enables this check to be performed. We would strongly encourage the emergency services network operators, device suppliers and the test industry to join GCF, certify their products and use GCF-certified products in their networks as part of a wider strategy to build reliable, high-quality services for critical communications users.”

As we have seen, the efforts to ensure that the mission-critical community has the interoperable LTE-based equipment and services that it requires are well under way, but as new features become standardised in 3GPP, more work needs to be done, and it is to be hoped that having come so far, the industry doesn’t falter as we enter the final furlong.