In a highly dynamic technological sector led by powerful multinationals that place considerable emphasis on branding, any international presence and recognition earned by small and medium-sized businesses is both noteworthy and worth applauding.
This week I have written the article below for publication in “ComunicacionesHoy”, a well-known telecommunications magazine and online media website in Spain. As the article has only been published on ComunicacionesHoy in Spanish, Teldat has translated it into English so that all our non-Spanish-speaking social media followers can read it too. I hope that you enjoy my article.
Even though, at first glance, opting for Internet access lines to build corporate networks seems like a sound decision cost-wise, the truth is this choice brings far more benefits when it comes to control, flexibility and user-friendly management.
By taking this much wider point of view, carriers have the opportunity to put together a very attractive offer and ensure their clients benefit from the enormous range of possibilities this type of network has to offer.
Imagine we undertake a survey, asking the CIOs of large companies with a wide network of offices or remote points (banks, insurance and travel agencies, distribution chains, etc.) what they would want to improve in their company communication network. Requests for greater bandwidth would certainly arise, together with the possibility of selecting the network/access technology that best suits each remote site, the simultaneous and efficient use of multiple redundant accesses, greater network intelligence to adapt dynamically to real-time communications, and the full automation of operating tasks (to name but a few). Moreover, all respondents would ask to pay a fraction of the price they are currently paying (for communications based on MPLS networks) without compromising either SLA or security.
Sounds too good to be true? It isn’t, as you’ll see.
The answer to CIOs’ prayers is SD-WAN, a communications architecture made up of different pieces of technology (some new, others not) that work in concert to satisfy the most exacting of CIO aspirations. The SD-WAN base consists of Internet lines plus an additional layer that provides the SLA guarantees those accesses lack, obtained either through traffic engineering across several Internet links or by retaining an MPLS access (with far less bandwidth, reserved for critical corporate traffic). SD-WAN is essentially made up of the following:
a) Virtual Private Network (VPN) over any IP access, MPLS or Internet, offering complete freedom to select your access technology (fiber, DSL, LTE, etc.), the highest security and without limiting the number of accesses used at remote points.
b) Traffic selection, in order to identify the applications that use the network and apply different policies (depending on the criteria of each application in relation to business).
c) Real time quality analysis of the access to remote divisions, based on traffic monitoring and usually through synthetic traffic (polling).
d) Network intelligence that makes it possible to dynamically adapt different applications over different accesses, depending on the policies defined for said applications and the state of the accesses.
e) Visibility of network behavior with respect to applications and the use of said accesses.
f) Centralized network control, which permits unified global parameterizing of behavior and automated provision of remote point elements.
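Components (c), (d) and (f) above can be pictured as a simple policy engine: given measured link quality and per-application policies, pick the best access for each traffic class. The following is a minimal sketch of the idea, not any vendor's actual implementation; all names, links and thresholds are hypothetical.

```python
# Minimal sketch of SD-WAN path selection: for each application class,
# choose the cheapest access link that satisfies its SLA policy.
# All link metrics, policies and costs below are illustrative.

LINKS = {
    "mpls":  {"rtt_ms": 20, "loss_pct": 0.0, "cost": 3},
    "fiber": {"rtt_ms": 15, "loss_pct": 0.1, "cost": 2},
    "lte":   {"rtt_ms": 60, "loss_pct": 1.5, "cost": 1},
}

# Per-application SLA policies: maximum tolerated RTT and loss.
POLICIES = {
    "voip": {"max_rtt_ms": 30,  "max_loss_pct": 0.5},
    "erp":  {"max_rtt_ms": 100, "max_loss_pct": 1.0},
    "bulk": {"max_rtt_ms": 500, "max_loss_pct": 5.0},
}

def select_link(app, links=LINKS, policies=POLICIES):
    """Return the cheapest link meeting the app's SLA, or the
    cheapest link overall (best effort) if none qualifies."""
    policy = policies[app]
    eligible = [
        name for name, link in links.items()
        if link["rtt_ms"] <= policy["max_rtt_ms"]
        and link["loss_pct"] <= policy["max_loss_pct"]
    ]
    pool = eligible or list(links)
    return min(pool, key=lambda name: links[name]["cost"])

print(select_link("voip"))  # fiber meets the SLA and is cheaper than MPLS
print(select_link("bulk"))  # LTE is good enough for bulk traffic
```

In a real SD-WAN controller the link metrics would be refreshed continuously from the synthetic polling described in (c), so the selection adapts as access quality changes.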
Actually, much of this technology isn’t new. Applying traditional techniques you can, for example, use Internet lines as an access method. Secure VPNs can then be employed to balance applications (depending on their granularity or access status), while obtaining greater visibility of network usage. However, implementing such a network using traditional methods would be a Herculean task! Configuring separate network elements so that they operate as a single network is so complex that just attempting it would be crazy. Here, however, is where the SD of SD-WAN networks really comes into play.
SDN (Software-Defined Networking) has clearly demonstrated its value in data centers, integrating different systems, automating both management and service chains, and providing a virtual view of the network that enables global management from a single point.
This same idea, applied to WAN, is what holds together the different pieces of this puzzle with apparent simplicity. While complexity still exists, it is hidden under a layer of abstraction that facilitates both the implementation and management of an SD-WAN network. Like SDN management, SD-WAN supports the simple, unified and central parameterizing of network behavior to adapt to new applications or modify existing application policies.
In this context, three SD-WAN product supplier groups have emerged: one being companies evolving from the connectivity sector towards SD convergence; a second, in direct contrast, proposing consolidated SDN solutions and extending these solutions towards the wide area network, and finally a third group, involving start-ups specifically focused on SD-WAN. Whatever the case however, analysis of this solution must be rigorous and should, at the very least, keep the following in mind:
a) To use open standards/protocols to ensure the network is not reduced to a single supplier.
b) Scalability for both network design and speed.
c) Capacity to cover network termination features, unifying access and SD-WAN functions.
d) Appropriate traffic granularity to ensure the balance of applications (complying with business parameters) without compromising network performance.
e) Active polling to check the health status of network accesses regardless of traffic.
f) Centralized management tools for unified network design, provisioning and management.
g) Cost of network elements.
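Point (e) above, active polling, deserves a closer look: from a window of synthetic probes a device can derive loss, average RTT and jitter, and classify each access. A minimal sketch of such a health check, with purely illustrative thresholds:

```python
# Sketch of link-health assessment from synthetic probe results:
# given the RTTs of probes that returned and the total probes sent,
# derive loss, average RTT and jitter, and classify the access.
# Thresholds are illustrative, not from any real product.

from statistics import mean

def link_health(rtt_samples_ms, probes_sent):
    """Summarize one polling window. rtt_samples_ms holds the RTTs of
    probes that came back; the remainder are counted as lost."""
    lost = probes_sent - len(rtt_samples_ms)
    loss_pct = 100.0 * lost / probes_sent
    avg_rtt = mean(rtt_samples_ms) if rtt_samples_ms else float("inf")
    # Jitter as mean absolute difference between consecutive RTTs.
    jitter = (
        mean(abs(a - b) for a, b in zip(rtt_samples_ms, rtt_samples_ms[1:]))
        if len(rtt_samples_ms) > 1 else 0.0
    )
    status = "degraded" if loss_pct > 2 or avg_rtt > 150 else "ok"
    return {"loss_pct": loss_pct, "avg_rtt_ms": avg_rtt,
            "jitter_ms": jitter, "status": status}

print(link_health([20, 22, 21, 25], probes_sent=4))   # healthy access
print(link_health([180, 200], probes_sent=5))         # lossy, slow access
```

The "degraded" verdict is exactly what the network intelligence of point (d) consumes when deciding to move an application to another access.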
SD-WAN is still in the early stages of development, and the real number of implementations is low. However, it’s definitely on the radar of most IT departments planning network migrations. Significant SD-WAN growth is expected in the near future, first as a complement to MPLS and, in the long term, as an alternative and preferred network.
System fine-tuning is an unavoidable necessity that requires knowing exactly what happens within a system. For instance, it is almost impossible to tune a combustion engine without detailed information on revolutions, temperature, compression, etc. Such information is vital for any system, be it a smart grid, air traffic control or even our own bodies.
In IP networks, real time information, more accurately known as visibility, is essential for smooth efficient system operations together with network and system dimensioning. This involves tasks such as problem diagnostics, analyzing communication links, or nodes, for congestion issues during peak traffic or, in more complex cases, detecting traffic abnormalities (viruses, worms or other cyber-attacks).
To achieve IP network visibility, remote systems (routers, switches or probes) generate real-time reports: they analyze packets to identify layer 3 and 4 sessions, accumulate statistics per session, and periodically deliver them to a receiving collector. This feature appears under various brand names (NetFlow from Cisco, Rflow from Ericsson, NetStream from 3Com/HP/Huawei, etc.), but is most commonly known as Netflow (not to be confused with sFlow, which, although similar, samples only a portion of the total, unclassified traffic and flows, for statistical purposes only).
The latest Netflow release (9) provides in-depth information on each session, with up to 71 different parameters per flow (bytes, packets, addressing, protocol, TOS, interfaces, Autonomous System, next hop, VLANs etc.).
With the aim of both standardizing and improving Netflow, the IETF published the first IPFIX (IP Flow Information Export) standard in 2008. IPFIX is basically Netflow with extensions: it keeps the essence of the protocol and the Netflow information format (it is sometimes referred to as Netflow v10), while aggregating more data and allowing each implementation to export proprietary parameters. IPFIX thus opens up a world of possibilities, such as mail servers delivering data on source/destination addresses, subject, attachments and bytes, or web servers exporting flow records on viewed pages, browsing time per page and even access history from other countries.
With IP networks, Netflow provides visibility up to layer 3 and 4 (ports and protocols) while IPFIX extension potentially goes much further, providing data on applications through Deep Packet Inspection (DPI) for example.
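What a collector does with these exported records is, at its simplest, aggregation. The sketch below models a few flow records as dictionaries whose fields mimic common NetFlow v9 / IPFIX elements (the field names and addresses are illustrative, not an actual export format) and derives a "top talkers" report:

```python
# Sketch of collector-side aggregation: sum exported byte counters
# per source address to find the heaviest senders ("top talkers").
# Record layout loosely mimics NetFlow v9 / IPFIX fields; the names,
# addresses and byte counts are illustrative.

from collections import defaultdict

flows = [
    {"src": "10.0.0.5", "dst": "172.16.0.9", "proto": 6,  "dport": 443, "bytes": 120_000},
    {"src": "10.0.0.5", "dst": "172.16.0.7", "proto": 6,  "dport": 80,  "bytes": 30_000},
    {"src": "10.0.0.8", "dst": "172.16.0.9", "proto": 17, "dport": 53,  "bytes": 2_000},
]

def top_talkers(records, n=2):
    """Sum exported byte counters per source address and return the
    n heaviest senders, largest first."""
    totals = defaultdict(int)
    for record in records:
        totals[record["src"]] += record["bytes"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

print(top_talkers(flows))  # [('10.0.0.5', 150000), ('10.0.0.8', 2000)]
```

Real collectors apply the same principle at much larger scale, keyed by any combination of the exported fields (application, interface, VLAN, Autonomous System, etc.).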
Application visibility is becoming more prominent as networks evolve from transport-based architectures (mainly MPLS, where applications are locked in) to application-based architectures, in which the transport network adapts to the applications rather than the other way around, with new architectures based on hybrid MPLS and/or Internet accesses materializing. Such networks are known as Hybrid or SD-WAN (Software-Defined Wide Area Network).
These application-oriented networks have arisen, in part, to accommodate the wave of new on-demand (client) services, such as mobility and BYOD, video, the Internet of Things (IoT), cloud applications, etc. Tools providing application visibility give the network greater insight into applications, particularly those running over HTTP/HTTPS, which has become the universal application support platform (HTTP as the new TCP).
How to tackle visibility in new Hybrid Networks
a) Distributed Intelligence: Activating DPI and IPFIX in remote routers.
While DPI has all the advantages of obtaining flow information at the source, which is necessary to determine an application, it unfortunately has an enormous impact on resources in each and every network router.
b) Centralized analysis: Through Netflow and intelligent collector.
A priori, collectors only have session statistics available and, therefore, less information than distributed intelligence. However, methods such as reverse name resolution and heuristic algorithms can provide very similar information to the above case; a good example is the Polygraph collector (a locally developed high-tech product). This approach makes far better use of network resources, scales well, and still yields the necessary data. A definite advantage is that Netflow has far less impact on routers so, practically speaking, this is an excellent choice.
Future of Hybrid Networks
The evolution of Hybrid, or SDWAN, networks to pure SDN, will bring about direct dynamic control over network behavior for applications, where visibility will play a vital role. Teldat has a wealth of experience on implementing hybrid networks and visibility, providing our clients with versatile networks, not just effective for today, but also for the future.
We live in a digital world. Entertainment, work, information, social relations… today everything is digital. The benefits are obvious. Digital information is much easier to store, transfer and handle than analog and is more powerful. If we think about it we can find many fields where digitalization has had a remarkable impact. In this article, however, we will only consider the impact on telephone networks.
Regardless of whether the telephone was invented by Alexander Graham Bell or Antonio Meucci (or…), it is clear that it started out as analog, and it remained so for many years. Logically, improvements were made over the years but being inherently analog in operation until the mid-60s, deficiencies in the quality of transmitted voice were inevitable. This was especially the case over long distances that required signal regeneration at intermediate stages, leading to information loss and the introduction of noise. The digitalization of the telephone network was a breakthrough in this regard, since the digital signal is transmitted unchanged regardless of the distance and of the intermediate stages required between sender and receiver.
Integrated Services Digital Network (ISDN)
While the move to a digital network paved the way for its use with a range of other services in addition to voice, the final leg, the last mile, also needed to be digital. This step took place many years later with Integrated Services Digital Network, ISDN. As the name suggests, ISDN allows different services to be used over the telephone network on a single line, digital of course.
The advantages of ISDN are clear: firstly, the sound quality (which is why even today they are still widely used by the radio industry), secondly, the extra features (rapid call setup, support for multiple terminals on the same line or direct inward dialing and caller ID), and thirdly, the additional services such as data or video transmission.
ISDN was introduced by CCITT (ITU-T) in 1988 and had its golden moment during the 90s, being deployed with varying success in countries around the world such as Japan, Australia, India and the United States. The biggest impact was in Europe, however, in countries like Norway, Denmark, Switzerland and above all Germany, which had 25 million channels (29% penetration) and one in five lines installed worldwide.
In the late 90s and early twenty-first century two events mark the decline of ISDN; on the one hand, ISDN cannot keep up with market demands for greater speed, and on the other, the cost of Digital Signal Processors (DSP), which allow more advanced line modulations, lowers significantly. It is the beginning of ADSL and the decline of ISDN.
ISDN, the new paradigm in communications
During the first decade of the twenty-first century, ISDN gradually loses ground to ADSL, and from 2010 all ISDN service carriers gradually announce its withdrawal. In 2010, for example, NTT announces its intention to migrate all ISDN phone lines in Japan to IP technologies, in 2013 Verizon decides not to install any more ISDN lines in the USA, and in 2015 BT announces its intention to discontinue the network in the UK. Curiously, however, Deutsche Telekom (DT) in Germany adopts the most aggressive stance. By far the world’s largest ISDN provider, it has already begun migration to ADSL/IP technologies, having set an aggressive horizon of 2018 for cutting off ISDN completely.
All carriers with active ISDN networks will no doubt be following the transition of the German DT network very closely and it will likely mark the way forward. DT’s commitment is to network modernization and improving customer service while minimizing the impact on the customer. The proposal, therefore, is to offer data services and voice over IP on the same telephone line (ADSL/VDSL) but at the same time giving the customer the opportunity to keep their existing ISDN infrastructure, emulating the ISDN lines from the EDC to their current ISDN PBX.
The use of xDSL and IP services allowing the customer to maintain their internal ISDN infrastructure practically eliminates any impact on the customer, who controls the evolution of the network to an integrated and up-to-date service.
This is an ambitious project and key for Deutsche Telekom. For this reason, following a rigorous selection process, the company has forged close relationships with partners who have proven ability in providing the solvency, experience and agility needed. Within this framework, Teldat has been entrusted by Deutsche Telekom with the task of supplying the access devices.
The performance, power, energy consumption and useful life of routers are all measurable factors in product testing. The data, however, depend on the conditions under which manufacturers carry out the tests. Transparency is vital!
After 10 years and more than 400,000 km, it is finally time to say goodbye to my little Citroen Xsara. Who brought up planned obsolescence? It is with a degree of nostalgia that I say goodbye because, to be honest, the car has given me more joy than trouble. One aspect I personally value when looking for a replacement is low fuel consumption and, at 40,000 km per year, it certainly needs to be kept in mind.

The first step, of course, is to get hold of the official manufacturer data. In doing so, the technological advances of recent years become apparent: we quickly find consumption figures below 5 liters per 100 km in mid-size cars with plenty of engine power, and even 4 liters per 100 km or less if we are willing to make do with engines of barely 100 horsepower. And that’s without even considering electric or hybrid vehicles; technology is most welcome!

The second step is the Internet. Specialized websites with comparisons, trends, experiences, tests, user forums… Sometimes too much information can lead to misinformation. But there is a certain unanimity when it comes to consumption: official manufacturer figures are usually lower (sometimes significantly) than what users report. Well, we haven’t discovered penicillin here; I think it’s common knowledge. So… are we being deceived by manufacturers? Here I believe my impression as a user is shared by most: manufacturers are probably not deceiving us, they are only varying “the conditions”.
The same can be extrapolated to numerous cases where products are rated by quantitative data (a household appliance’s consumption, a mobile phone’s battery life, a device’s expected life span…). It may be self-evident, but the conditions under which the data are obtained are almost as important as the data themselves, though this is not always given the attention it deserves.
The function of product testing
In telecommunications, two factors determine the suitability of an access router, especially with regard to remote access to a corporate network. One of these factors is qualitative, the available interfaces and functionality, while the other is quantitative, the speed at which the device is capable of exploiting them. The first factor has often determined the second; witness, for example, the transition from networks based on serial lines (X25, FR, PP…) to ISDN, subsequently ADSL and VDSL, and finally Ethernet and fiber (Gigabit) connections.
In virtually all cases the change of access method determined its speed and thus the power required of the device. However, only a small fraction (usually 100 Mbps, or 10%) of the capacity of typical connections in central offices (Ethernet and fiber) is currently being exploited, leaving a long way to go. Thus the second factor (the power of the device, be it speed, performance, capacity, throughput…) takes on a key role in determining whether the product can be expected to deliver a reasonable period of useful life. At the same time, it is clear that an appropriate level of power ensures adequate performance during this lifetime, and both considerations affect business development and the income statement.
Unfortunately, Internet resources collecting user information on professional routers are much more limited than those for vehicles, mobile phones, and household appliances. Often, the only option is to rely on the data published by manufacturers. And this is what I was leading up to because, as product manager, I am very familiar with the problem…
How do you measure a router’s performance?
The conditions under which an access router’s maximum performance is determined are absolutely crucial, and they have a far greater impact on the outcome than in the case of the fuel consumption I spoke about at the beginning of this article. Hence, apart from stating that a router supports XXX Mbps, it is important to specify, among other things, whether the figure is unidirectional or bidirectional (whether XXX Mbps are supported in only one direction or in both), since this alone can change the published value by a factor of two.
Another important factor is the packet size used in the test to obtain the XXX Mbps. This is because a packet’s switching load is largely independent of its size; to put it another way, the power of a device is determined by the number of Packets per Second (PPS) it is capable of processing. Thus, a test with 100-byte packets will give one result, while the same test with 1500-byte packets will produce a figure that is 15 times higher.
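The relationship described above is simple arithmetic: if forwarding capacity is fixed in packets per second, the Mbps figure scales linearly with packet size. A quick illustration (the 100,000 PPS capacity is an arbitrary example, not a figure for any real device):

```python
# If a router's forwarding capacity is fixed in packets per second (PPS),
# the throughput figure obtained in a test scales linearly with packet size.

def throughput_mbps(pps, packet_size_bytes):
    """Line rate achieved when forwarding `pps` packets of a given size."""
    return pps * packet_size_bytes * 8 / 1_000_000

PPS = 100_000  # illustrative forwarding capacity

small = throughput_mbps(PPS, 100)   # test with 100-byte packets
large = throughput_mbps(PPS, 1500)  # same test with 1500-byte packets

print(small, large)   # 80.0 1200.0 -> the same device, 15x apart
```

This is why a datasheet quoting only "XXX Mbps" without the packet size says very little on its own.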
Finally, another important circumstance is the configuration loaded in the router, which can also have an effect that is just as important as the others.
In order to avoid these problems, a test pattern under given conditions has been standardized, defined in RFC2544 and RFC6815. In an ideal world, manufacturers would be able to use these standards and compare the published data directly without any uncertainty. A slight downside to these tests is that they don’t provide a single result, but rather a set of results obtained from a set of conditions. But that’s another story for another day.
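The core of the RFC 2544 throughput procedure is a search for the highest offered rate at which the device forwards every frame with zero loss. The sketch below shows the binary-search idea with a simulated device under test; a real benchmark uses traffic-generator hardware and repeats the search per frame size, as the RFC prescribes.

```python
# Sketch of the RFC 2544 throughput search: find the highest offered
# rate at which the device under test (DUT) forwards all frames with
# zero loss, via binary search. The DUT is simulated here; a real test
# drives traffic-generator hardware and runs once per frame size.

def zero_loss(offered_mbps, dut_capacity_mbps):
    """Simulated trial: this DUT drops frames above its capacity."""
    return offered_mbps <= dut_capacity_mbps

def rfc2544_throughput(max_rate_mbps, dut_capacity_mbps, resolution=1.0):
    """Binary-search the zero-loss rate to within `resolution` Mbps."""
    lo, hi = 0.0, max_rate_mbps
    while hi - lo > resolution:
        mid = (lo + hi) / 2
        if zero_loss(mid, dut_capacity_mbps):
            lo = mid   # no loss observed: try a higher rate
        else:
            hi = mid   # loss observed: back off
    return lo

# A 1 Gbps interface in front of a DUT that really forwards 437 Mbps:
print(rfc2544_throughput(1000.0, 437.0))  # converges to ~437 Mbps
```

Because each frame size yields its own zero-loss rate, the standard produces a set of results rather than one number, which is exactly the "slight downside" mentioned above.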
Energy efficiency: powerful, high-performance routers
The performance of the Teldat routers is usually far superior to similarly-priced competitors’ routers. Sometimes, four or five times more powerful.
Furthermore, we try to be as objective as possible when providing performance data, always indicating bidirectional figures (the stated rate on each side simultaneously), using the IMIX packet size (a statistical average of Internet traffic) and a configuration of average complexity (ACLs + QoS), i.e. under conditions close to the real world, so that there are no surprises as in the case of gasoline consumption…
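For readers unfamiliar with IMIX: definitions vary between test houses, but a widely quoted "simple IMIX" mixes 64-, 594- and 1518-byte frames in a 7:4:1 ratio. Its weighted average works out as follows (the ratio and sizes are the commonly cited ones, not a claim about any specific test suite):

```python
# Average frame size of a "simple IMIX": 64-, 594- and 1518-byte
# frames mixed in a 7:4:1 ratio (definitions of IMIX vary; this is
# one commonly quoted variant).

SIMPLE_IMIX = [(64, 7), (594, 4), (1518, 1)]  # (frame size, weight)

def imix_average(mix):
    """Weighted average frame size of a packet-size mix."""
    total_pkts = sum(count for _, count in mix)
    total_bytes = sum(size * count for size, count in mix)
    return total_bytes / total_pkts

print(round(imix_average(SIMPLE_IMIX), 1))  # 361.8 bytes per frame
```

Testing with a mix like this sits between the flattering 1500-byte case and the punishing small-packet case, which is what makes the published Mbps figures comparable to real traffic.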