Without going into technology specifics such as the structure of a memory cell, the distinguishing characteristics of DRAM versus SRAM (static RAM) are basically twofold: (1) the full address is usually presented to SRAM just once, while it is multiplexed to DRAM, first the row and then the column; (2) DRAM also needs to be refreshed periodically to maintain the integrity of stored data.
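As a sketch of point (1), the split of a flat address into the row and column halves that a DRAM controller drives in two phases can be illustrated in Python. The 8/8 bit split below is an assumption for illustration; real parts vary in row/column widths:

```python
def split_dram_address(addr: int, col_bits: int = 8) -> tuple[int, int]:
    """Split a flat address into the (row, column) pair a DRAM
    controller would present in two phases (row strobe, then
    column strobe).  col_bits = 8 is an illustrative assumption."""
    row = addr >> col_bits               # upper bits, latched first
    col = addr & ((1 << col_bits) - 1)   # lower bits, latched second
    return row, col

# Example: a 16-bit address presented as two 8-bit halves
row, col = split_dram_address(0xA5C3)
print(hex(row), hex(col))  # 0xa5 0xc3
```

An SRAM, by contrast, would simply receive the full 16-bit value on its address pins in a single step.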
Firstly, the move of IT infrastructure to the cloud means our current understanding of layer 3 (IP) network traffic is insufficient to characterize the applications transmitting over it: application servers had fixed, known IP addresses in traditional data centers, whereas IP addressing in the cloud is no longer controlled by the organization using these services.
Secondly, far more applications (both corporate and personal) are in circulation today than a few years ago. Said applications have not, in general, been designed with bandwidth optimization in mind and all have different needs and behaviors. This means some applications can (and do) adversely affect others if the network is incapable of applying different policies to prevent this.
The vast majority of applications use HTTP and HTTPS for communication, mainly to evade, or minimize, possible negative effects arising from security policies or IP addressing (NAT) over the network. This means the transport layer (TCP or UDP port) is unable to adequately identify network applications, as they tend to use the same ports (80 for HTTP and 443 for HTTPS).
To further aggravate the problem, companies must provide connectivity to an enormous array of ‘authorized’ local devices. Remote local networks today, unlike the traditional single terminal of yesterday, are more varied and far less controlled: wireless offices, guest access, home access, BYOD, IoT, etc. Consequently, the difficulties in analyzing traffic, caching systems and CDNs also escalate.
Finally, this greater diversity increases security risks: viruses, malware, bots, etc. These, in turn, tend to generate “uncontrolled” network traffic that needs to be detected and characterized. At this point, the close link between visibility and security at the network level raises its head (with all its repercussions and analysis); a subject we’ll tackle another day.
The above points make it very clear that analyzing network traffic has become more and more intricate over the last few years, boosting the need for new tools with greater capacity. Otherwise, we simply won’t know what is going through our network, placing it at risk and unnecessarily increasing its upkeep. Given the tremendous amount of information handled, using tools that are able to intelligently filter the information received and provide a high level of granularity in analysis and reports is absolutely essential. It’s here where big data analysis technologies bring huge advantages when compared to traditional tools.
Well aware of these recent difficulties, users need application visibility and control solutions that meet the following new needs:
- Said solutions must be able to scale down to small and medium corporate offices, and offer a sound compromise between the CPU requirements (cost) needed for DPI (Deep Packet Inspection) and the number of detected applications (customer service and quality of application detection).
- Integrating intelligent detection in remote routers, together with a centralized management tool, versus current market solutions based on proprietary remote point polling and (equally proprietary) hardware appliances, allows for excellent detection granularity and affordable operation, scalable to any size of network.
- Instead of opting for proprietary solutions, it’s crucial to use suppliers who adopt standard protocols to communicate visibility information (Netflow/IPFIX, for example). This allows customers to use their own information collection methods if they so wish.
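As an illustration of how such standard export formats let a customer run their own collection, here is a minimal Python sketch that decodes the fixed 24-byte header of a NetFlow v5 export datagram. The field layout follows the published v5 format; the sample datagram below is synthetic:

```python
import struct

# NetFlow v5 export header: 24 bytes, network byte order.
V5_HEADER = struct.Struct("!HHIIIIBBH")

def parse_v5_header(datagram: bytes) -> dict:
    """Decode the fixed header of a NetFlow v5 export datagram."""
    (version, count, sys_uptime, unix_secs, unix_nsecs,
     flow_sequence, engine_type, engine_id,
     sampling) = V5_HEADER.unpack_from(datagram)
    return {
        "version": version,          # must be 5
        "count": count,              # flow records in this datagram
        "sys_uptime_ms": sys_uptime, # ms since the exporter booted
        "unix_secs": unix_secs,      # export timestamp
        "flow_sequence": flow_sequence,
    }

# Synthetic header to exercise the parser (no exporter needed)
sample = V5_HEADER.pack(5, 2, 123456, 1700000000, 0, 42, 0, 0, 0)
print(parse_v5_header(sample))
```

Each header is followed by `count` fixed-size flow records carrying the per-session statistics; a real collector would loop over those next.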
Through its access routers and its management tool, the Colibrí NetManager, Teldat offers visibility and control solutions for network applications capable of meeting the aforementioned market needs.
Customer branch offices frequently find themselves with limited resources for WAN connectivity, a common problem associated with communications. WAN optimization under these circumstances makes sense, as the rise in productivity results in improved end customer perception.
Through different techniques, optimization reduces both infrastructure and bandwidth costs, allows for centralized services and simplifies periodic traffic congestion management over WAN.
The most common WAN optimization solutions are compression, caching or improving TCP efficiency. These are usually implemented in two parts: using a remote hardware/software module at the branch office and a second module, correctly dimensioned, at the head office. By adding an intelligence layer, we achieve efficient management and monitoring of said system and, consequently, a functioning multi-office environment.
These solutions typically need specific components only dedicated to optimization, which come at a significant cost per office.
One of the most popular optimization options, given its simplicity and because it only draws on the resources of a remote office, tends to be a webcache service (since any analysis or characterizing of branch traffic almost always shows that the web uses a large percentage of the available bandwidth).
Integrated optimization and communications
Currently, the market offers products that can both execute these optimization applications and act as the office communications router.
Both the core routing and the applications use the device hardware, assigning or reserving resources for each purpose, and the whole is integrally managed from a single tool, which not only controls connectivity but also manages the application life cycle: installation, configuration, monitoring, updating, etc.
An alternative (in branches) is to separate the optimizing services from communications, which results in the need for different hardware with the additional drawback of increasing operation costs. This option can really only be justified where bandwidth availability takes the highest priority.
First optimization solution: Webcache
A webcache captures internet traffic requests, forwards them, and stores a local copy of the received response. This latter information is then readily available to the users.
You may think that local branch users would have different information requirements. While this may be true in some cases, on the whole a group of users tends to repeatedly ask for the same data, albeit at different times. Consequently, the webcache service can represent an important saving in bandwidth within sectors such as banking, legal services, insurance companies, education, etc.
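The basic webcache mechanism described above (capture the request, forward it, store a local copy of the response) can be sketched in a few lines of Python. The upstream `fetch` function and the URL here are stand-ins for illustration, not a production proxy:

```python
def cached_fetch(url, cache, fetch):
    """Return the body for `url`, serving from the local cache when
    possible; otherwise forward the request upstream and keep a copy.

    `fetch` is the upstream request function (e.g. urllib-based);
    it is injected here so the sketch stays self-contained."""
    if url in cache:
        return cache[url], True       # cache hit: no WAN traffic
    body = fetch(url)                 # cache miss: cross the WAN
    cache[url] = body                 # store the local copy
    return body, False

# Two users asking for the same page: only the first crosses the WAN
wan_calls = []
def fake_upstream(url):
    wan_calls.append(url)
    return b"<html>intranet portal</html>"

cache = {}
cached_fetch("http://example.com/portal", cache, fake_upstream)
cached_fetch("http://example.com/portal", cache, fake_upstream)
print(len(wan_calls))  # 1 -- the second request was served locally
```

A real webcache adds expiry and validation (HTTP `Cache-Control`, `ETag`), but the bandwidth saving comes from exactly this hit/miss behavior.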
The resulting saving means other critical services can now use the available WAN resources, going beyond what applying QoS policies alone can achieve. In addition to reducing traffic over the WAN, saving bandwidth accelerates information availability and allows for the implementation of traffic filtering and prioritization policies.
While it’s obvious that webcache software embedded in communication devices competes, to a certain extent, with dedicated hardware products, and is not designed for a high volume of data (or even for a high number of users), there is no doubt it is both an economical and flexible solution. Customers should bear it in mind for their small and medium sized offices, as frequently it only implies the purchase of application licenses.
Other optimization options
Once in the world of optimization through embedded applications, it’s easy to add additional product licenses for branch use, which further increase bandwidth saving. For instance:
–Video broadcasting applications: by concentrating local requests into a single upstream request, only one copy of the video stream crosses the WAN for all the local clients, dividing the necessary bandwidth by n. This way, for example, a whole branch can remotely attend a presentation or a corporate event.
–File server application: this behaves as a NAS for branch users. The content of said NAS can easily be programmed to download during periods of WAN inactivity, such as weekends and at night.
–Bootserver application: this boots the network stations and provides both the operating system and appropriate configuration for startup.
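The bandwidth arithmetic behind the video broadcasting case is easy to make concrete; the viewer count and stream rate below are assumed figures for illustration:

```python
def wan_stream_bandwidth(viewers: int, stream_mbps: float,
                         concentrated: bool) -> float:
    """WAN bandwidth needed for a live video event.

    Without local concentration, each viewer pulls their own copy
    over the WAN; with it, a single stream crosses the WAN and is
    fanned out on the branch LAN."""
    return stream_mbps if concentrated else viewers * stream_mbps

# Illustrative (assumed) numbers: 40 viewers of a 4 Mbit/s stream
print(wan_stream_bandwidth(40, 4.0, concentrated=False))  # 160.0
print(wan_stream_bandwidth(40, 4.0, concentrated=True))   # 4.0
```

The saving factor is simply n, the number of simultaneous local viewers, which is why the technique pays off most in large branches.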
To wrap up…
WAN optimization, essential for availability, is far from irrelevant: even as high speed lines become increasingly common, it is fast becoming a critical need for branch offices that want to offer additional and crucial services to their clients.
Teldat has all the essential solutions for these scenarios, and continually offers innovative, more flexible products with greater processing capacity to meet all customer needs, now and in the future.
System fine tuning is an unavoidable necessity that requires knowing exactly what happens within a system. For instance, it is almost impossible to adjust a combustion engine without having detailed information on the revolutions, temperature or compression, etc. Said information is vital for any system, be it Smart Grid, air traffic or even our own organism.
In IP networks, real time information, more accurately known as visibility, is essential for smooth efficient system operations together with network and system dimensioning. This involves tasks such as problem diagnostics, analyzing communication links, or nodes, for congestion issues during peak traffic or, in more complex cases, detecting traffic abnormalities (viruses, worms or other cyber-attacks).
To achieve IP network visibility, remote systems (routers, switches or probes) generate real time reports, analyzing packets to detect layer 3 and 4 sessions, compiling statistics per session, and periodically delivering these to a reception collector. This capability appears under various brand names (Netflow from Cisco, Rflow from Ericsson, NetStream from 3Com/HP/Huawei, etc.), but is most commonly known as Netflow (not to be confused with sFlow which, although similar, samples only a portion of the total, unclassified, traffic and flows for statistical purposes only).
The latest Netflow release (version 9) provides in-depth information on each session, with up to 71 different parameters per flow (bytes, packets, addressing, protocol, TOS, interfaces, Autonomous System, next hop, VLANs, etc.).
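To give a feel for what a collector does with such records, this short Python sketch aggregates a handful of synthetic flow records into per-source byte totals, a classic "top talkers" report. The field names are a small illustrative subset of what a v9 record can carry:

```python
from collections import defaultdict

def top_talkers(flows, n=3):
    """Aggregate exported flow records into per-source byte counts
    and return the n heaviest senders.

    Each flow is a dict with at least 'src' and 'bytes' keys."""
    totals = defaultdict(int)
    for flow in flows:
        totals[flow["src"]] += flow["bytes"]
    # Sort descending by total bytes and keep the top n
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

# Synthetic records standing in for a collector's input
flows = [
    {"src": "10.0.0.5", "bytes": 9000},
    {"src": "10.0.0.7", "bytes": 1200},
    {"src": "10.0.0.5", "bytes": 4000},
]
print(top_talkers(flows, n=2))  # [('10.0.0.5', 13000), ('10.0.0.7', 1200)]
```

The same grouping can be keyed by destination, port, interface or any combination of the exported fields.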
With the aim of both standardizing and improving Netflow, the IETF published the first IPFIX (IP Flow Information Export) standard in 2008. IPFIX is basically Netflow with extensions: it maintains the essence of the protocol and the Netflow information formatting (IPFIX is in fact sometimes referred to as Netflow v10), while aggregating more data and allowing each implementation to export proprietary parameters. IPFIX thus opens up a world of possibilities, such as mail servers delivering data on source/destination addresses, subject, attachments and bytes, or webpage servers exporting flow records on viewed pages, browsing time per page and even access history from other countries.
With IP networks, Netflow provides visibility up to layers 3 and 4 (ports and protocols), while the IPFIX extensions potentially go much further, providing data on applications through Deep Packet Inspection (DPI), for example.
Application visibility is becoming more prominent as networks evolve from transport-based architectures, mainly MPLS, where applications are locked to the transport, towards application-based architectures, where the transport network adapts to the applications rather than the other way around, and new architectures based on hybrid MPLS and/or Internet accesses materialize. Said networks are known as Hybrid networks or SD-WAN (Software Defined Wide Area Network).
These application-oriented networks have arisen, in part, to accommodate the wave of new on-demand (client) services, such as mobility and BYOD, video, Internet of Things (IoT), cloud applications, etc. Tools providing application visibility give the network far greater comprehension of its applications, particularly those running over HTTP/HTTPS, the latter having become a universal application support platform (HTTP is the new TCP).
How to tackle visibility in new Hybrid Networks
a) Distributed Intelligence: Activating DPI and IPFIX in remote routers.
While DPI has all the advantages of generating flow information at the source, necessary to determine an application, it unfortunately has an enormous impact on resources in each and every network router.
b) Centralized analysis: Through Netflow and intelligent collector.
A priori, collectors only have session statistics available and, therefore, less information than with distributed intelligence. However, techniques such as reverse name resolution and heuristic algorithms, which yield very similar information to the above case, allow far greater use of network resources and scalability while still providing the necessary data; a good example is the Polygraph collector (a locally developed high-tech product). A definite advantage is that Netflow has far less impact on routers so, practically speaking, this is an excellent choice.
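A minimal sketch of the reverse-name-resolution heuristic might look as follows. The keyword table and the stub resolver are illustrative assumptions, not the actual algorithm used by the Polygraph collector:

```python
def classify_flow(dst_ip, resolve, rules=None):
    """Guess the application behind a flow from the reverse DNS name
    of its destination -- one heuristic a centralized collector can
    apply to plain session statistics.

    `resolve` maps an IP to a hostname (socket.gethostbyaddr-style);
    the keyword table is illustrative, not an exhaustive rule set."""
    rules = rules or {"googlevideo": "YouTube",
                      "fbcdn": "Facebook",
                      "office365": "Microsoft 365"}
    try:
        name = resolve(dst_ip)
    except (OSError, KeyError):       # no PTR record available
        return "unknown"
    for keyword, app in rules.items():
        if keyword in name:
            return app
    return "unknown"

# Stub resolver so the sketch runs without network access
names = {"203.0.113.9": "r4---sn-h0jeen7y.googlevideo.com"}
print(classify_flow("203.0.113.9", names.__getitem__))  # YouTube
```

In production, `resolve` would wrap `socket.gethostbyaddr` with a local cache, and the rule set would be combined with port, timing and volume heuristics.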
Future of Hybrid Networks
The evolution of Hybrid, or SDWAN, networks to pure SDN, will bring about direct dynamic control over network behavior for applications, where visibility will play a vital role. Teldat has a wealth of experience on implementing hybrid networks and visibility, providing our clients with versatile networks, not just effective for today, but also for the future.
While VDSL2 and vectoring have already been launched as a service, G.fast technology can get even more bandwidth out of copper cables. However, this new technology, which was introduced at the end of last year by the ITU, also has its drawbacks.
With the two standards G.9700 and G.9701, the ITU has approved a further bridge technology for the “last mile”. This technology is expected to offer end customers high-speed data via copper cables that has so far only been possible via fiber optic connections. Like VDSL2 and vectoring before it, G.fast again enhances the bandwidth possible over copper cables, although only satisfactorily over very short distances.
The name G.fast derives from Recommendation ITU-T G.9700, “Fast access to subscriber terminals (FAST) – Power spectral density specification”. Currently, this technology is expected to provide the end user with bandwidths of up to one gigabit per second. Upstream and downstream, of course, have to share this bandwidth.
Susceptibility and coexistence with DSL
G.fast also uses higher frequencies than previous standards. While VDSL2 bandwidth goes up to 30 MHz, G.fast initially has a bandwidth of 106 MHz, and a doubling to 212 MHz is already planned. However, such high frequencies may also cause issues concerning susceptibility and coexistence with already existing xDSL connections.
Operating VDSL2 and G.fast simultaneously in the same cable bundle is comparatively easy. In order to avoid crosstalk when both standards transmit, G.fast has to use higher frequencies than VDSL2. In addition to the start frequencies of 2.2 MHz and 8.5 MHz, the ITU therefore also defines entry points at 17.664 MHz and 30 MHz.
Attenuation effects limit usable bandwidth
Apart from crosstalk, G.fast struggles with attenuation effects, which limit usable cable lengths. According to the ITU, only lengths below 100 meters allow data rates between 500 and 1,000 Mbit/s; at a length of 250 meters, 150 Mbit/s still remains. Thus, this technology is suitable as an addition to FTTB and FTTdp (Fibre to the Building / Fibre to the distribution point) networks.
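The quoted rate figures can be turned into a rough rate-versus-length estimate. The straight-line interpolation between the two quoted points is an illustrative assumption, not part of the standard:

```python
def gfast_rate_mbps(loop_m: float) -> float:
    """Rough aggregate G.fast data rate for a copper loop length,
    anchored on the ITU figures quoted above: at least 500 Mbit/s
    below 100 m, about 150 Mbit/s at 250 m.  Linear interpolation
    between those anchors is an illustrative assumption."""
    if loop_m <= 100:
        return 500.0   # conservative end of the 500-1,000 range
    if loop_m >= 250:
        return 150.0   # rate quoted at 250 m
    # interpolate between (100 m, 500 Mbit/s) and (250 m, 150 Mbit/s)
    return 500.0 + (loop_m - 100) * (150.0 - 500.0) / (250 - 100)

print(gfast_rate_mbps(175))  # midpoint of the two anchors: 325.0
```

The steep drop is exactly why G.fast only makes sense when fiber is brought to the building or to a nearby distribution point.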
Series production not yet predictable
It will take some more time until G.fast is put into practice. So far, none of the large carriers has concrete plans for introducing this technology. However, the constantly increasing demand for bandwidth makes it a very promising and interesting topic that will continue to draw our attention in the future.
As an innovative manufacturer of network routers, Teldat is of course engaged with this subject and looks forward to future developments.
Schemes to change network infrastructure should always be linked to improving user perception on the quality of the service received. Prior to starting such a project, there are both technical and commercial proposal phases and evaluations that require approval.
Once this has been achieved, the main stages are:
- Deployment.
- Operation and management.
- The breakdown, or transition, of the network to a new project.
The deployment stage is where operations involving startup/installation of devices are carried out; the end result being the contracted service. This project stage is where:
1. The agreed service deadlines and delivery must be met.
2. Tools, simplifying the foreseeable mass configuration tasks, must be available.
3. A team of specialists must oversee the installation of devices, at a previously agreed location.
4. The entire startup process must be validated, so that the project site can move on to the operation stage.
This list, although stating the obvious, makes the network deployment stage one of the most intense and problematic parts of a new scheme. It also implies the not inconsiderable additional cost of contracting external services.
Generally speaking, the service supplier, or carrier, offers their clients an all-inclusive package with the best possible service/price ratio. They are expected to choose device manufacturers who not only fulfil the technological and economic requirements, but also help keep costs down when actually deploying the network.
One of the most vital components for network deployment is a Zero Touch Installation service for the participating devices. This means that service startup, on location, must include the following:
- The client receives a device, on site, with a basic connection guide (similar to the autoinstallation of home services such as ADSL/FTTH). The client can then simply connect the device, following the basic guide, and switch it on.
- The autoconfiguration process initiates: the device downloads its individual settings, from a control center, and subsequently activates them to provide the contracted service.
- From the control center, the state and availability of said services are detected (validating the whole process).
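The autoconfiguration sequence in the three points above can be sketched as follows. The serial numbers, config format and control-center interface are hypothetical, standing in for whatever the management platform actually exposes:

```python
def zero_touch_boot(device_serial, fetch_config, apply_config):
    """Sketch of the zero-touch sequence: on first boot the device
    asks the control center for the settings tied to its serial
    number, applies them, and is then in service.

    `fetch_config` and `apply_config` stand in for the transport and
    platform layers (HTTPS download, config commit); their shape is
    a hypothetical example, not a documented API."""
    config = fetch_config(device_serial)   # download individual settings
    if not config:
        return "staging"                   # nothing provisioned yet
    apply_config(config)                   # activate contracted service
    return "in-service"

# Simulated control center with one provisioned serial number
provisioned = {"TLD-0001": {"wan": "dhcp", "vlan": 10}}
applied = []
state = zero_touch_boot("TLD-0001", provisioned.get, applied.append)
print(state, applied)  # in-service [{'wan': 'dhcp', 'vlan': 10}]
```

The final validation step maps onto the return value: a device left in "staging" is immediately visible from the control center as not yet provisioned.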
Having autoconfiguration available during deployment and the use of pre-validated templates reduces the costs of installation by optimizing deadlines and minimizing configuration error rates.
The advantages are not limited to this stage: maintenance operations are also notably optimized. An office with an out-of-service device only needs a new router to be sent in order to resume business; the rest of the configuration, the recovery of stored data, etc., is executed almost automatically through the management tools. This facility both optimizes SLAs and avoids penalization.
In short, autoconfiguration throughout network deployment becomes a very relevant aspect and should always be included in a manufacturer’s commercial offer: not only does it represent an enormous improvement to their products, but it also lets them offer tools, through their sales network, that reduce the global cost of the project and increase their clients’ perception of service quality.
Teldat fully understands this concept and offers its network management software tool, the Colibrí NetManager. This, in addition to other unified tools and together with Teldat’s WLAN and access routers, provides optimum solutions for such schemes, wherever clients require them.
Please visit our blogs, or our webpage, for further information on our Colibrí NetManager management tool.