Nowadays, organizations increasingly look to Cloud technologies and to moving their services onto the Cloud. This is not just something global corporations or IT companies are working on; any company looking for a modern, suitable technical solution for its day-to-day transactions is moving in this direction.
Firstly, the move of IT infrastructure to the cloud means our traditional understanding of layer 3 (IP) network traffic is insufficient to characterize the applications travelling over the network: application servers had fixed, known IP addresses in traditional data centers, whereas IP addressing in the cloud is no longer controlled by the organization using these services.
Secondly, far more applications (both corporate and personal) are in circulation today than a few years ago. Said applications have not, in general, been designed with bandwidth optimization in mind and all have different needs and behaviors. This means some applications can (and do) adversely affect others if the network is incapable of applying different policies to prevent this.
The vast majority of applications use HTTP and HTTPS for communication, mainly to evade, or minimize, possible negative effects arising from security policies or IP addressing (NAT) on the network. This means the transport layer (TCP or UDP port) is unable to adequately identify network applications, as they tend to use the same ports (HTTP 80 and HTTPS 443).
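The limitation described above can be made concrete with a minimal sketch. The flow tuples and port map below are invented purely for illustration; they show how a classic port-to-application lookup collapses very different applications into a single "HTTPS" label, which is exactly why DPI is needed:

```python
# Hypothetical sketch: why transport-layer ports no longer identify applications.
# The port map and flow tuples below are invented for illustration.

PORT_MAP = {80: "HTTP", 443: "HTTPS", 53: "DNS"}

def classify_by_port(dst_port: int) -> str:
    """Classic layer-4 classification: map the destination port to a label."""
    return PORT_MAP.get(dst_port, "unknown")

# Three very different applications, all running over port 443:
flows = [
    ("10.0.0.5", "142.250.1.1", 443),   # video streaming
    ("10.0.0.7", "52.96.0.10", 443),    # corporate e-mail
    ("10.0.0.9", "203.0.113.4", 443),   # file-sync client
]

labels = {classify_by_port(dport) for _, _, dport in flows}
print(labels)  # every flow collapses to the single label "HTTPS"
```

Since the port number carries no distinguishing information here, only inspection of the payload or its behavior (DPI) can tell these applications apart.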
To further aggravate the problem, companies must provide connectivity to an enormous array of 'authorized' local devices. Remote local networks today, unlike the traditional single terminal of yesterday, are far more varied and far less controlled: wireless offices, guest access, home access, BYOD, IoT, etc. Consequently, the difficulties in analyzing traffic, caching systems and CDNs also escalate.
Finally, this greater diversity increases security risks: viruses, malware, bots, etc. These, in turn, tend to generate "uncontrolled" network traffic that needs to be detected and characterized. At this point, the close link between visibility and security at the network level raises its head (with all its repercussions and analysis), a subject that we'll tackle another day.
The above points make it very clear that analyzing network traffic has become increasingly intricate over the last few years, boosting the need for new tools with greater capacity. Otherwise, we simply won't know what is going through our network, not only placing it at risk but also unnecessarily increasing its upkeep. Given the tremendous amount of information handled, it is absolutely essential to use tools able to intelligently filter the information received and provide a high level of granularity in analysis and reports. It's here where big data analysis technologies bring huge advantages over traditional tools.
Well aware of this recent difficulty, users need application visibility and control solutions to meet these new needs.
- Said solutions must be able to scale down to small and medium corporate offices, and offer a sound compromise between CPU requirements (cost), needed for DPI (Deep Packet Inspection), and number of detected applications (customer service and quality of application detection).
- Integrating intelligent detection in remote routers, together with a centralized management tool, versus current market solutions based on proprietary remote-point polling and (also proprietary) hardware appliances, allows for excellent detection granularity and affordable operation, scalable to any size of network.
- Instead of opting for proprietary solutions, it's crucial to use suppliers who adopt standard protocols to communicate visibility information (NetFlow / IPFIX, for example). This allows customers to use their own information collection methods if they so wish.
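The advantage of a standard export format is that any collector can decode it. As a minimal sketch, the snippet below parses the fixed 24-byte NetFlow v5 export header (version, record count, uptime, timestamps, sequence number, engine and sampling fields); the packet bytes themselves are fabricated for illustration, since a real collector would receive them over UDP from a router:

```python
import struct

# NetFlow v5 export header: 24 bytes, big-endian ("network order").
HEADER_FMT = "!HHIIIIBBH"  # version, count, uptime, secs, nsecs, seq, etype, eid, sampling

def parse_v5_header(packet: bytes) -> dict:
    """Decode the fixed header that precedes the flow records in a v5 export."""
    size = struct.calcsize(HEADER_FMT)  # 24
    fields = struct.unpack(HEADER_FMT, packet[:size])
    names = ("version", "count", "sys_uptime_ms", "unix_secs", "unix_nsecs",
             "flow_sequence", "engine_type", "engine_id", "sampling_interval")
    return dict(zip(names, fields))

# Build a fake export header: version 5, carrying 2 flow records, sequence 42.
fake_packet = struct.pack(HEADER_FMT, 5, 2, 3600000, 1700000000, 0, 42, 0, 0, 0)
header = parse_v5_header(fake_packet)
print(header["version"], header["count"], header["flow_sequence"])
```

Because the format is public, the customer is free to point the router's exports at an open-source collector, a commercial analyzer, or a few lines of code like the above, with no vendor lock-in.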
As part of its access routers and management tool, Colibri Netmanager, Teldat offers visibility and control solutions for network applications capable of meeting the aforementioned market needs.
It is quite obvious that corporate communications have evolved. Only a few decades ago, "dumb" terminals were connected to a mainframe. A significant evolution followed with the introduction of X25, Frame Relay and ISDN; we could say this had the same level of importance for corporate communications as the discovery of fire had for prehistoric man. More recently, IP networks totally changed the communication landscape again, so much so that this could be compared to the invention of the wheel. High-speed connections such as DSL and fiber can, in turn, be called "the Industrial Revolution" of network communications, making broadband accessible anywhere at all. Finally, today's trend toward "Cloud Computing" is in some way returning communications to where they started, as the intelligence is once again being centralized within "the Cloud".
“Cloud Computing” and its implementation in companies
Cloud Computing is at an initial stage as far as corporate communications are concerned, but nobody doubts that it will grow significantly in a short period of time, as it has grown, and is still growing, in residential user communications with applications such as Google Apps, Microsoft Office 365 or Dropbox. Moreover, it should surprise nobody that the residential market is more advanced than the corporate market in ICT and communications; this already occurred with ADSL, FTTH and 4G connectivity. The open questions are whether corporate clouds will be public, private or hybrid, and the pace of corporate migration to the Cloud. However, it is clear that virtualization is here to stay, as the advantages it offers are obvious. So what are the benefits of virtualization for companies?
- Reduced CAPEX and OPEX at the network periphery, because hardware and software resources are centralized in the Cloud.
- A clear improvement in the control, security and reliability of data and applications.
- Flexibility in resource allocation.
- License control.
Problems you may encounter with virtualization
The evolution of applications towards the Cloud is not necessarily problem free. Firstly, the connectivity requirements for a proper user experience are more demanding than when processing and storage are local, so special attention should be paid to issues such as redundancy, security and network optimization. Secondly, some applications that generate large traffic volumes locally, such as Digital Signage or Content Management, do not scale well in the Cloud, and the problem is that we no longer have a local server at the site for those tasks. The same occurs with non-IP devices such as printers, alarms, access control, web cameras, etc., which require a USB or perhaps even a serial port: these clearly need a local interface and local processing before they can be adapted to the Cloud. Regardless of all the above, there is one device in the middle of everything mentioned so far that still needs to be maintained and, with all the above in mind, is of the utmost importance: the router.
The “router” as a solution to various problems in Cloud Computing
The router at the branch office is what connects users and applications, so the user experience depends entirely on the router's efficiency and stability. However, what role is the router going to play in the new Cloud Computing scenarios? At first sight, a minimal amount of involvement could be valid, but could the router expand its role and evolve into a more efficient player in these scenarios? Certainly, this is the way forward. Thanks to its strategic position connecting users to applications, the router is able to provide the extra security and optimization these scenarios require, and because of its location within the branch office, it can act as the extension of Cloud applications to interact with local devices.

Now, the remaining questions are: does it have the ability and power to run applications? Does it have the storage capacity certain applications require? Does it have a management tool to safely conduct local processes? In the past, routers did not need to perform these tasks, so such features were unavailable or very limited; at most, some artificial solutions were built by integrating additional hardware (a mini-PC) into the router chassis. Today, fully converged solutions based on multicore processors are possible, integrating two virtual devices, Router + Server, into one physical device, each with its own software and operating system, including HDD or SSD storage and USB interfaces for local devices. These new "Cloud Ready" routers support applications that can no longer run on local servers, such as security (Antivirus, Antispam, SIEM probes, content filtering), optimization (Webcache, Videoproxy, Cloud-Replicated NAS and Virtual Desktop repositories), local audit, or digital signage (DLNA based). Teldat specializes in "Cloud Ready" routers supporting the above-mentioned applications, which are currently available in our portfolio.
What is more, the router places no restrictions on possible applications, as it runs a standard Linux operating system that allows clients or third parties to develop their own apps.