Why has visibility of the applications running over the network become such a critical issue?
Firstly, the move of IT infrastructure to the cloud means our traditional understanding of Layer 3 (IP) network traffic is insufficient to characterize the applications transmitting over that network: application servers had fixed, known IP addresses in traditional data centers, whereas IP addressing in the cloud is no longer controlled by the organization using those services.
Secondly, far more applications (both corporate and personal) are in circulation today than a few years ago. Said applications have not, in general, been designed with bandwidth optimization in mind and all have different needs and behaviors. This means some applications can (and do) adversely affect others if the network is incapable of applying different policies to prevent this.
The vast majority of applications use HTTP and HTTPS for communication, mainly to evade, or minimize, possible negative effects arising from security policies or IP addressing (NAT) on the network. As a result, the transport layer (TCP or UDP port) can no longer adequately identify network applications, since most of them use the same ports (80 for HTTP and 443 for HTTPS).
To further aggravate the problem, companies must provide connectivity to an enormous array of 'authorized' local devices. Remote local networks today, unlike the traditional single terminal of yesterday, are more varied and far less controlled: wireless offices, guest access, home access, BYOD, IoT, etc. Consequently, the difficulties in analyzing traffic, caching systems and CDNs also escalate.
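The port problem can be seen in a few lines of code. Below is an illustrative sketch (not any vendor's actual engine): a port-based classifier collapses every modern application into "https", while a first DPI-style refinement peeks at the payload instead. The heuristics shown are deliberately minimal; real DPI engines rely on large signature libraries.

```python
# Port-based classification: every modern application collapses into two labels.
PORT_TABLE = {80: "http", 443: "https", 53: "dns"}

def classify_by_port(dst_port: int) -> str:
    return PORT_TABLE.get(dst_port, "unknown")

# DPI-style refinement: peek at the first payload bytes instead of the port.
# (Illustrative heuristics only; production DPI uses full signature libraries.)
def classify_by_payload(payload: bytes) -> str:
    if payload[:1] == b"\x16" and payload[1:2] == b"\x03":
        return "tls"  # TLS handshake record (content type 0x16, version 0x03.xx)
    for method in (b"GET ", b"POST ", b"HEAD ", b"PUT "):
        if payload.startswith(method):
            return "http"
    return "unknown"

# Three very different applications, all indistinguishable at the transport layer:
for app in ("web browsing", "video streaming", "cloud backup"):
    print(app, "->", classify_by_port(443))  # all report "https"
```

Even the payload peek only gets as far as "this is TLS"; identifying *which* application is inside is what drives the CPU cost of DPI discussed below.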
Finally this greater diversity increases security risks: viruses, malware, bots, etc. These, in turn, tend to generate “uncontrolled” network traffic that needs to be detected and characterized. At this point, the close link between visibility and security at the network level raises its head (with all its repercussions and analysis), a subject that we’ll tackle another day.
The above points make it very clear that analyzing network traffic has become more and more intricate over the last few years, boosting the need for new tools with greater capacity. Otherwise, we simply won't know what is going through our network, not only placing it at risk but unnecessarily increasing its upkeep. Given the tremendous amount of information handled, it is absolutely essential to use tools that can intelligently filter the information received and provide a high level of granularity in analysis and reports. It's here where big data analysis technologies bring huge advantages over traditional tools.
Users, well aware of these difficulties, need application visibility and control solutions that meet the following new needs:
- Such solutions must be able to scale down to small and medium corporate offices, offering a sound compromise between the CPU requirements (cost) needed for DPI (Deep Packet Inspection) and the number of applications detected (quality of detection and of service to the customer).
- Integrating intelligent detection in remote routers, together with a centralized management tool, versus current market solutions based on proprietary remote-point polling and (equally proprietary) hardware appliances, allows excellent detection granularity at an affordable operating cost, scalable to any size of network.
- Instead of opting for proprietary solutions, it's crucial to choose suppliers who adopt standard protocols to export visibility information (NetFlow / IPFIX, for example). This lets customers use their own information collection tools if they so wish.
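The value of standard export protocols is precisely that any customer-built collector can decode them. As a minimal sketch of the collector side, the following parses the fixed 24-byte header and 48-byte flow records of a NetFlow v5 export datagram (the v5 layout is publicly documented by Cisco; the dictionary field names here are our own choice, not part of the standard):

```python
import struct

# NetFlow v5 export packet: 24-byte header followed by 48-byte flow records,
# all fields in network byte order.
NF5_HEADER = struct.Struct("!HHIIIIBBH")
NF5_RECORD = struct.Struct("!IIIHHIIIIHHBBBBHHBBH")

def parse_netflow_v5_header(datagram: bytes) -> dict:
    (version, count, sys_uptime, unix_secs, unix_nsecs,
     flow_sequence, engine_type, engine_id, sampling) = NF5_HEADER.unpack_from(datagram)
    if version != 5:
        raise ValueError(f"not a NetFlow v5 packet (version={version})")
    return {"count": count, "sys_uptime_ms": sys_uptime,
            "exported_at": unix_secs, "flow_sequence": flow_sequence}

def parse_records(datagram: bytes, count: int) -> list:
    """Extract a few key fields from each 48-byte flow record."""
    records = []
    for i in range(count):
        f = NF5_RECORD.unpack_from(datagram, NF5_HEADER.size + i * NF5_RECORD.size)
        records.append({"octets": f[6], "src_port": f[9],
                        "dst_port": f[10], "protocol": f[13]})
    return records
```

IPFIX (RFC 7011) generalizes this with template-described records, but the principle is the same: the wire format is public, so the collector need not come from the router vendor.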
As part of its access routers and its management tool, Colibri NetManager, Teldat offers visibility and control solutions for network applications capable of meeting the aforementioned market needs.
Schemes to change network infrastructure should always be linked to improving the user's perception of the quality of the service received. Prior to starting such a project, there are both technical and commercial proposal phases and evaluations that require approval.
Once this has been achieved, the main stages are:
- Deployment.
- Operation and management.
- The breakdown, or transition, of the network to a new project.
The deployment stage is where the operations involved in installing and starting up devices are carried out, the end result being the contracted service. It is at this stage of the project that:
1. The agreed service deadlines and deliveries must be met.
2. Tools that simplify the foreseeable mass-configuration tasks must be available.
3. A team of specialists must oversee the installation of devices at the previously agreed locations.
4. The entire startup process must be validated, so the site can move on to the operation stage.
This list, although it states the obvious, makes network deployment one of the most intense and problematic stages of a new scheme, and it implies the not inconsiderable additional cost of contracting external services.
Generally speaking, the service supplier, or carrier, offers clients an all-inclusive package with the best possible service/price ratio. They are therefore expected to choose device manufacturers who not only fulfil the technological and economic requirements, but who also help keep costs down when actually deploying the network.
One of the most vital components for net deployment is a Zero Touch Installation service for participating devices. This means that service startup, on location, must include the following:
- The client receives a device on site, with a basic connection guide (similar to the self-installation of home services such as ADSL/FTTH), then simply connects the device following the guide and switches it on.
- The autoconfiguration process then starts: the device downloads its individual settings from a control center and activates them to provide the contracted service.
- From the control center, the state and availability of said services are detected (validating the whole process).
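The control-center side of this process can be sketched with a template-plus-inventory model: a pre-validated configuration template is combined with per-device parameters, keyed by the serial number the device reports at first boot. Everything below is illustrative and hypothetical (the template syntax, placeholder names and serial format are not Teldat's), but it shows why templates cut both deadlines and error rates:

```python
from string import Template

# Hypothetical pre-validated template held by the control center; placeholder
# names (hostname, wan_ip, ...) are illustrative, not any vendor's syntax.
BRANCH_TEMPLATE = Template(
    "hostname $hostname\n"
    "interface wan0\n"
    "  ip address $wan_ip/$wan_prefix\n"
    "snmp community $snmp_community\n"
)

# Per-device parameter records, keyed by the serial number the device
# reports when it first contacts the control center.
INVENTORY = {
    "TLD-0001": {"hostname": "branch-madrid", "wan_ip": "203.0.113.10",
                 "wan_prefix": "30", "snmp_community": "branch-ro"},
}

def render_config(serial: str) -> str:
    """Return the individual configuration a device downloads at startup."""
    params = INVENTORY.get(serial)
    if params is None:
        raise KeyError(f"unknown serial {serial}: device not provisioned")
    return BRANCH_TEMPLATE.substitute(params)
```

Because only the inventory record is device-specific, validating the template once validates every office that uses it; a replacement router with a new serial number needs nothing more than a new inventory entry.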
Having autoconfiguration available during deployment, together with the use of pre-validated templates, reduces installation costs by optimizing deadlines and minimizing configuration-error rates.
The advantages are not limited to this stage: maintenance operations are also notably optimized. An office with an out-of-service device only needs a replacement router to be sent in order to resume business; the rest of the configuration, the recovery of stored data, and so on, is executed almost automatically through the management tools. This both optimizes SLAs and avoids penalties.
In short, autoconfiguration throughout network deployment becomes a highly relevant aspect and should always be included in manufacturers' commercial offers: not only because it represents an enormous improvement in their products, but also because they can then offer tools, through their sales network, that reduce the overall cost of the project and increase clients' perception of service quality.
Teldat fully understands this concept and provides its network management software tool, Colibri NetManager. Together with other unified tools and its WLAN and access routers, this delivers optimum solutions for such schemes, wherever clients require them.
Please visit our blogs, or our webpage, for further information on our Colibrí NetManager management tool.
Francisco Navarro: A graduate in Physical Science, Francisco Navarro is a Business Line Manager working within the Marketing Department and responsible for Teldat's corporate routers.