At the end of November, the pre-Christmas season traditionally begins in Germany. Famous Christmas markets, such as the Christkindlesmarkt in Nuremberg, open in every city, and people celebrate the first Advent by lighting the first of the four candles on the Advent wreath. The first Advent usually marks the start of the contemplative season. On 27 November 2016, however, the first Advent was calm for many people in Germany in a rather unwanted way: around one million DSL routers, mainly devices belonging to customers of Germany's biggest telecommunications carrier, fell victim to a hacker attack.
Some companies wonder whether wireless LAN is worth deploying at all. Others worry that a wireless network could introduce security risks and make attacks on their IT systems easier.
Banks are currently one of the primary targets of criminals; quick access to cash or personal bank account information is a juicy haul. Automated teller machines (ATMs) are a security weak point: while bank-located machines usually have cameras and other security measures in place, off-site ATMs installed independently don't have the same kind of infrastructure. There are plenty of articles on the Internet about ATM skimming, in which a thief attaches an external device to an ATM to capture a card's electronic data, including the PIN, in order to create an exact copy of the card. See this link to read an article from the North American press on ATM skimming.
Many people believe DRAM and SRAM are completely different electronic systems, but this is not entirely accurate. While it's true that there are quite a few differences, they also share a number of characteristics.
Without going into technology specifics such as the structure of a memory cell, the distinguishing characteristics of DRAM versus SRAM (static RAM) are basically twofold: (1) the full address is usually presented to SRAM just once, while it is multiplexed to DRAM, first the row and then the column; (2) DRAM also needs to be refreshed periodically to maintain the integrity of stored data.
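The first distinguishing characteristic, address multiplexing, can be sketched in a few lines. This is an illustrative model only, not tied to any real memory controller; the 10-bit column width is an assumption chosen for the example.

```python
# Illustrative sketch (hypothetical widths): a DRAM controller splits a flat
# address into a multiplexed row/column pair (row strobed first, then column),
# whereas SRAM receives the complete address in a single step.

COL_BITS = 10                    # assumed number of column address bits
COL_MASK = (1 << COL_BITS) - 1

def sram_access(address: int) -> int:
    """SRAM: the full address is presented to the device just once."""
    return address               # the whole address selects the cell directly

def dram_access(address: int) -> tuple[int, int]:
    """DRAM: the address is multiplexed -- row (RAS) first, column (CAS) second."""
    row = address >> COL_BITS    # upper bits, strobed first
    col = address & COL_MASK     # lower bits, strobed second
    return row, col

addr = 0x2A8F3
row, col = dram_access(addr)
# The multiplexed pair recombines into the original flat address:
assert (row << COL_BITS) | col == addr
```

The second characteristic, periodic refresh, has no analogue in this sketch: it is a timing obligation of the DRAM device itself, independent of how addresses are delivered.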
Firstly, the move of IT infrastructure to the cloud means our current understanding of layer 3 network traffic (IP) is insufficient to characterize the applications transmitting over the network: application servers had fixed, known IP addresses in traditional data centers, whereas IP addressing in the cloud is no longer controlled by the organization using these services.
Secondly, far more applications (both corporate and personal) are in circulation today than a few years ago. Said applications have not, in general, been designed with bandwidth optimization in mind and all have different needs and behaviors. This means some applications can (and do) adversely affect others if the network is incapable of applying different policies to prevent this.
The vast majority of applications use HTTP and HTTPS for communication, mainly to evade, or minimize, possible negative effects arising from security policies or IP addressing (NAT) over the network. This means the transport layer (TCP or UDP port) is unable to adequately identify network applications, as they tend to use the same ports (80 for HTTP and 443 for HTTPS).
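The limitation above can be made concrete with a small sketch. The application names and flows below are invented examples; the point is that a port-to-application map collapses many distinct applications into one bucket.

```python
# Minimal sketch of why transport-layer (port-based) classification falls
# short: many different applications all ride on ports 80/443, so a port
# map cannot tell them apart. Flow entries are hypothetical examples.

PORT_MAP = {80: "http", 443: "https", 53: "dns"}

def classify_by_port(dst_port: int) -> str:
    """Classify a flow by destination port alone."""
    return PORT_MAP.get(dst_port, "unknown")

# Four distinct applications as seen by the network (name, destination port):
flows = [
    ("video-streaming", 443),
    ("file-sync",       443),
    ("saas-crm",        443),
    ("web-browsing",    80),
]

labels = {classify_by_port(port) for _, port in flows}
# Port-based classification reduces four applications to just two labels,
# {"https", "http"} -- inspecting payload/metadata (DPI) is needed to
# distinguish them and apply per-application policies.
```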
To further aggravate the problem, companies must provide connectivity to an enormous array of 'authorized' local devices. Remote local networks today, unlike the traditional single terminal of yesterday, are more varied and far less controlled: wireless offices, guest access, home access, BYOD, IoT, etc. Consequently, the difficulties in analyzing traffic, caching systems and CDNs also escalate.
Finally, this greater diversity increases security risks: viruses, malware, bots, etc. These, in turn, tend to generate "uncontrolled" network traffic that needs to be detected and characterized. At this point, the close link between visibility and security at the network level comes into play, with all its repercussions for analysis, a subject that we'll tackle another day.
The above points make it very clear that analyzing network traffic has become increasingly intricate over the last few years, boosting the need for new tools with greater capacity. Otherwise, we simply won't know what is going through our network, not only placing it at risk but also unnecessarily increasing its upkeep. Given the tremendous amount of information handled, it is essential to use tools that can intelligently filter the information received and provide a high level of granularity in analysis and reports. This is where big data analysis technologies bring huge advantages over traditional tools.
Well aware of this recent difficulty, users need application visibility and control solutions to meet these new needs.
- Said solutions must be able to scale down to small and medium corporate offices, offering a sound compromise between the CPU requirements (cost) needed for DPI (Deep Packet Inspection) and the number of applications detected (customer service and quality of application detection).
- Integrating intelligent detection in remote routers, combined with a centralized management tool, allows for excellent detection granularity and affordable operation, scalable to any size of network, versus current market solutions based on proprietary remote point polling and proprietary hardware appliances.
- Instead of opting for proprietary solutions, it's crucial to use suppliers who adopt standard protocols to communicate visibility information (NetFlow/IPFIX, for example). This allows customers to use their own information collection methods if they so wish.
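The idea behind standard flow export can be sketched conceptually: the router aggregates packets into records keyed by the 5-tuple and periodically hands completed records to whatever collector the customer chooses. This is a conceptual model only, not a wire-format NetFlow/IPFIX implementation, and the flow values are invented examples.

```python
# Conceptual sketch of flow-based export (the model behind NetFlow/IPFIX):
# packets are aggregated into per-flow records keyed by the 5-tuple, then
# exported in a format-agnostic way so any collector can consume them.

from collections import defaultdict

# Flow key: (src_ip, dst_ip, src_port, dst_port, protocol)
class FlowTable:
    def __init__(self):
        self.flows = defaultdict(lambda: {"packets": 0, "bytes": 0})

    def account(self, key: tuple, size: int) -> None:
        """Update the record for this flow with one observed packet."""
        rec = self.flows[key]
        rec["packets"] += 1
        rec["bytes"] += size

    def export(self) -> list:
        """Emit completed flow records and reset the table."""
        records = [dict(key=k, **v) for k, v in self.flows.items()]
        self.flows.clear()
        return records

table = FlowTable()
key = ("10.0.0.1", "93.184.216.34", 52100, 443, "tcp")
table.account(key, 1500)   # two packets of the same HTTPS flow...
table.account(key, 600)
records = table.export()   # ...yield a single aggregated flow record
```

Because the exported records are plain key/counter structures, a customer who prefers their own collector only needs to speak the standard export protocol, not a vendor-specific one.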
As part of its access routers and management tool, Colibri Netmanager, Teldat offers visibility and control solutions for network applications capable of meeting the aforementioned market needs.