The first wireless LAN (WLAN)

Everybody in Germany, and probably in some typical German tourist centers, knows the famous song by Paul Kuhn, "There is no beer on Hawaii". But not many people know that a network very similar to the wireless LAN available nowadays was already set up in 1969 by the University of Hawaii. The network's name was "ALOHAnet", and it connected different parts of the university on the island of Oahu.

Brief history of the WLAN

The idea was not taken up again until the end of the 1980s. The first IEEE working group was founded in 1991 and established the technical basics of the new standard. The first devices worked according to pre-802.11 standards but were not compatible with the later IEEE standard, and their data rates of 2 Mbit/s were relatively modest. The technology could not really establish itself in the first years because the first WLAN cards were very expensive. This changed at the end of 1999, when Apple launched an iBook with a built-in WLAN card. Apple also produced a base station at a reasonable price.

Incidentally, this was also the time when the company Artem successfully placed the first wireless LAN products on the market. Artem has since merged with Teldat, and wireless LAN products are well established in our product portfolio.

The latest developments and innovations in WLAN networks

In terms of technology, a great deal has happened since then. Data rates have increased from 2 Mbit/s to several gigabits per second, data transmission is encrypted, and much more besides. From a commercial point of view, too, WLAN has become more and more important. Whereas in 2008 fewer than 100,000 WLAN chips were produced and sold, production numbers for WLAN chips have meanwhile increased to the incredible figure of almost 3 billion.

This enormous increase is certainly due to the numerous mobile devices in the consumer market. However, it is not only the providers of inexpensive consumer devices that have seen remarkable growth; suppliers of professional WLAN solutions have also recorded annual double-digit percentage growth.

This vast increase in sales numbers will probably level off a little. Nevertheless, further improvements to the technology and new applications will drive further growth. In this context, the new 802.11ac standard deserves particular mention. The first generation of this new technology is rather unsuitable for business applications because the new standard demands a great deal of radio bandwidth, which makes complex installations difficult.

The second generation of 802.11ac chipsets will be interesting for business solutions, because these chipsets support the new MU-MIMO (multi-user MIMO) mode. MU-MIMO benefits in particular mobile devices with only MIMO 1×1, i.e. a single antenna: clients share the available spatial streams (antennas) of an access point in such a way that each client is served via a different antenna of the access point at the same time. Thus, the maximum number of clients that can simultaneously be served by one access point on one frequency triples or quadruples. This makes it possible to supply several hundred clients via a single access point, multiplies the overall performance of the network, and simplifies the setup of high-performance WLAN networks for major events.
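To make the grouping idea concrete, here is a minimal toy sketch in Python. The function name and the group size of four are purely illustrative assumptions; real MU-MIMO schedulers also weigh channel conditions, not just arrival order.

```python
# Toy model of downlink MU-MIMO grouping: single-antenna (1x1) clients are
# batched so that each client in a group is served on a different spatial
# stream of the access point within the same transmission.

def mu_mimo_groups(clients, ap_streams=4):
    """Split clients into groups of at most `ap_streams` members; each group
    is served simultaneously, one spatial stream per client."""
    return [clients[i:i + ap_streams] for i in range(0, len(clients), ap_streams)]

clients = [f"client-{n}" for n in range(10)]
for slot, group in enumerate(mu_mimo_groups(clients)):
    print(f"transmission {slot}: one stream each for {group}")
```

With ten 1×1 clients and four streams, the sketch serves four clients per transmission instead of one, which is exactly the three- to four-fold gain mentioned above.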

The planning, setup and installation of WLAN networks at major event locations have been among Teldat's competences for many years. In conclusion: this field remains exciting, and we will definitely continue to be part of it.

Hans-Dieter Wahl: WLAN Business Line Manager

Positioning systems and mobility management

Years ago there was a revolution in overseas travel: we no longer needed to learn the routes of a nation, because satellites were able to tell us our position in real time (using GPS).

This technology was taken to another level by the professional market with the appearance of RTLS (Real Time Location Systems). These systems could use GPS signals, RFID (Radio Frequency Identification) or other mechanisms and made it possible to keep track of fleets, position emergency disaster services and control staff or critical resources all from a control center. The devices knew their location and could communicate this information.


Francisco Navarro: graduated in Physical Science, is a Business Line Manager working within the Marketing Department, responsible for the Teldat corporate routers.

The ABC of SBC: definition, characteristics and advantages

The Firewall is the quintessential element providing network security when you need to interconnect with other networks, allowing outgoing traffic and blocking unsolicited incoming traffic. The Firewall is a necessary element, but it is insufficient on its own, since some threats hide from network firewalls within legitimate-looking traffic, hence the need for other specialized protective elements such as antivirus or antispam.

The case of Voice over IP is even more special. Firewalls are generally based on NAT but, unfortunately, VoIP connections are incompatible with NAT, because signaling protocols such as SIP carry IP addresses and ports inside the message body, which NAT does not translate. A possible solution would be to open exceptions in the NAT Firewall for Voice over IP. This is not a good idea, though, because it compromises security and does not protect against Denial of Service and intrusion attacks. Intrusion control deserves special mention, not only at the network layer (which a Firewall could perform) but, primarily, at the application layer, aimed at ensuring legitimate call traffic and avoiding attacks, intrusions and fraud. To make matters worse, VoIP sessions are created dynamically as calls are established, on ports negotiated per call, further complicating control.
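The incompatibility is easy to see in a toy example. The following Python sketch (addresses and message purely illustrative) mimics a basic NAT that rewrites only the IP header of a SIP INVITE, leaving the private address embedded in the SDP body untouched:

```python
PRIVATE_IP = "192.168.1.10"   # internal phone
PUBLIC_IP = "203.0.113.5"     # NAT's public address

sip_invite = f"""INVITE sip:bob@example.net SIP/2.0
Contact: <sip:alice@{PRIVATE_IP}:5060>

v=0
c=IN IP4 {PRIVATE_IP}
m=audio 49170 RTP/AVP 0
"""

def nat_rewrite_ip_header(src_ip: str) -> str:
    # A basic NAT translates only the network-layer source address...
    return PUBLIC_IP if src_ip == PRIVATE_IP else src_ip

# ...while the application-layer payload goes through untouched, so the
# private address still leaks in the SIP/SDP body.
print("IP header source after NAT:", nat_rewrite_ip_header(PRIVATE_IP))
print("Private address still advertised in SDP:", PRIVATE_IP in sip_invite)
```

The remote party would try to send its media to the advertised private address and fail, which is why VoIP needs an application-layer element rather than a simple NAT exception.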

A new element is required to address these risks. This element should monitor and be actively involved in the VoIP sessions established between the internal and external network, ensuring that these connections are properly established and that they are legitimate, secure and reliable. This element is the Session Border Controller (SBC).

What is an SBC?

An SBC is basically a Firewall for voice traffic: its job is to ensure that sessions are legitimate, detecting and blocking potential attacks and intrusions. Another important security feature (similar to what a Firewall does for data services) is concealing the voice services of the internal network from the outside. To perform all of these functions, the SBC sits, like the Firewall, on the border between the internal and external network (hence the name "Session Border Controller"), but at a more internal layer than the Firewall (usually in an intermediate network between the Firewall and the internal network, the DMZ or "Demilitarized Zone").

The SBC doesn't just monitor and control sessions between the internal and external network; it reconstructs them in order to have complete control. That is, when a session is established between the internal and external network, two sessions are actually established: one from the internal element to the SBC, and the other from the SBC to the external element, with the SBC negotiating the call parameters with each end separately. Not only does this allow full control of the sessions (who can connect, to where, when, how, detection of attacks and intrusions…), but it also conceals the internal network from the outside. This basic SBC behavior is known as Back-to-Back User Agent (B2BUA).
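A minimal sketch of the two-leg B2BUA idea, assuming hypothetical class and field names (this is a data-model illustration, not a real SIP stack):

```python
# For each incoming call, the B2BUA creates two independent legs and
# re-originates the outbound leg from its own address, so the internal
# endpoint and topology are never exposed to the external network.

from dataclasses import dataclass
from itertools import count

_leg_ids = count(1)

@dataclass
class Leg:
    leg_id: int
    local: str    # address the SBC presents on this leg
    remote: str   # the endpoint it talks to on this leg

@dataclass
class B2BUASession:
    inbound: Leg   # external caller <-> SBC (external side)
    outbound: Leg  # SBC (internal side) <-> internal callee

class SBC:
    def __init__(self, external_ip: str, internal_ip: str):
        self.external_ip = external_ip
        self.internal_ip = internal_ip

    def new_call(self, external_caller: str, internal_callee: str) -> B2BUASession:
        # Each leg's parameters (codecs, media ports...) would be negotiated
        # separately by the SBC; here we only model the two-leg structure.
        inbound = Leg(next(_leg_ids), self.external_ip, external_caller)
        outbound = Leg(next(_leg_ids), self.internal_ip, internal_callee)
        return B2BUASession(inbound, outbound)

sbc = SBC(external_ip="203.0.113.1", internal_ip="10.0.0.1")
call = sbc.new_call("sip:alice@example.net", "sip:bob@10.0.0.25")
print(call.inbound)   # the external side only ever sees 203.0.113.1
print(call.outbound)
```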

Characteristics and advantages

While the SBC’s main feature is usually security, it is by no means the only one. The SBC is usually responsible for the following functions, among others:

  • Interoperability: Establishing sessions even with internal and external network elements that have different signaling (due to the use of different SIP versions or signaling protocols or because of additional security requirements on one side)
  • Numbering plan management: Allowing legitimate connections and blocking attacks and intrusions
  • Transcoding: Converting incompatible codecs
  • Admission Control: Limiting the number of sessions established to avoid exceeding the WAN line capacity (a minimal sketch of this check follows the list)
  • Remote user connectivity: For example, using VPNs
  • Quality of Service Management
  • Others…
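
As promised above, here is a minimal sketch of the admission-control check, assuming an illustrative per-call bandwidth figure (roughly a G.711 call with overhead) and class and method names invented for the example:

```python
# Call admission control: a new call is admitted only if the bandwidth of
# all active calls plus the new one stays within the WAN capacity reserved
# for voice. Figures are illustrative, not taken from any real product.

class CallAdmissionControl:
    def __init__(self, wan_capacity_kbps: int, per_call_kbps: int = 100):
        self.wan_capacity_kbps = wan_capacity_kbps
        self.per_call_kbps = per_call_kbps
        self.active_calls = 0

    def admit(self) -> bool:
        needed = (self.active_calls + 1) * self.per_call_kbps
        if needed > self.wan_capacity_kbps:
            return False  # reject the call rather than degrade all calls
        self.active_calls += 1
        return True

    def release(self) -> None:
        self.active_calls = max(0, self.active_calls - 1)

cac = CallAdmissionControl(wan_capacity_kbps=1000)
print([cac.admit() for _ in range(12)])  # ten calls fit, the rest are rejected
```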

SBCs arose out of need, catching standards bodies off balance, which created some ambiguity about their roles and limits. Initially, SBCs were dedicated devices located at the border between provider networks and their customers or the Internet; they have since evolved towards virtualized implementations, at times integrated with Firewalls and routers. Today it is common to deploy SBC functions even at remote sites to protect the central office's internal network, especially where there is a direct connection to the Internet.

SBCs in Teldat

Teldat routers implement an advanced, comprehensive SBC using various functions included in the software: the B2BUA functionality, which allows complete control of the Voice over IP sessions established between the internal and external network, ensuring interoperability and security; other security features such as IPSec and the securing of RTP voice sessions via TLS and SRTP; complete control of IP Quality of Service; Admission Control for VoIP calls based on various parameters; and routing table/call screening and codec selection.

Marcel Gil: graduated in Telecommunication Engineering and holds a Master in Telematics (Polytechnic University of Catalunya); he is an SD-WAN Business Line Manager at Teldat.

Industry 4.0

Smart Factories, the Internet of Things, cyber-physical systems, mass customization – these are the futuristic-sounding buzzwords when it comes to Industry 4.0.

The term "Industry 4.0" itself reveals the hopes that are placed on the subject: it points to a possible fourth industrial revolution, and expectations are correspondingly high.


Heidi Eggerstedt and Frank Auer:

REST architecture: the provision of centralized services

For a number of years now we have been constantly bombarded by the idea of the cloud, where there is room for virtually everything, whether it is a private cloud or a public one, with servers installed at the client's head office or in large data centers around the world. When moving in this world, you can never forget the importance of using technology that is as standardized as possible, in order to avoid getting bogged down in tedious configurations when deploying your services.

That is why, several years ago, we saw the birth of technologies like REST, now widespread. REST (Representational State Transfer) is a style of software architecture that defines a number of key features for exchanging data with a web service. Broadly speaking, it takes advantage of a series of existing protocols and technologies to achieve this. These characteristics have made it one of the dominant architectures in the online market, used by thousands of large companies, including, among others, Amazon, eBay and Microsoft.

REST is an architectural style for information exchange, not a standard, and the term was coined by Roy Fielding in his doctoral dissertation.

This article does not seek to give a lesson on what constitutes a REST architecture, nor does it attempt to explain the features required to develop a RESTful application, or what it means to merely be REST-like. Rather, it attempts a brief outline of the implications of using this architecture and the benefits stemming from its implementation and exploitation in a production system.

What makes REST one of the currently favored architectures for the exchange of information between client and server?

The main feature of this architecture is the exchange of messages between clients and servers without any state being held. In contrast to other approaches in the web services market, with REST each message already includes all the information necessary, so the participants in the conversation do not need to know its state in advance, removing the need to exchange state messages and perform other intermediate operations.
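A minimal sketch of what such a self-contained request looks like in Python, using only the standard library (the URL, resource and token are hypothetical examples, not a real API):

```python
# Every request names its resource in the URI and carries its credentials
# in a header, so the server needs no session state between calls.

import urllib.request

def get_order(order_id: int, token: str) -> str:
    req = urllib.request.Request(
        f"https://api.example.com/orders/{order_id}",  # resource in the URI
        headers={
            "Authorization": f"Bearer {token}",        # credentials sent each time
            "Accept": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

# Two successive calls are completely independent; the server does not need
# to remember anything about the first in order to answer the second.
# print(get_order(42, "example-token"))
```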

REST-based applications often use the HTTP protocol to transmit the information. This, together with the fact that no state is held, ensures maximum standardization of the data exchange, which is a huge advantage in the implementation and maintenance of such applications. Given my experience in the world of cloud computing in recent years, and the type of corporate clients and SMEs in today's market, I will now attempt to list the features that I believe are most beneficial:

  • Scalability and load balancing: Virtually everything today is service-oriented, taking advantage of the benefits of what we call cloud computing. In the case of a public cloud, and very broadly speaking, this means leasing a logical space for data processing and storage from a service provider. It is usually very difficult to get a clear idea of the demand for a service in advance, so automatic or manual mechanisms are used to add new physical or virtual instances that run in parallel to provide it. Here, by not storing state on the server, REST makes the operation totally transparent to the client, which never really needs to know which server it is connecting to at a given time.
  • Use of standard protocols: A massive advantage of this architecture is that it is commonly run over HTTP. When deploying at the client's premises, especially a corporate client's, with countless departments involved (systems, communications, perimeter security…), the use of something as standard as HTTP (with its standard ports and headers) means that you hardly need to reconfigure anything in the network for the application to work. Firewalls, IDS, anti-spam, antivirus protection, web reputation: everything tends to be prepared to recognize this type of traffic, and web traffic generally (see the server sketch after this list).
  • Security: When referring to HTTP, we are indirectly referring to HTTPS. Simply by adding a certificate to our server, we upgrade to the de facto security standard for data exchange in web browsing.
  • Agility and efficiency: Being based on a standard protocol greatly increases agility in both exploitation and development. Any client, programmed in any language, can connect to the server, with no need for the special configurations and structures that are part and parcel of other architectures. Java, C/C++, C#, Python, Perl… virtually every programming-language barrier disappears when you use HTTP to transport something as simple as a hypermedia document, such as XML. Furthermore, the different functionalities published by the server are referenced via the request URI, which reduces traffic because requests are self-describing, and headers can carry information without additional messages having to be sent.
  • Use of intermediate network-optimization technologies: HTTP web traffic can be processed by all kinds of intermediate mechanisms, such as proxy servers, including caches and proxies that enforce security policies. The ability to encapsulate this information in HTTP means that this architecture can interact with such intermediaries with very little effort, adding an extra degree of optimization and security to that already present in the architecture.
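
As referenced in the list above, here is a minimal sketch of the server side, again standard library only (the resource names and data are invented for the example): a REST-style endpoint where the URI alone identifies the resource and each request is answered without any session state.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

ITEMS = {"1": {"id": "1", "name": "router"}}  # illustrative data store

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /items/1 -- the URI names the resource, nothing else needed
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "items" and parts[1] in ITEMS:
            body = json.dumps(ITEMS[parts[1]]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)  # unknown resource
            self.end_headers()

if __name__ == "__main__":
    # Serving plain HTTP here; in production the same handler would sit
    # behind TLS (HTTPS), as discussed in the Security point above.
    HTTPServer(("localhost", 8080), Handler).serve_forever()
```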

There is a great deal of documentation on the web about the benefits of this architecture over others, and here I have only named a few in passing, without going into technical details. It is also important, though, to consider the disadvantages. The biggest one, in my view, stems precisely from what makes this architecture advantageous: maximizing the use of standard protocols and technologies. HTTP/HTTPS is the quintessential information transfer protocol and, therefore, the most tempting target. It can be vulnerable to attackers, especially when you are using a cloud-oriented service. Correct information encryption mechanisms must always be used to avoid uncomfortable situations such as identity theft, credential theft, etc. The developer and also, to a large extent, the client bear ultimate responsibility for ensuring a secure message exchange, especially when it comes to services that can operate on an external supplier's network.

REST: Teldat’s centralized management moves towards convergence

At Teldat, we have been actively working for a number of years on the development of new technologies to manage not just our physical devices, but all the functions that these devices offer, from Wi-Fi management to the management of applications in corporate routers.

The main idea is to offer our customers this service in the cloud. In order to do this, we have decided to base our centralized management and monitoring communications on the REST architecture. We have also developed a whole array of security solutions on top of this architecture, transparent to both the user and the network administrator, to prevent potential attacks by third parties, especially when the plain HTTP protocol is used.

In this way, making the most of the many advantages that this architecture has to offer, we are able to create a real ecosystem where the most important features of our devices, including WLAN controllers, access points, network optimization, data synchronization, etc., can coexist, and this is only the beginning. In summary, this architecture allows us to easily combine our technologies for centralized management and monitoring: its implementation is practically transparent, it makes efficient use of the transmitted data, and it takes full advantage of the network-optimization techniques in use today.

Felipe Camacho: