Positioning systems and mobility management

Years ago there was a revolution in overseas travel: we no longer needed to learn the routes of a country because satellites could tell us our position in real time (using GPS).

This technology was taken to another level by the professional market with the appearance of RTLS (Real Time Location Systems). These systems could use GPS signals, RFID (Radio Frequency Identification) or other mechanisms, making it possible to track fleets, position emergency services during a disaster and monitor staff or critical resources, all from a control center. The devices knew their location and could communicate this information.

Francisco Navarro: Francisco Navarro, a Physics graduate, is a Business Line Manager in the Marketing Department, responsible for Teldat's corporate routers.

The ABC of SBC: definition, characteristics and advantages

The Firewall is the quintessential element providing network security when you need to interconnect with other networks, allowing outgoing traffic and blocking unsolicited incoming traffic. The Firewall is a necessary element, but insufficient on its own, since some threats hide from network firewalls inside legitimate-looking traffic; hence the need for other specialized protective elements such as antivirus or antispam.

The case of Voice over IP is even more particular. Firewalls are generally based on NAT but, unfortunately, VoIP connections are incompatible with NAT, because SIP signaling carries the IP addresses and ports of the media streams inside its payload, which NAT does not rewrite. A possible solution would be to open exceptions in the NAT Firewall for Voice over IP. This is not a good idea, though, because it compromises security and does not protect against Denial of Service and intrusion attacks. Intrusion control deserves special mention, not only at the network layer (which a Firewall could perform) but, primarily, at the application layer, aimed at ensuring legitimate call traffic and avoiding attacks, intrusions and fraud. To make matters worse, VoIP sessions are created dynamically as calls are established, on ports negotiated per call, further complicating control.

A new element is required to address these risks. This element should monitor and be actively involved in the VoIP sessions established between the internal and external network, ensuring that these connections are properly established and that they are legitimate, secure and reliable. This element is the Session Border Controller (SBC).

What is an SBC?

An SBC is basically a Firewall for voice traffic: its job is to ensure that sessions are legitimate, detecting and blocking potential attacks and intrusions. Another important security feature (similar to what a Firewall does for data services) is concealing the voice services of the internal network from the outside. To perform all of these functions, the SBC sits, like the Firewall, on the border between the internal and external network (hence the name “Session Border Controller”), but at a more internal layer than the Firewall, usually in an intermediate network between the Firewall and the internal network, or DMZ (“Demilitarized Zone”).

The SBC doesn’t just monitor and control sessions between the internal and external network; it reconstructs them in order to have complete control. That is, when a session is established between the internal and external network, two sessions are actually established: one from the internal element to the SBC, and another from the SBC to the external element, with the SBC negotiating the call parameters with each end separately. Not only does this allow full control of the sessions (who can connect, to where, when, how, detection of attacks and intrusions…), it also conceals the internal network from the outside. This basic SBC behavior is known as Back-to-Back User Agent (B2BUA).
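
To make the B2BUA idea more concrete, below is a deliberately simplified conceptual sketch in Python (my own illustration, not a real SIP stack and not Teldat's implementation; the class names, addresses and codecs are invented). It only shows the essence of the behavior: one leg towards the outside, a separate leg towards the inside, parameters negotiated independently on each, and the internal addressing never exposed.

```python
# Conceptual B2BUA sketch: the SBC terminates the external leg and opens a
# separate internal leg, so the outside only ever sees the SBC's own address.
from dataclasses import dataclass

@dataclass
class CallLeg:
    local_addr: str      # address this leg presents to its peer
    remote_addr: str     # the peer on this side of the SBC
    codec: str           # media parameters negotiated on this leg only

class SessionBorderController:
    def __init__(self, external_addr, internal_addr):
        self.external_addr = external_addr
        self.internal_addr = internal_addr

    def handle_inbound_call(self, caller_addr, offered_codecs, internal_target, allowed_codecs):
        # Leg 1: SBC <-> external caller, using only the SBC's public address
        outer = CallLeg(local_addr=self.external_addr,
                        remote_addr=caller_addr,
                        codec=next(c for c in offered_codecs if c in allowed_codecs))
        # Leg 2: SBC <-> internal PBX or phone; the caller never learns this address
        inner = CallLeg(local_addr=self.internal_addr,
                        remote_addr=internal_target,
                        codec=allowed_codecs[0])
        return outer, inner   # the SBC relays signalling and media between the two legs

sbc = SessionBorderController(external_addr="203.0.113.10", internal_addr="10.0.0.2")
outer, inner = sbc.handle_inbound_call(caller_addr="198.51.100.7",
                                       offered_codecs=["G.729", "G.711"],
                                       internal_target="10.0.0.50",
                                       allowed_codecs=["G.711"])
print(outer)   # the external party negotiates with 203.0.113.10 only
print(inner)   # the internal party at 10.0.0.50 stays hidden from the outside
```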

Characteristics and advantages

While the SBC’s main feature is usually security, it is by no means the only one. The SBC is usually responsible for the following functions, among others:

  • Interoperability: Establishing sessions even with internal and external network elements that have different signaling (due to the use of different SIP versions or signaling protocols or because of additional security requirements on one side)
  • Numbering plan management: Allowing legitimate connections and blocking attacks and intrusions
  • Transcoding: Converting incompatible codecs
  • Admission Control: Limiting the number of sessions established to avoid exceeding the WAN line capacity (see the sketch after this list)
  • Remote user connectivity: For example, using VPNs
  • Quality of Service Management
  • Others…
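
As an illustration of the Admission Control function listed above, here is a minimal sketch (my own, with invented figures, not Teldat's implementation): a new call is only admitted if the bandwidth already committed to active calls, plus that of the new call, stays within the share of the WAN line reserved for voice.

```python
# Minimal call admission control sketch: reject a new call when admitting it
# would exceed the WAN bandwidth reserved for voice.
class CallAdmissionControl:
    def __init__(self, voice_bandwidth_kbps):
        self.capacity = voice_bandwidth_kbps   # share of the WAN line reserved for voice
        self.active_calls = {}                 # call_id -> bandwidth in use (kbps)

    def request_call(self, call_id, codec_bandwidth_kbps):
        used = sum(self.active_calls.values())
        if used + codec_bandwidth_kbps > self.capacity:
            return False                       # reject: the WAN line would be oversubscribed
        self.active_calls[call_id] = codec_bandwidth_kbps
        return True                            # admit the call

    def end_call(self, call_id):
        self.active_calls.pop(call_id, None)   # release the reserved bandwidth

cac = CallAdmissionControl(voice_bandwidth_kbps=256)
print(cac.request_call("call-1", 80))   # True (80 kbps per call is a hypothetical figure)
print(cac.request_call("call-2", 80))   # True
print(cac.request_call("call-3", 80))   # True
print(cac.request_call("call-4", 80))   # False: would exceed the 256 kbps reserved for voice
```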

SBCs arose out of need, catching standards bodies off balance, which created some ambiguity about their roles and limits. Initially, SBCs were dedicated devices located at the border between provider networks and their customers or the Internet; they have since evolved towards virtualized implementations, at times integrated with firewalls and routers. Today it is common to deploy SBC functions even at remote sites to protect the central office's internal network, especially where there is a direct connection to the Internet.

SBCs in Teldat

Teldat routers implement an advanced, comprehensive SBC through various functions included in the software. Chief among them is the B2BUA functionality, which allows complete control of the Voice over IP sessions established between the internal and external network, ensuring interoperability and security. This is complemented by other security features such as IPSec and the securing of voice sessions (RTSP, TLS and SRTP), plus complete control of IP Quality of Service, Admission Control for VoIP calls based on various parameters, routing tables/call screening and codec selection.

Marcel Gil: a graduate in Telecommunications Engineering with a Master's in Telematics (Polytechnic University of Catalunya), Marcel Gil is an SD-WAN Business Line Manager at Teldat.

Industry 4.0

Smart Factories, the Internet of Things, cyber-physical systems, mass customization: these are the futuristic-sounding buzzwords when it comes to Industry 4.0.

The term “Industry 4.0” itself conveys the hopes placed on the subject: it points to a possible fourth industrial revolution, and expectations are correspondingly high.

Heidi Eggerstedt and Frank Auer:

REST architecture: the provision of centralized services

For a number of years now we have been constantly bombarded by the idea of the cloud, where there is room for virtually everything, whether a private cloud or a public one, with servers installed at the client’s head office or in large data centers around the world. When moving in this world, you can never forget the importance of using technology that is as standardized as possible, to avoid getting bogged down in tedious configurations when deploying your services.

That is why, several years ago, we saw the birth of technologies like REST, now widespread. REST (Representational State Transfer) is a style of software architecture that defines a number of key features for exchanging data with a web service. Broadly speaking, it takes advantage of a series of existing protocols and technologies to achieve this. These qualities have made it one of the dominant architectures in the online market, used by thousands of large companies, including, among others, Amazon, eBay and Microsoft.

REST is an architectural style for information exchange, not a standard, and the term was coined by Roy Fielding in his doctoral dissertation.

This article does not seek to give a lesson on what constitutes a REST architecture, nor does it attempt to explain the features required to develop a RESTful application, or what it means to simply be REST-like. Rather, it attempts a brief outline of the implications of using this architecture and the benefits stemming from its implementation and operation in a production system.

What makes REST one of the currently favored architectures for the exchange of information between client and server?

The main feature of this architecture is the exchange of messages between clients and servers without any state being held. In contrast to other approaches in the web services market, with REST each message already includes all the information necessary, so the participants in the conversation do not need to know its state in advance; this removes the need to exchange state messages and perform other intermediate operations.
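
As a small illustration of this statelessness, consider the hypothetical request below (the endpoint, token and fields are invented): everything the server needs (the resource, the credentials, the desired representation) travels in the message itself, so any server instance can answer it without prior context.

```python
# A self-contained, stateless REST request: no session is assumed on the server.
import json
import urllib.request

request = urllib.request.Request(
    url="https://api.example.com/v1/devices/router-42/status",  # resource fully named in the URI
    headers={
        "Authorization": "Bearer <token>",   # credentials travel with every request
        "Accept": "application/json",        # desired representation, also stated per request
    },
)

with urllib.request.urlopen(request) as response:
    status = json.load(response)             # the reply is self-contained as well
    print(status)
```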

REST-based applications often use the HTTP protocol to transmit the information. This, and the fact that no state is held, ensures maximum standardization of data exchange, which is a huge advantage in the implementation and maintenance of such applications. Given my experience in the world of cloud computing in recent years, and the type of corporate clients and SMEs in today’s market, I will now attempt to list the features that I believe are most beneficial:

  • Scalability and load balancing: Virtually everything today is service oriented, taking advantage of the benefits of what we call cloud computing. This is, in the case of a public cloud and very broadly speaking, the leasing of a logical space for data processing and data storage from a service provider. It is usually very difficult to obtain a clear idea of the demand for the service in advance. For this reason, certain automatic or manual mechanisms are used to add new physical or virtual instances that run in parallel to provide the service. In this case, by not storing the state in the server, REST allows the client total transparency of the operation without really needing to know what server they are connecting to at a given time.
  • Use of standard protocols: Much of the advantage of this architecture comes from the fact that it is commonly carried over HTTP. When deploying at the client’s premises, especially a corporate client’s with countless departments (systems, communications, perimeters…), using something as standard as HTTP (with its standard ports and headers) means you hardly need to reconfigure anything in the network for the application to work. Firewalls, IDS, anti-spam, antivirus protection, web reputation: everything tends to be prepared to recognize this kind of traffic, and web traffic in general.
  • Security: When referring to HTTP, we are indirectly referring to HTTPS. By simply adding a certificate to our server, we upgrade to the de facto security standard for data exchange in web browsing (a minimal sketch of this follows the list).
  • Agility and efficiency: Being based on a standard protocol greatly increases agility, in both operation and development. Any client, programmed in any language, can connect to the server, with no need for the special configurations and structures that are part and parcel of other architectures. Java, C/C++, C#, Python, Perl… Virtually every programming language barrier disappears when you use HTTP to transport something as simple as a hypermedia document, such as an XML. Furthermore, the different functionalities published on the server are referenced via the request URI, reducing traffic because requests are self-describing, and headers can carry information without having to send additional messages.
  • Use of intermediate network optimization technologies: HTTP web traffic can be processed by all kinds of intermediate mechanisms, such as proxy servers, including caches and others that apply security policies. The ability to encapsulate this information in HTTP means that this architecture can interact with such intermediaries with very little effort, adding an extra degree of optimization and security to that already present in the architecture.
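
As a minimal sketch of the “just add a certificate” point above (the certificate and key file names are placeholders you would generate yourself, and nothing here is REST-specific), this is how a plain Python HTTP server can be wrapped with TLS:

```python
# The same plain HTTP server, upgraded to HTTPS by loading a certificate.
import http.server
import ssl

server = http.server.HTTPServer(("0.0.0.0", 8443), http.server.SimpleHTTPRequestHandler)

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server.crt", keyfile="server.key")  # placeholder files
server.socket = context.wrap_socket(server.socket, server_side=True)  # HTTP becomes HTTPS

server.serve_forever()   # clients now connect with https://host:8443/
```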

There is a great deal of documentation on the web about the benefits of this architecture over others, and here I have only named a few in passing, without going into technical detail. It is also important, though, to consider the disadvantages. The biggest one, in my view, is precisely what makes this architecture advantageous: maximizing the use of standard protocols and technologies. HTTP/HTTPS is the quintessential information transfer protocol and, therefore, the most tempting target, and at times a vulnerable one, especially when you are using a cloud-oriented service. Correct information encryption mechanisms must always be used to avoid uncomfortable situations such as identity theft, credential theft, etc. The developer and, to a large extent, the client bear ultimate responsibility for ensuring a secure message exchange, especially when it comes to services that can operate on an external supplier’s network.

REST: Teldat’s centralized management moves towards convergence

At Teldat, we have been actively working for a number of years on the development of new technologies to manage not just our physical devices, but all the functions that these devices offer, from Wi-Fi management to the management of applications in corporate routers.

The main idea is to offer our customers this service in the cloud. In order to do this, we have decided to base our management communications and centralized monitoring on the REST architecture. We have also developed a whole array of security solutions on top of this architecture, transparent to both the user and the network administrator, to prevent potential attacks by third parties, especially when the HTTP protocol is used.

In this way, making the most of the many advantages that this architecture has to offer, we are able to create a real ecosystem where the most important features of the device, including WLAN controllers, access points, network optimization, data synchronization, etc., can coexist, and this is only the beginning. In summary, this architecture allows us to easily combine our technologies for centralized management and monitoring: its implementation is practically transparent, it makes efficient use of the transmitted data, and it takes full advantage of the technologies used today for network optimization.

Felipe Camacho:

Control structures for modern microprocessors

Since the advent of computers, a large variety of processors with different instruction set architectures (ISAs) have been developed. Their design commenced with the definition of the instruction set and culminated in the implementation of a microarchitecture that complied with that specification. During the eighties, interest centered on determining the most desirable features of that set. In the nineties, however, the focus shifted to the development of innovative microarchitecture techniques applicable to all instruction sets.

The main objective of each new generation of processors has been increased performance over the previous generation and, more recently, the additional goal of reducing power consumption. But how can performance be measured? Basically, in terms of how long it takes to execute a particular program. With a series of very simple manipulations we can express that execution time as a product of factors, each with a precise meaning:

Execution time = Number of instructions × CPI × Cycle time

The first term to the right of the equals sign indicates the total number of dynamic instructions executed by the program; the second, how many machine cycles per instruction (CPI) are consumed on average; and the third, the machine cycle time, that is, the inverse of the clock frequency. Performance can be improved by acting on each of the three terms. Unfortunately, they are not always independent, and reducing one of them can increase the others.

One of the simplest techniques for acting on the second and third terms, at a small hardware cost, is pipelining. The system or unit is partitioned into K stages by means of buffers, so that the process is split into K subprocesses, each of which runs in one of the K stages. For a system that operates on one task at a time, the throughput is equal to 1/D, where D is the latency of a task, i.e. the delay associated with its execution by the system. Pipelining effectively partitions instruction processing into multiple stages, with a new task beginning in each subunit as soon as the previous task leaves it, which occurs every D/K units of time.

Processors that achieve a CPI equal to one using this technique are called scalar; their typical representatives are the RISC processors designed in the eighties. The same parameter is used to define superscalar processors, those capable of a CPI of less than one. The average value of its inverse, IPC (instructions per cycle), is a good measure of the effectiveness of the microarchitecture.

The major hurdle for pipelined processors, whether superscalar or not, are pipeline hazards, which force a stall of the affected stage and of the subsequent stages containing the instructions that come next in program order. Three types are identified:

  • Structural hazards arise from resource conflicts, when the hardware cannot support some combination of instructions at the same time.
  • Control hazards occur when the program executes a jump instruction that branches to a destination different from the one predicted.
  • Data hazards occur when instructions access data that depends on the result of a previous instruction that is not yet available.

Data hazards are classified according to the order of occurrence in the program of two instructions i and j, with i occurring before j, an order that must be preserved in the pipeline. The varieties are: Read After Write (RAW), when j tries to read an operand before i writes it; Write After Write (WAW), when j tries to write an operand before it is written by i; and Write After Read (WAR), when j tries to write an operand before it is read by i. The latter two are false, or name, dependencies, which occur as a result of the limited number of registers defined in the architecture.
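
A toy numerical illustration of the argument above (the figures are my own, not from the article): ideal pipelining multiplies throughput by up to K, while hazard stalls raise the effective CPI above one and eat into that gain.

```python
# Toy pipeline throughput calculation with invented numbers.
D = 10.0   # latency of one task through the whole unit, in ns
K = 5      # number of pipeline stages, so the cycle time is D / K

cycle_time = D / K
unpipelined_throughput = 1 / D             # one instruction completes every D ns
ideal_throughput = 1 / cycle_time          # one instruction per cycle: CPI = 1

instructions = 1000
stall_cycles = 300                          # cycles lost to structural/control/data hazards
effective_cpi = (instructions + stall_cycles) / instructions
real_throughput = 1 / (effective_cpi * cycle_time)

print(f"unpipelined: {unpipelined_throughput:.2f} instr/ns")                       # 0.10
print(f"ideal pipeline: {ideal_throughput:.2f} instr/ns")                          # 0.50
print(f"with stalls (CPI={effective_cpi:.2f}): {real_throughput:.2f} instr/ns")    # ~0.38
```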

How do we manage the dependencies that arise from pipelining and which are complicated further in superscalar processors with multiple units operating simultaneously? Well basically by introducing a series of control structures that are read and updated as the instruction advances through the different stages. For the purposes of our discussion these stages are:

Fetch, Dispatch (IQ), Issue, {eXecute#1 ‖ … ‖ eXecute#N}, Writeback, Commit

The process is more or less as follows. The control unit fetches an instruction every clock cycle and passes it on, either in the next cycle if it comes from the I-cache or a few cycles later if it comes from RAM, to the following stage, Dispatch, where four actions are carried out:

  1. A physical register, not visible in the programming model, is selected from the Free List (FL).
  2. The temporary association between that register and the architectural register is written in the Rename Table (RAT). This eliminates the false dependencies (WAW and WAR) by dynamically assigning different names to a single architectural register, keeping track of the value of the result rather than the name of the register, which is possible because there are more physical registers than those defined by the architecture.
  3. An entry is reserved in the Reorder Buffer (ROB), a circular queue that holds the status of the physical registers associated with in-flight instructions.
  4. The instruction is placed in the Issue Queue (IQ) until its operands become available.

When the operands are available, in the next clock cycle the instruction is issued to the execution unit if the Scoreboard (SB) determines that the operation can proceed without hazards. If so, the operation starts a cycle later in the execution unit (eXecute#K), which is associated in the SB with the physical register that will hold the result and whose progress is updated every clock cycle. When execution finishes, the execution unit updates the physical register at the Writeback stage one or several cycles later. At the same time, the status of the entry associated with the register in the ROB is updated to “completed” and the status of the operation in the SB is updated to Writeback. If the instruction in the Writeback state is a store, the content of the architectural register is temporarily transferred to an intermediate structure called the Finished Store Buffer (FSB), which holds the data in case any prior program instruction, still under way in any of the execution units, generates an exception (assuming a precise exception model).

The instruction is committed in the next stage of the pipeline, one or several cycles later, when the ROB head pointer points to it in the circular queue. The physical register is then transferred to the architectural one, updating the state of the machine, the association between physical and architectural registers held by the RAT is freed, the physical register is returned to the pool in the FL and, in the case of a write, the FSB content is transferred to memory, whether D-cache or RAM.
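
To make these structures more tangible, here is a deliberately simplified sketch in Python (my own conceptual model, not the logic of any real core) of the dispatch-time bookkeeping just described: take a physical register from the free list, record the mapping in the RAT, allocate a ROB entry, and release the resources at commit, in program order.

```python
# Minimal register renaming / reorder buffer sketch.
from collections import deque

class RenameDispatch:
    def __init__(self, num_phys_regs, num_arch_regs):
        self.free_list = deque(range(num_phys_regs))   # pool of physical registers (FL)
        self.rat = {}                                   # arch reg -> phys reg (RAT)
        self.arch_regs = [0] * num_arch_regs            # architectural (committed) state
        self.phys_regs = [None] * num_phys_regs         # values produced by in-flight instructions
        self.rob = deque()                              # reorder buffer (ROB), in program order

    def dispatch(self, dest_arch_reg):
        if not self.free_list:
            return None                       # stall: no free physical register
        phys = self.free_list.popleft()       # (1) take a physical register from the FL
        self.rat[dest_arch_reg] = phys        # (2) record the mapping in the RAT
        entry = {"arch": dest_arch_reg, "phys": phys, "completed": False}
        self.rob.append(entry)                # (3) reserve a ROB entry
        return entry                          # (4) the instruction would now wait in the IQ

    def writeback(self, entry, value):
        self.phys_regs[entry["phys"]] = value  # execution unit updates the physical register
        entry["completed"] = True              # ROB entry marked "completed"

    def commit(self):
        if self.rob and self.rob[0]["completed"]:
            entry = self.rob.popleft()                                     # ROB head retires in order
            self.arch_regs[entry["arch"]] = self.phys_regs[entry["phys"]]  # update machine state
            if self.rat.get(entry["arch"]) == entry["phys"]:
                del self.rat[entry["arch"]]                                # free the RAT association
            self.free_list.append(entry["phys"])                           # return the register to the FL
            return entry
        return None

core = RenameDispatch(num_phys_regs=8, num_arch_regs=4)
e1 = core.dispatch(dest_arch_reg=2)   # two in-flight writes to the same architectural
e2 = core.dispatch(dest_arch_reg=2)   # register get different physical registers (no WAW hazard)
core.writeback(e1, value=42)
core.commit()                         # e1 retires; architectural register 2 now holds 42
print(core.arch_regs)
```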

Most processors used in TELDAT routers belong to Freescale's PowerPC family; they are superscalar and, of course, incorporate all the techniques described above. When an instruction is dispatched to one of the Issue Queues (IQ) (the queue in our example was centralized, whereas in this architecture it is distributed, with a queue for each unit), a rename register (RR), equivalent to the physical register, is assigned to the result of the operation, and an entry is assigned in the Completion Queue (CQ), which performs the ROB functions. At the end of the issue stage, instructions and their operands, if available, are latched into the execution units' reservation stations (RS). Execution completes by updating the RR and its status in the CQ. Completed instructions are removed from the CQ in program order and the results are transferred from the RR to the architecture-defined registers one cycle later.

In this article we have tried to show, without going into justifications, the structures that appeared alongside the first RISC processors back in the eighties, and their motivations: the pipelining concept, the hazards (real and avoidable), the register renaming technique, etc. These ideas stem from the algorithm developed by Robert Tomasulo at IBM in 1967, first implemented in the floating point unit of the IBM System/360 Model 91, and they laid the foundation for the out-of-order execution found in today's processors. I hope that the concise nature of this article has, if nothing else, aroused the curiosity of the kind reader.

Manuel Sanchez: Manuel Sánchez González-Pola, Telecommunications Engineer, is part of Teldat’s R&D Department. Within this department he works as a Project Manager in the Hardware team.  