REST architecture: The provision of centralized services

For a number of years now we have been constantly bombarded by the idea of the cloud, where there is room for virtually everything, whether it is a private cloud or a public one, with servers installed at the client’s head office or in large data centers around the world. When moving in this world, you can never forget the importance of using technology that is as standardized as possible in order to avoid getting bogged down in tedious configurations when deploying your services.

That is why, several years ago, we saw the birth of technologies like REST, now widespread. REST (Representational State Transfer) is a style of software architecture that defines a number of key features for exchanging data with a web service. Broadly speaking, it takes advantage of a series of existing protocols and technologies to achieve this. These characteristics have made it one of the dominant architectures in the online market, used by thousands of large companies, including Amazon, eBay and Microsoft, among others.

REST is an architectural style for information exchange, not a standard, and the term was coined by Roy Fielding in his doctoral dissertation.

This article does not seek to give a lesson on what constitutes a REST architecture, nor does it attempt to explain the features required to develop a RESTful application, or what it means for a system to be merely REST-like. Rather, it offers a brief outline of the implications of using this architecture and the benefits of implementing and exploiting it in a production system.

What makes REST one of the currently favored architectures for the exchange of information between client and server?

The main feature of this architectural style is the exchange of messages between clients and servers without any state being held. In contrast to other protocols in the web services market, with REST each message already includes all the information necessary, so the participants in the conversation need no prior knowledge of the state of the exchange; this removes the need to exchange state messages and perform other intermediate operations.
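By way of illustration, here is a minimal sketch of such a self-contained request using Python’s standard library; the endpoint URL and token are hypothetical. Everything the server needs, the resource URI, the credentials and the desired representation, travels inside this single message:

# Minimal sketch of a stateless REST request (hypothetical endpoint and
# token). The server needs no session memory: every call is complete.
import json
import urllib.request

request = urllib.request.Request(
    "https://api.example.com/devices/42",      # the URI identifies the resource
    headers={
        "Authorization": "Bearer <token>",     # credentials sent with every call
        "Accept": "application/json",          # desired representation
    },
)
with urllib.request.urlopen(request) as response:
    print(json.load(response))                 # parse and show the JSON body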

REST-based applications often use the HTTP protocol to transmit the information. This, and the fact that no state is held, ensures maximum standardization of data exchange, which is a huge advantage in the implementation and maintenance of such applications. Given my experience in the world of cloud computing in recent years, and the type of corporate clients and SMEs in today’s market, I will now attempt to list the features that I believe are most beneficial:

  • Scalability and load balancing: Virtually everything today is service oriented, taking advantage of the benefits of what we call cloud computing. In the case of a public cloud, and very broadly speaking, this is the leasing of a logical space for data processing and storage from a service provider. It is usually very difficult to get a clear idea of the demand for the service in advance, so automatic or manual mechanisms are used to add new physical or virtual instances that run in parallel to provide it. Here, by not storing state on the server, REST makes the operation totally transparent to the client, which never needs to know which server it is connected to at a given time.
  • Use of standard protocols: A massive practical advantage of this architecture stems from its common association with HTTP. When deploying at the client’s premises, especially a corporate client’s, with countless departments (systems, communications, perimeter security…), the use of something as standard as HTTP (with its standard ports and headers) means that you hardly need to reconfigure anything in the network for the application to work. Firewalls, IDS, anti-spam, antivirus protection, web reputation: everything tends to be prepared to recognize this type of traffic, and web traffic generally.
  • Security: When referring to HTTP, we are indirectly referring to HTTPS. By simply adding a certificate to our server, we upgrade to the security standard par excellence for data exchange in web browsing.
  • Agility and efficiency: Because it is based on a standard protocol, agility increases greatly in both development and operation. Any client, programmed in any language, can connect to the server, with no need for the special configurations and structures that are part and parcel of other architectures. Java, C/C++, C#, Python, Perl… Virtually every programming-language barrier disappears when you use HTTP to transport something as simple as a hypermedia document, such as XML. Furthermore, the different functionalities published by the server are referenced via the request URI, which is self-describing and thus reduces traffic, and headers can carry extra information without the need for additional messages (see the server sketch after this list).
  • Use of intermediate network-optimization technologies: HTTP web traffic can be processed by all kinds of intermediate mechanisms, such as proxy servers, including caches and intermediaries that apply security policies. The ability to encapsulate information in HTTP means that this architecture can interact with such intermediaries with very little effort, adding an extra degree of optimization and security to that already present in the architecture.
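As a sketch of how little machinery this requires, the following toy REST-style service uses only Python’s standard library; the resource names and data are hypothetical, and a production service would add TLS, authentication and proper routing. Note how the request URI alone selects the published functionality:

# Toy REST-style service (hypothetical resources). Each GET is answered
# purely from the request itself; no session state is kept.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

DEVICES = {"1": {"name": "router-1", "status": "up"}}  # toy data store

class RestHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The URI itself names the functionality: /devices/<id>
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "devices" and parts[1] in DEVICES:
            body = json.dumps(DEVICES[parts[1]]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Any HTTP-capable client, in any language, can now issue
    # GET http://localhost:8000/devices/1 and receive JSON back.
    HTTPServer(("localhost", 8000), RestHandler).serve_forever()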

There is a great deal of documentation on the web about the benefits of this architecture over others, and here I have only named a few in passing, without going into technical details. It is also important, though, to consider the disadvantages. The biggest one, in my view, stems precisely from what makes this architecture advantageous: the maximal use of standard protocols and technologies. HTTP/HTTPS is the quintessential information-transfer protocol and, therefore, the most tempting target, at times vulnerable to attackers, especially when you are using a cloud-oriented service. Correct information-encryption mechanisms must always be used to avoid uncomfortable situations such as identity theft, credential theft, and the like. The developer and, to a large extent, the client bear the ultimate responsibility for ensuring a secure message exchange, especially for services that operate on an external supplier’s network.

REST: Teldat’s centralized management moves towards convergence

At Teldat, we have been actively working for a number of years on the development of new technologies to manage not just our physical devices, but all the functions that these devices offer, from Wi-Fi management to the management of applications in corporate routers.

The main idea is to offer our customers this service in the cloud. To do so, we have decided to base our centralized management and monitoring communications on the REST architecture. We have also developed a whole array of security solutions on top of this architecture, transparent to both the user and the network administrator, to prevent potential attacks by third parties, especially when the HTTP protocol is used.

In this way, making the most of the many advantages this architecture has to offer, we are able to create a real ecosystem where the most important features of the device, including WLAN controllers, access points, network optimization, data synchronization, etc., can coexist, and this is only the beginning. In summary, this architecture allows us to easily combine our centralized management and monitoring technologies: its implementation is practically transparent, it makes efficient use of the transmitted data, and it takes full advantage of the network-optimization technologies in use today.

Felipe Camacho

Control structures for modern microprocessors

Since the advent of computers, a large variety of processors with different instruction set architectures (ISAs) have been developed. Their design commenced with the definition of the instruction set and culminated in the implementation of the microarchitecture that complied with the specification. During the eighties, interest centered on determining the most desirable features of that set. In the nineties, however, focus shifted to the development of innovative microarchitecture techniques applicable to all instruction sets.

The main objective of each new generation of processors has been increased performance over the previous generation and, in recent times, with the additional consideration of a power consumption reduction. But how can performance be measured? Basically in terms of how long it takes to execute a particular program. With a series of very simple manipulations we can express performance on the basis of a number of factors with a precise meaning.

$$ T_{\text{CPU}} = N_{\text{instructions}} \times \mathrm{CPI} \times T_{\text{cycle}} $$

The first term to the right of the equals sign indicates the total number of dynamic instructions the program needs to execute; the second, how many machine cycles per instruction (CPI) are consumed on average; and the third, the machine cycle time, i.e. the inverse of the clock frequency. Performance can be improved by acting on each of the three terms. Unfortunately, though, they are not always independent, and reducing one of them can increase the others.
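To make the terms concrete with hypothetical figures: a program executing $10^9$ dynamic instructions at an average CPI of 1.5 on a 2 GHz machine, where $T_{\text{cycle}} = 0.5$ ns, would take

$$ T_{\text{CPU}} = 10^9 \times 1.5 \times 0.5\,\text{ns} = 0.75\,\text{s}, $$

and halving the CPI or doubling the clock frequency would each halve that time, provided the other terms remain unchanged.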

One of the simplest techniques for acting on the second and third terms at a small hardware cost is pipelining. The system or unit is partitioned into multiple stages by means of buffers, splitting the process into K subprocesses that run in each of the K stages. For a system that operates on one task at a time, the throughput is equal to 1/D, where D is the latency of a task, i.e. the delay associated with its execution by the system. Pipelining effectively partitions instruction processing into multiple stages, with a new task begun in each subunit as soon as the previous task leaves it, which occurs every D/K units of time.

Processors that achieve a CPI equal to one using this technique are called scalar, and their representatives are the RISC processors designed in the eighties. The same parameter is used to define superscalar processors, as those capable of a CPI of less than one. The average value of its inverse, IPC (instructions per cycle), is a good measure of the effectiveness of the microarchitecture.

The major hurdle for pipelined processors, whether superscalar or not, is pipeline hazards, which make it necessary to stall the execution of the affected stage and of the subsequent stages holding the instructions that come next in program order. Three types are identified: structural hazards, which arise from resource conflicts when the hardware cannot support some combination of instructions at the same time; control hazards, which occur when a program executes a jump instruction that branches to a destination different from the one predicted; and data hazards, which occur when instructions access data that depends on the result of a previous instruction that is not yet available. Data hazards are classified according to the order of occurrence in the program of two instructions i and j, with i occurring before j, an order that must be preserved in the pipeline. The varieties are: Read After Write (RAW), when j tries to read an operand before i writes to it; Write After Write (WAW), when j tries to write an operand before it is written by i; and Write After Read (WAR), when j tries to write an operand before it is read by i. The latter two are false, or name, dependencies, which occur as a result of the limited number of registers defined in the architecture.
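Under idealized assumptions, namely perfectly balanced stages and no hazards, the benefit of a K-stage pipeline is easy to quantify:

$$ \text{Throughput}_{\text{unpipelined}} = \frac{1}{D}, \qquad \text{Throughput}_{\text{pipelined}} = \frac{1}{D/K} = \frac{K}{D}, $$

an ideal speedup of K. Hazards and stage imbalance reduce this figure in practice.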

How do we manage the dependencies that arise from pipelining, and which are complicated further in superscalar processors with multiple units operating simultaneously? Well, basically by introducing a series of control structures that are read and updated as the instruction advances through the different stages. For the purposes of our discussion, these stages are:

Fetch, Dispatch (IQ), Issue, {eXecute#1 ∥ … ∥ eXecute#N}, Writeback, Commit

The process is more or less as follows: the control unit fetches an instruction every clock cycle and passes it on to the following stage, Dispatch, either in the next cycle if it comes from the I-cache or a few cycles later if it comes from RAM. At Dispatch, four actions are carried out: (1) a physical register, not visible in the programming model, is selected from the Free List (FL); (2) the temporary association between that register and the architectural register is written in the Rename Table (RAT), which eliminates false dependencies (WAW and WAR) by dynamically assigning different names to a single architectural register, keeping track of the value of the result rather than the name of the register (this is possible because there are more physical registers than the architecture defines); (3) an entry is reserved in the Reorder Buffer (ROB), a circular queue that holds the status of the physical registers associated with in-flight instructions; and (4) the instruction is placed in the Issue Queue (IQ) until its operands become available.

At that point, in the next clock cycle, the instruction is issued to the execution unit if the Scoreboard (SB) determines that the operation can proceed without hazards. If so, the operation starts a cycle later in the execution unit (eXecute#K), which is associated in the SB with the physical register that will hold the result, and its progress is updated there every clock cycle. When execution finishes, the unit updates the physical register at the Writeback stage one or several cycles later. At the same time, the status of the entry associated with the register in the ROB is updated to “completed” and the status of the operation in the SB is updated to Writeback. If the instruction in the Writeback state is a store, the content of the register is temporarily transferred to an intermediate structure called the Finished Store Buffer (FSB), which holds the data in case any prior instruction in program order, still underway in one of the execution units, generates an exception (assuming a precise exception model).

The instruction is committed in the next stage of the pipeline, one or several cycles later, when the ROB head pointer points to the instruction in the circular queue. The physical register is then transferred to the architectural one, updating the state of the machine; the association between physical and architectural registers held by the RAT is freed; the physical register is returned to the pool in the FL; and, in the case of a write, the FSB content is transferred to memory, whether D-cache or RAM.
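As a toy illustration of action (2) above, here is a minimal Python sketch of register renaming with a Rename Table and a free list; the register counts and instruction format are hypothetical, and real hardware would also manage the ROB, the SB and recovery on mispredicted branches:

# Toy register renaming: a RAT mapping architectural to physical
# registers plus a free list (FL). Hypothetical sizes: 4 architectural
# registers (r0..r3) and 8 physical registers (p0..p7).
from collections import deque

NUM_ARCH = 4
NUM_PHYS = 8

free_list = deque(range(NUM_ARCH, NUM_PHYS))        # p4..p7 start free
rat = {f"r{i}": f"p{i}" for i in range(NUM_ARCH)}   # current mappings

def rename(dest, src1, src2):
    """Rename one instruction 'dest <- op src1, src2' at Dispatch."""
    # Sources read the *current* mapping, so a later writer cannot
    # clobber the value they need (WAR hazards disappear).
    ps1, ps2 = rat[src1], rat[src2]
    # The destination gets a fresh physical register, so two writers to
    # the same architectural register no longer conflict (WAW hazards
    # disappear). Only true RAW dependencies remain.
    pd = f"p{free_list.popleft()}"
    rat[dest] = pd
    return pd, ps1, ps2

# r1 <- r2 + r3, then r2 <- r1 + r3: the WAR hazard on r2 vanishes,
# while the true RAW dependency on r1 is preserved via p4.
print(rename("r1", "r2", "r3"))   # ('p4', 'p2', 'p3')
print(rename("r2", "r1", "r3"))   # ('p5', 'p4', 'p3')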

Most processors used in Teldat routers belong to Freescale’s PowerPC family; they are superscalar and, of course, incorporate all the techniques described above. When an instruction is dispatched to one of the Issue Queues (IQ) (the queue in our example was centralized, whereas in this architecture it is distributed, with a queue for each unit), a rename register (RR), equivalent to the physical register, is assigned to the result of the operation, and an entry is assigned in the Completion Queue (CQ), which performs the ROB functions. At the end of the issue stage, instructions and their operands, if available, are latched into the execution units’ reservation stations (RS). Execution completes by updating the RR and its status in the CQ. Completed instructions are removed from the CQ in program order, and the results are transferred from the RR to the architecture-defined registers a cycle later.

In this article we have tried to show, without going into justifications, the structures that appeared alongside the first RISC processors back in the eighties, and their motivations: the pipelining concept, the real and avoidable hazards, the register renaming technique, and so on. These ideas stem from the algorithm developed by Robert Tomasulo at IBM in 1967, first implemented in the floating-point unit of the IBM System/360 Model 91, and they laid the foundation for the out-of-order execution found in today’s processors. I hope that the concise nature of this article has, if nothing else, aroused the curiosity of the kind reader.

Manuel Sanchez: Manuel Sánchez González-Pola, Telecommunications Engineer, is part of Teldat’s R&D Department, where he works as a Project Manager in the Hardware team.

WLAN Site Surveys – Good planning is half the battle

Wireless LAN coverage is nowadays mandatory for more and more customers. Meanwhile, WLAN is not only used for office communication like e-mail or web traffic. By now, storekeepers use, for instance, bar code scanners with integrated WLAN to register their stock, and in some cases entire warehouses are fully automated by robots with integrated WLAN modules. In these cases, good coverage and roaming without session timeouts are a must.


Lars Michalke

Centralized Network Management Tools

The new generation of centralized management tools for wireless networks has arrived. We analyze Teldat’s Colibri platform.

Humanity spends a huge percentage of its time searching for models. We search for models of the atmosphere to anticipate weather conditions, and for models of human behavior to determine voting intentions or to predict market trends. We even search for models in the shape of share charts to set prices when buying or selling stocks. But can the world be modelled?


Rafael Ciria

ONT SFP GPON: End-customer solution

With the deployment of fiber to the home (FTTH), telecommunications operators put their main focus on the residential market. This is logical, since that is where the volume is and where a small increase or decrease in revenue per subscriber translates into outstanding results on their financial accounts.

As with almost any other access technology, once the number of households able to receive the service is growing steadily and the network is stable and running, many operators also choose to use this deployment to offer their services to the business market.

PON (passive optical network) technologies, which enable the deployment of FTTH networks in a cost-effective manner for the residential market, are evolving at high speed. This is true not only as far as protocols and standards are concerned, which keep enabling further increases in speed, but also at “chip” level, where manufacturers manage to implement those protocols and standards in ever smaller and more efficient integrated circuits.

GPON services

Until now, to offer GPON services (GPON currently being one of the most widely used PON variants), the service provider typically had to install three devices at the customer’s home: (1) the “optical modem”, known in the “official” terminology as the ONT; (2) the IP access router, which allows the connection of multiple devices and also typically includes a Wi-Fi access point; and optionally (3) a set-top box or video decoder, if television services (IPTV) are required.

In the majority of cases, the ONT and the access router are really two different devices, when they could be a single one, as is the case with ADSL routers, which internally include an ADSL modem. Apart from allowing one modem to be shared in MTU (multi-user) topologies, in which several routers “hang” from one “optical modem”, this lack of integration is a sign of the immaturity of GPON technology and services, compared for example with the aforementioned ADSL technology.

An internal reason for this separation between the ONT and the router lies in the organizational structure of telecom operators: the ONT is considered equipment belonging to the operator’s own network, albeit located at the customer’s home, while the router is considered customer-network equipment, and the two are therefore managed by different departments within the operator. However, the end customer does not care; the only thing he or she sees is that for an FTTH service the operator has installed two boxes where the ADSL service required only one.

ONT GPON devices in SFP format: A convincing solution

The technological advances mentioned above are allowing the emergence of ONT devices in SFP format (Small Form-factor Pluggable), a format normally used for other, simpler optical transceivers. The SFP format makes it possible to provide communication equipment with fiber interfaces in a modular way, so that different types of fiber can be inserted into or connected to a common “chassis”. Until now, the use of ONT devices in SFP format had been marginal, due to the inherent complexity of GPON and the impossibility of fitting all the necessary processing capacity into the small size of the SFP connector. SFP ONT devices had reached the market, but with a special “backpack-type” mechanical format that suffers from consumption and heating problems.

However, a new generation of chips, very small in size and with low power consumption, is making possible the emergence of GPON ONT devices in a standard SFP format. Now the question is whether the operators will be able to organize themselves internally to pass on to the customer the advantages of a much more convenient deployment. This will indicate whether GPON technology is “maturing”.

At Teldat we hope that this does occur, since we think that the SFP format for a GPON ONT is a very attractive and convenient solution for the end customer, as it means lower cost, lower consumption and fewer, better-integrated devices.

Eduardo Tejedor: Telecommunications Engineer, Teldat V.P. Strategic Marketing