Today, the solutions offered on business platforms are built around scalability and cost savings, achieved through platform virtualization and technologies designed to modularize their components.
For a number of years now we have been constantly bombarded by the idea of the cloud, where there is room for virtually everything, whether it is a private cloud or a public one, with servers installed at the client’s head office or in large data centers around the world. When moving in this world, you can never forget the importance of using technology that is as standardized as possible in order to avoid getting bogged down in tedious configurations when deploying your services.
That is why several years ago we saw the birth of technologies like REST, now widespread. REST (Representational State Transfer) is a style of software architecture that defines a number of key features for exchanging data with a web service. Broadly speaking, it takes advantage of a series of existing protocols and technologies in order to achieve this. These characteristics have made it one of the dominant architectures in the online market, used by thousands of large companies, including, among others, Amazon, eBay and Microsoft.
REST is an architectural style for information exchange, not a standard, and the term was coined by Roy Fielding in his doctoral dissertation.
This article does not seek to give a lesson on what constitutes a REST architecture. Nor does it attempt to explain the features required to develop a RESTful application, or what it means to be merely REST-like. Rather, it offers a brief outline of the implications of using this architecture and the benefits stemming from its implementation and exploitation in a production system.
What makes REST one of the currently favored architectures for the exchange of information between client and server?
The main feature of this architectural style is the exchange of messages between clients and servers without any state being held on the server. In contrast to other approaches in the web services market, with REST each message already includes all the information necessary, so the participants in the conversation do not need any prior knowledge of the state of the exchange, thus removing the need to exchange state messages and perform other intermediate operations.
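This stateless exchange can be sketched with a few lines of code. The endpoint paths and token below are purely illustrative, not tied to any real service; the point is simply that every request carries all the context the server needs:

```python
def build_request(method, path, token, body=None):
    """Build a self-contained HTTP-style request: every call carries all
    the context the server needs (method, resource URI, credentials)."""
    headers = {
        "Host": "api.example.com",                # hypothetical server
        "Authorization": f"Bearer {token}",       # auth travels with each message
        "Accept": "application/xml",
    }
    request = {"method": method, "path": path, "headers": headers}
    if body is not None:
        request["body"] = body
    return request

# Two independent requests: neither depends on a prior exchange,
# so any server replica can answer either of them.
r1 = build_request("GET", "/orders/42", token="abc123")
r2 = build_request("DELETE", "/orders/42", token="abc123")

print(r1["headers"]["Authorization"])  # Bearer abc123
print(r1["path"] == r2["path"])        # True
```

Because neither request refers to a session held on the server, the two could be answered by different machines behind a load balancer without any coordination between them.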
REST-based applications often use the HTTP protocol to transmit the information. This, and the fact that no state is held, ensures maximum standardization of data exchange, which is a huge advantage in the implementation and maintenance of such applications. Given my experience in the world of cloud computing in recent years, and the type of corporate clients and SMEs in today’s market, I will now attempt to list the features that I believe are most beneficial:
- Scalability and load balancing: Virtually everything today is service oriented, taking advantage of the benefits of what we call cloud computing. This is, in the case of a public cloud and very broadly speaking, the leasing of a logical space for data processing and data storage from a service provider. It is usually very difficult to get a clear idea of the demand for the service in advance. For this reason, automatic or manual mechanisms are used to add new physical or virtual instances that run in parallel to provide the service. By not storing state on the server, REST makes this completely transparent to clients: the operation works without the client needing to know which server it is connecting to at a given time.
- Use of standard protocols: Much of the advantage we gain from this architecture comes from its common association with HTTP. When deployed at the client's premises, especially a corporate client's, with countless departments (systems, communications, perimeter security…), the use of something as standard as HTTP (with its standard ports and headers) means that you hardly need to reconfigure anything in the network for the application to work. Firewalls, IDS, anti-spam, antivirus protection, web reputation: everything tends to be prepared to recognize this type of traffic and web traffic in general.
- Security: When we refer to HTTP, we are indirectly referring to HTTPS. By simply adding a certificate to our server, we upgrade to the de facto security standard for data exchange on the web.
- Agility and efficiency: Because it is based on a standard protocol, agility greatly increases in both operation and development. Any client, programmed in any language, can connect to the server, with no need for the special configurations and structures that are part and parcel of other architectures. Java, C/C++, C#, Python, Perl… virtually every programming-language barrier disappears when you use HTTP to transport something as simple as a hypermedia representation, such as an XML document. Furthermore, the different functionalities published on the server are referenced via the request URI, which is self-describing and so reduces traffic, while headers can carry additional information without the need for extra messages.
- Use of intermediate network-optimization technologies: HTTP web traffic can be processed by all kinds of intermediate mechanisms, such as proxy servers, including caches and others applying security policies. The ability to encapsulate information in HTTP means that this architecture can interact with such intermediaries with very little effort, adding an extra degree of optimization and security to that already present in the architecture.
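The optimization that caches and proxies provide rests on standard HTTP validators such as ETag and If-None-Match. The sketch below illustrates the idea with a hypothetical resource; the paths and data are invented for the example:

```python
import hashlib

# Hypothetical resource store, standing in for a real REST backend.
RESOURCES = {"/config": "<config><timeout>30</timeout></config>"}

def etag_for(body):
    """Derive a validator (ETag) from the representation's content."""
    return '"' + hashlib.md5(body.encode()).hexdigest() + '"'

def handle_get(path, if_none_match=None):
    """Return (status, headers, body). A cache that already holds the
    representation sends If-None-Match; if the resource is unchanged,
    a 304 with no body is returned, saving bandwidth."""
    body = RESOURCES[path]
    tag = etag_for(body)
    if if_none_match == tag:
        return 304, {"ETag": tag}, None   # cached copy is still valid
    return 200, {"ETag": tag}, body

status, headers, body = handle_get("/config")               # first fetch
status2, _, body2 = handle_get("/config", headers["ETag"])  # revalidation

print(status, status2)  # 200 304
print(body2)            # None
```

Because these headers are part of the HTTP standard itself, any intermediary proxy can perform this revalidation on the server's behalf, without knowing anything about the application.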
There is a great deal of documentation on the web about the benefits of this architecture over others, and here I have only named a few in passing without going into technical details. It is also important, though, to consider the disadvantages. The biggest one, in my view, is precisely what makes this architecture advantageous: maximizing the use of standard protocols and technologies. HTTP/HTTPS is the quintessential information-transfer protocol and, therefore, the most tempting target. It is at times vulnerable to attackers, especially when you are using a cloud-oriented service. Proper encryption mechanisms must always be used to avoid uncomfortable situations such as identity theft, credential theft, etc. The developer and, to a large extent, the client have ultimate responsibility for ensuring a secure message exchange, especially when it comes to services that operate on an external supplier's network.
REST: Teldat’s centralized management moves towards convergence
At Teldat, we have been actively working for a number of years on the development of new technologies to manage not just our physical devices, but all the functions these devices offer, from Wi-Fi management to the management of applications on corporate routers.
The main idea is to offer our customers this service in the cloud. To do this, we have decided to base our management communications and centralized monitoring on the REST architecture. We have also developed a whole array of security solutions on top of this architecture, transparent to both the user and the network administrator, to prevent potential attacks by third parties, especially when the HTTP protocol is used.
In this way, making the most of the many advantages that this architecture has to offer, we are able to create a real ecosystem where the most important features of the device, including WLAN controllers, access points, network optimization, data synchronization, etc., can coexist, and this is only the beginning. In summary, this architecture allows us to easily combine our centralized management and monitoring technologies: its implementation is practically transparent, it makes efficient use of the transmitted data, and it takes full advantage of the network-optimization technologies in use today.
The new generation of centralized management tools for wireless networks has arrived. We analyze Teldat's Colibri
Humanity spends a huge percentage of its time searching for models. We search for models of the atmosphere to anticipate weather conditions, we search for models of human behavior to determine voting intentions or to predict market trends. We even search for models in the shape of share charts to set prices when buying or selling stocks. But can the world be modelled?
All of a sudden, the office has become a very complicated place, full of electronic devices that need to be configured, maintained, powered, secured, updated, and wired (or maybe not, because they are part of a WLAN). Even though most manufacturers try to make their equipment as simple, compatible and plug-and-play as possible, for the one person who carries all the technical responsibility in an SME, making sense of this tangle can be very frustrating.
Is there any way of simplifying SME technical environments?
As far as functionality and technical requirements are concerned, data, voice, security, and management are the main aspects that drive a purchasing decision in the SME segment. From the customer's perspective, all desired features and functions must work properly and continuously, because otherwise they lose money and time, and SMEs rarely have either to spare. Once this is the case, further factors, such as usability, deep integration into the existing technical environment, and costs in particular, play a decisive part that many forget, but they may be just as important as the functional needs.
The main problem here is that all these technical functions are necessary and, until now, each one has required a different device. Professional daily work, including secure and fast access to the Internet and external cloud services, as well as to internal server applications within the company and its branches, is as essential as the flexible convergence of telephony and data services. We know that telephony and data services are already integrated into existing office applications and working processes in large companies. However, small companies increasingly want to take advantage of these benefits too, because complex workflows can be handled more easily and quickly. And their point of view is different, since they do not have a specialized department that can handle the whole integration and configuration process.
In this regard, costs and effort are always issues to deal with. All these points require specialists, who even have to adopt a coordinated approach. The firewall should not only guarantee secure access to all the video, voice, data and fax services needed on the Internet and in subsidiaries; it should also prevent unauthorized access to the company's own servers. Furthermore, it is also necessary to coordinate the existing IT infrastructure, such as network wiring and different wireless technologies, for instance wireless LAN and DECT, as well as different services, so that time-critical voice services and data transfer do not hinder each other. Hence, quality of service has to be set up appropriately in the LAN as well as in the WAN, in order to prevent dropouts during telephone calls or even loss of connection. The tuning of each single component often presents a major challenge, so applications and devices have to interact reliably. Complete transparency between the different systems and the ability to identify errors are fundamental requirements for SMEs to maintain the solution by themselves.
To make things a bit more complicated, Green IT is a further buzzword that influences decision-making. Green IT does not only mean making proposals to save energy and costs; worldwide regulations force manufacturers and consumers to pay more and more attention to these points. Thus, it is very important for SMEs to be able to run services on virtual servers, which is not always possible in the heterogeneous protocol environment of unified communications. In practice, virtual servers are already the most important precondition for operating several services locally on one server, or in the cloud, in order to save the electricity consumed by computers and especially by their cooling. The number of permanently active devices on the network should be kept as small as possible.
Hybird devices simplify SME IT needs
All this points to a strong trend in the powerful SME market: demand for highly integrated, compact systems that cover a variety of functions and present them to the user in a simple, consistent way. Open interfaces for further integration into the SME environment are important from the very first workplace.
So the answer to the question is yes! There is a way of simplifying SME IT needs!
Teldat has brought together the experience of several company areas and offers a powerful, compact solution with the devices of the hybird family, which provide a professional and solid basis to fill the gap in the market with professional one-device office solutions.
Please connect with us and ask about the Hybird solutions. You will learn how Teldat can help you solve this problem.
It is quite obvious to say that corporate communications have evolved. Not so long ago, a few decades back, “dumb” terminals were connected to a mainframe. A significant evolution followed with the introduction of X25, Frame Relay and ISDN; we could say this had the same level of importance for corporate communications as the discovery of fire had for prehistoric man. More recently, however, IP networks totally changed the communication landscape again, so much so that this could be compared to the invention of the wheel. Of course, high-speed connections such as DSL and fiber can be said to be “the Industrial Revolution” of network communication, making broadband accessible virtually anywhere. Finally, today's trend toward “Cloud Computing” is in some way returning communications to where they started, as the intelligence is once again being centralized, this time within “the Cloud”.
Cloud Computing and its implementation in companies
Cloud Computing is at an initial stage as far as corporate communications are concerned, but nobody doubts that it will grow significantly in a short period of time, as it has grown and is still growing within residential user communications with applications such as Google Apps, Microsoft Office 365 or Dropbox. Moreover, it should not surprise anybody that the residential market is more advanced than the corporate market in ICT and communications; this already occurred with ADSL, FTTH and 4G connectivity. The question is whether corporate clouds will be public, private or hybrid, and how quickly companies will migrate to the Cloud. However, it is clear that virtualization is here to stay, as its advantages are obvious. So what are the benefits of virtualization in companies?
- Reduced CAPEX and OPEX at the network periphery, because hardware and software resources are centralized in the Cloud.
- A clear improvement in the control, security and reliability of data and applications.
- Flexibility in resource allocation.
- License control.
Problems you may encounter with virtualization
The evolution of applications towards the Cloud is not necessarily problem free. Firstly, the connectivity requirements for a proper user experience are more demanding than those required when processing and storage are local, so special attention should be paid to issues such as redundancy, security and network optimization. Secondly, some applications that generate large volumes of traffic at the local level, such as Digital Signage or Content Management, do not scale well in the Cloud, and there is no longer a local server at the site to handle those tasks. The same occurs with non-IP devices such as printers, alarms, access control, web cameras, etc., which require a USB or perhaps even a serial port: these need a local interface and local processing in order to be adapted to the Cloud. Beyond all of this, there is one device that sits in the middle of everything mentioned so far, still needs to be maintained, and is of utmost importance: the router.
The router as a solution to various problems in Cloud Computing
The router at the branch office is what connects users and applications, so the user experience depends entirely on the router's efficiency and stability. However, what role is the router going to play in the new Cloud Computing scenarios? At first sight, a minimal amount of involvement could be valid, but could the router expand its role and evolve into a more efficient player within Cloud Computing scenarios? Certainly, this is the way forward. Due to its strategic position connecting users to applications, the router is able to provide the extra security and optimization these scenarios require, and because of its location within the branch office, it can act as the extension of Cloud applications to interact with local devices. Now, the remaining questions are: Does it have the processing power to run applications? Does it have the storage capacity required by certain applications? Does it have a management tool to safely conduct local processes? In the past, routers were not required to perform these tasks, so such features were unavailable or very limited; at most, some artificial solutions integrated additional hardware (a mini-PC) into the router chassis. Today, fully converged solutions based on multicore processors are possible, integrating two virtual devices, Router + Server, in one physical device, each with its own software and operating system, including HDD or SSD storage and USB interfaces for local devices. These new “Cloud Ready” routers support applications that can no longer run on local servers, such as security (antivirus, anti-spam, SIEM probes, content filtering), optimization (web cache, video proxy, Cloud-replicated NAS and virtual-desktop repository), local audit, or digital signage (DLNA based). Teldat specializes in “Cloud Ready” routers, supporting the above-mentioned applications, which are currently available in our portfolio.
What is more, there are no restrictions on possible applications: as the router runs a standard Linux operating system, clients or third parties can develop their own apps.