
TELDAT Blog


Operating System-level Virtualization

Oct 3, 2018

For more than a decade now, hardware virtualization, more commonly known as virtual machine technology, has underpinned the production systems we know and use today in our information society, and has made the evolution of Internet-connected software and services profitable and sustainable, even in the midst of an economic crisis.

Let’s not forget, however, that virtualization can be implemented at different levels, and technology never ceases in its drive to create ever more powerful and efficient systems. One example is Operating System-level Virtualization, a technology that has been causing quite a stir in recent years.

Virtualization is defined as the abstraction of computing resources, such as hardware, operating systems, storage systems or network resources, which software can divide into separate running environments to make the most of the underlying resource. This software layer arbitrates the available resources, dynamically distributing the real resources among all the defined virtual resources.

Container-based Virtualization, also called Operating System-level Virtualization or Containerization, is an approach to virtualization whereby the virtualization layer runs as an application within the operating system.
In this approach, containers are isolated execution environments that run on the same operating system. Each container has its own specific software, but shares the host operating system kernel and, usually, binaries and libraries too, using these shared components as read-only.
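A quick way to see this kernel sharing in practice is to compare the kernel release reported inside a container with the one reported on the host: they match, because only user space is isolated. The sketch below assumes a running local Docker daemon and the Docker SDK for Python (the docker package); the alpine image is just an example.

```python
# Minimal sketch, assuming a running Docker daemon and "pip install docker".
import platform

import docker

client = docker.from_env()

# Kernel release as seen by the host.
host_kernel = platform.release()

# Kernel release as seen inside a container: the same value, because the
# container shares the host kernel and only its user space is its own.
container_kernel = client.containers.run(
    "alpine:latest", ["uname", "-r"], remove=True
).decode().strip()

print(host_kernel, container_kernel)
```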


What is container-based virtualization?

A container platform basically consists of 4 elements (a minimal usage sketch follows this list):

1. Daemon: The platform’s main process.
2. Client: The binary constituting the interface and allowing the user to interact with the daemon.
3. Image: The template used to create the container for the application you want to run.
4. File system: Stores everything needed (libraries, dependencies, binaries, etc.) so the application can run by itself.
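To make these roles concrete, the sketch below uses the Docker SDK for Python (the docker package) as the client: it connects to the local daemon, pulls an image, and runs a container whose file system already holds everything the command needs. The alpine image and the echoed message are only placeholders for the example.

```python
# Minimal sketch, assuming a running Docker daemon and "pip install docker".
import docker

# The client: talks to the daemon (typically through /var/run/docker.sock).
client = docker.from_env()

# The image: a template downloaded from a registry.
client.images.pull("alpine", tag="latest")

# The container: an isolated process created from the image; its file system
# already contains the binaries and libraries the command depends on.
output = client.containers.run(
    "alpine:latest", ["echo", "hello from a container"], remove=True
)
print(output.decode().strip())
```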

Thanks to containerization, you get real portability and can predict how a software program will behave when moving from one server to another, since the container encapsulates only the software specific to the application that runs inside it and the libraries it depends on, while relying on the host’s operating system. This abstracts away the server where the software will run, whether on physical premises, in the public cloud, in the private cloud, etc.

With container-based virtualization, you don’t get the overhead associated with having every guest run a full operating system. The server’s hardware goes further for the same workload: its resources are used directly by the running software rather than being consumed by multiple guest operating systems and their associated processes. This approach can also improve performance, because a single operating system handles all access to the hardware.


Because of this, containers tend to be exceptionally lightweight, occupying only a few megabytes of disk space and taking seconds to boot, versus the gigabytes and minutes required for physical and virtual machines. Using containers also reduces the systems administration workload: since containers share an operating system, only that one operating system needs updates, package vulnerability management and other administrative tasks. Containers are generally faster, more agile and more portable than software installed on physical and virtual machines.

Docker is one of the best-known and most widely used projects in this type of virtualization. Far from being an operating system as such, this open-source platform uses the Linux kernel’s resource isolation features (namespaces and control groups) to produce independent containers. Docker also has a number of public repositories, similar to Linux package repositories, where software vendors and users publish their own container images so that anyone who needs them can quickly download them.
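As an illustration of those repositories, the sketch below (again assuming the Docker SDK for Python and a running daemon) pulls a published image from Docker Hub and lists the images available locally; nginx is only an example of a publicly published image.

```python
# Sketch: fetching a published image from a public registry (Docker Hub)
# and listing what is available locally. Assumes a running Docker daemon
# and the "docker" Python SDK.
import docker

client = docker.from_env()

# Download a ready-made image published by its maintainers.
client.images.pull("nginx", tag="latest")

# List the images now present on this host.
for image in client.images.list():
    print(image.tags)
```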

Teldat’s SD-WAN technology uses operating system-level virtualization in its core infrastructure, but goes even further. With an ecosystem made up of a large number of containerized services, we have been able to introduce a micro-service orchestrator. This system lets us abstract the hardware, operating systems and storage underlying the containerized software services, clusters, networks and load balancers (among others) that actually deliver the service to the user transparently, in fault-tolerant, high-availability environments.

