
Cloud Orchestrator: Between IT and TELCO


Most of the debates and discussions today around CSPs and Telco operators are about IT and Telco convergence.

In fact, IT has achieved a significant degree of infrastructure consolidation across different domains, starting with servers, moving on to storage, and finishing with the data center network.

Virtualization of the different infrastructure components pushed IT infrastructure vendors to reduce time to market and become more efficient by introducing cloud management platforms. Such platforms have proved their value for workload orchestration and automatic provisioning. Service catalogs and end-user portals help customers take control of the whole chain and request services as they want.

Infrastructure is becoming more service-oriented than ever. Lifecycle management helps IT administrators manage the whole provisioning chain, from service request to resource and capacity management.

There is no longer any need to wait for months before your infrastructure platform is ready for installing and running your application. Moreover, if you are on a development team requesting a virtual machine that includes debugging tools or web servers, a cloud automation service lets you specify exactly what you need, and the orchestration engine will execute the right workflow to satisfy the request. It is even faster if a virtual machine image with those tools is already defined in the service catalog: the cloud orchestrator will simply instantiate it for you.

For repetitive provisioning tasks, the enterprise cloud architect usually defines the needed templates to be included in the service catalog, and thus in the cloud marketplace.

When starting to design services for an enterprise private cloud, the designer should analyze the requirements of internal customers and implement those requirements in the cloud solution.
This can be done by defining the needed templates for the different types of OS and the right software versions when some tools are in common use. For example, the cloud designer can define one template that includes the Ubuntu OS together with an Apache web server and a MySQL server.
The cloud designer then has to implement one template for each type of OS.
Imagine you have many types of OS: Red Hat, CentOS, Debian, Ubuntu... how many templates would you have to implement?

Orchestration helps you automate exactly what you want. Imagine now that you have already prepared one template per OS. You then define a workflow that instantiates a VM from a specific template, and through the orchestrator you execute the right steps to install your software at the version you want.
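This workflow can be sketched in a few lines of Python. All template identifiers and commands below are hypothetical, purely to illustrate the idea of one base template per OS plus orchestrated install steps:

```python
# A minimal sketch of the workflow described above: instantiate a VM from a
# per-OS base template, then install the requested software at the requested
# version. Template names and commands are illustrative assumptions.

# Hypothetical service catalog: one base template per OS.
BASE_TEMPLATES = {
    "ubuntu": "tmpl-ubuntu-14.04",
    "debian": "tmpl-debian-8",
    "centos": "tmpl-centos-7",
    "redhat": "tmpl-rhel-7",
}

# Hypothetical install command per package-manager family.
INSTALL_CMDS = {
    "ubuntu": "apt-get install -y {pkg}={version}",
    "debian": "apt-get install -y {pkg}={version}",
    "centos": "yum install -y {pkg}-{version}",
    "redhat": "yum install -y {pkg}-{version}",
}

def provision_vm(os_name, packages):
    """Return the ordered workflow steps the orchestrator would execute."""
    template = BASE_TEMPLATES[os_name]
    steps = [f"instantiate VM from {template}"]
    for pkg, version in packages:
        cmd = INSTALL_CMDS[os_name].format(pkg=pkg, version=version)
        steps.append(f"run: {cmd}")
    return steps

for step in provision_vm("ubuntu", [("apache2", "2.4.7"), ("mysql-server", "5.5")]):
    print(step)
```

With four base templates and a generic install workflow, any OS/software/version combination becomes a parameter of the workflow rather than yet another template in the catalog.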

The orchestrator brings this elasticity: provisioning what you want, how you want, with the software versions you need.

This elasticity of provisioning is not enough. Imagine now that you offer your public cloud customers Data Center as a Service, as many cloud service providers do.
Imagine you give your customer (an enterprise IT administrator, say) the ability to design their own data center: VMs for the database, VMs for the application layer, VMs for web access, a DMZ, plus vRouters, vFWs, and vLBs.
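Such a customer-designed data center is essentially a structured request that the orchestrator must check for consistency before provisioning. The schema and check below are a purely illustrative sketch, not any vendor's actual model:

```python
# A sketch of a customer-designed virtual data center request and the kind of
# functional-logic check an orchestrator performs before provisioning.
# The schema is a hypothetical illustration.

def validate_design(design):
    """Check one piece of functional logic: every tier that a network
    service (vRouter, vFW, vLB) attaches to must actually exist."""
    tiers = set(design["tiers"])
    for svc in design["network_services"]:
        for tier in svc["attached_to"]:
            if tier not in tiers:
                return False, f"{svc['type']} references unknown tier {tier!r}"
    return True, "design is consistent"

design = {
    "tiers": ["web", "app", "db", "dmz"],
    "network_services": [
        {"type": "vRouter", "attached_to": ["web", "app", "db"]},
        {"type": "vFW", "attached_to": ["dmz", "web"]},
        {"type": "vLB", "attached_to": ["web"]},
    ],
}

ok, msg = validate_design(design)
print(ok, msg)
```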

This flexibility of creating virtual instances while preserving the functional logic of the provided data center is checked and enforced by the power of the orchestrator.
The more powerful your cloud orchestrator, the more efficiently you can design your services.

It is important to understand the actual capabilities of the orchestrator before judging any cloud solution.

Many cloud vendors offer solutions for IT needs, from Infrastructure as a Service all the way to Software as a Service.

Current cloud vendors like VMware, IBM, Cisco, HP, and others did not initially focus on the auto-scaling features needed for some application use cases.

The big challenge for cloud solutions was to reduce service provisioning time after all this work of infrastructure consolidation and virtualization. Moreover, application use cases that need resource scaling are still handled manually by cloud administrators: since monitoring is well implemented, administrators are alerted most of the time when some component needs to scale in or out.

The OpenStack framework and VMware solutions now include auto-scaling features. It is up to the application owner to design the application to request resource scaling through APIs.

The OpenStack Heat component allows application developers to control the auto-scaling of their applications according to the resources needed.
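Heat itself is driven by declarative templates (an auto-scaling group plus scaling policies tied to monitoring alarms), but the decision logic such an alarm-driven policy implements can be sketched in a few lines. The thresholds and bounds below are illustrative assumptions, not Heat's API:

```python
# A sketch of the scale-out / scale-in decision behind an alarm-driven
# auto-scaling policy. Thresholds, bounds, and names are illustrative.

def scaling_decision(cpu_util, current, min_size=1, max_size=5,
                     high=0.80, low=0.20):
    """Return the desired instance count for a simple CPU-based policy."""
    if cpu_util > high and current < max_size:
        return current + 1   # high-CPU alarm fired: scale out
    if cpu_util < low and current > min_size:
        return current - 1   # low-CPU alarm fired: scale in
    return current           # within bounds: no change

print(scaling_decision(0.92, current=2))  # scale out -> 3
print(scaling_decision(0.10, current=3))  # scale in  -> 2
print(scaling_decision(0.50, current=3))  # steady    -> 3
```

In a real deployment, the alarm evaluation and the group resize are handled by the orchestration and telemetry services; the application owner only declares the policy.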




Let's go back and analyze the beginnings of IaaS cloud solutions. All infrastructure vendors published their first cloud solutions to satisfy the need for Infrastructure as a Service in the enterprise private cloud domain. The whole debate then was about provisioning virtual machines from templates.
After that, many vendors added capabilities to their solutions and defined many templates to extend their service catalogs.

The work did not stop there. Many vendors pushed their solutions toward public cloud deployments. IaaS has become very mature, we now sell Data Center as a Service, and vendors have brought many features to their cloud components around orchestration, management layers, invoicing, product catalogs, order management, and more.

Now we talk more about the hybrid cloud and how to implement it. Is it the right model to balance investment between private and public cloud?

IT's success is now reaching the business side. IT now generates money for companies through cloud computing, in addition to reducing cost by controlling OPEX and cutting CAPEX for internal IT investment more and more.

The Telco domain has learned many lessons from the IT cloud approach. Many operators are now asking telecom vendors to start virtualizing their current applications.
Decoupling hardware from software is a good approach for operators to gain flexibility and reduce vendor lock-in. Accelerating innovation and adopting a cloud approach for the carrier network will help telecom companies cope more comfortably with the huge number of requests coming from OTT players and demanding customers.

Traditional telecom vendors have already started developing software instances of their applications to run on top of a converged infrastructure based on virtualization and the Intel x86 architecture. New vendors are now coming to the market with very competitive and innovative solutions.


Virtualized applications are still being improved to match telecom requirements.

ETSI is doing a very good job on standardization and on moving the telecom vendor market ahead with virtualization and cloud platforms.

The ETSI reference architecture, and the list of PoCs led by ETSI and carried out with different vendors, clearly show that we are moving to a multi-layered architecture where network functions are virtualized and decoupled from the hardware layer.

A component such as the VNF Manager (VNFM) plays an important role in VNF instance lifecycle management and resource scaling, delivering the flexibility of resource allocation that comes with the cloud.

According to the ETSI documents, the NFV orchestrator is responsible for on-boarding Network Services (NS), VNF Forwarding Graphs, and VNF packages.

A Network Service (NS) is described by a Network Function Forwarding Graph of interconnected Network Functions (NFs) and endpoints. It can be as simple as a data service, Internet access, or a virtual private network.
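The idea of an NS as a forwarding graph can be made concrete with a small sketch. The representation below is a simplified illustration, not the ETSI information model; the service name and function chain are hypothetical:

```python
# A Network Service as a forwarding graph: an ordered chain of network
# functions between two endpoints. Simplified illustration only.

def build_ns(name, endpoints, chain):
    """Describe an NS as endpoint -> NF -> NF -> ... -> endpoint."""
    return {
        "name": name,
        "endpoints": endpoints,
        "forwarding_graph": [endpoints[0], *chain, endpoints[1]],
    }

# Hypothetical example: Internet access traversing a firewall, NAT, and router.
internet_access = build_ns(
    "internet-access",
    endpoints=("subscriber", "internet"),
    chain=["vFW", "vNAT", "vRouter"],
)

print(" -> ".join(internet_access["forwarding_graph"]))
```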

Each telecom vendor tries to implement its own VNFM for its VNF components.
The VNFM talks to the VIM to execute scale-up or scale-down operations as requested, according to monitoring alarms.

Many work streams are still maturing. Telecom vendors have not yet found the balance to develop a unified set of APIs that would allow their VNFMs to work with any type of VIM.

Worse, in some cases these VIM solutions are not ready to expose the right APIs for the VNF manager to handle scaling some virtual machines out or in.
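One way to read this "unified API" problem: if VNFMs targeted a common, minimal VIM interface, each VIM would only need a thin adapter behind it. The interface below is a hypothetical sketch, not an existing standard API; the in-memory VIM and the VNF identifier are stand-ins for illustration:

```python
# A hypothetical minimal VIM interface for VNFM-driven scaling, plus a fake
# in-memory VIM and the alarm-handling logic a VNFM might run against it.
from abc import ABC, abstractmethod

class VimDriver(ABC):
    """Minimal VIM operations a VNFM needs for scaling (hypothetical)."""

    @abstractmethod
    def scale_out(self, vnf_id, count):
        ...

    @abstractmethod
    def scale_in(self, vnf_id, count):
        ...

class FakeVim(VimDriver):
    """In-memory stand-in for a real VIM, for illustration only."""

    def __init__(self):
        self.instances = {}

    def scale_out(self, vnf_id, count):
        self.instances[vnf_id] = self.instances.get(vnf_id, 0) + count
        return self.instances[vnf_id]

    def scale_in(self, vnf_id, count):
        self.instances[vnf_id] = max(0, self.instances.get(vnf_id, 0) - count)
        return self.instances[vnf_id]

def on_alarm(vim, vnf_id, alarm):
    """VNFM reaction to a monitoring alarm; returns the new instance count."""
    if alarm == "overload":
        return vim.scale_out(vnf_id, 1)
    if alarm == "underload":
        return vim.scale_in(vnf_id, 1)

vim = FakeVim()
vim.scale_out("vIMS-1", 2)                 # initial deployment: 2 instances
print(on_alarm(vim, "vIMS-1", "overload"))  # 3
```

With such an abstraction, a vendor VNFM would be written once against `VimDriver`, and supporting a new VIM would mean writing one adapter class rather than reworking the VNFM.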

What is the role of the NFV orchestrator, and how much control does it have over the VIM?
What is its relationship with the VNFM? Each vendor is developing its own VNFM.
Could we speak of one VNFM that deals with many VNF instances from different vendors for different applications?

How can we extend the role of the NFV orchestrator to deal with physical and external components?
How should it integrate with SDN controllers?

How should telecom vendors move forward on their virtualization strategy? Should they focus for the moment on improving VIM capabilities to match telco requirements?

How will the market be shaped in the coming years?

We are seeing some vendors presenting global MANO products. Are such solutions able to deal with different VNF instances and different VNF managers?



