


How the Parkar NexGen Platform Is Changing the Way We Approach AIOps

AIOps, or Artificial Intelligence for IT Operations, a term initially coined by Gartner, employs advanced analytics in the form of machine learning (ML) and Artificial Intelligence (AI) to automate operations so that enterprises can move toward their goals with agility and efficacy.

What it eventually does is bring about predictive outcomes that can lead to faster root-cause analysis (RCA) and also speed up the mean time to repair (MTTR). The intelligent, actionable insights that AIOps offers help enterprises attain a high level of automation and collaboration, thereby saving significant time and resources.

AIOps bringing successful digital transformation

At Parkar, we understand the role of AIOps in bringing about a successful digital transformation where workloads and processes are handled with precision and with less dependency on humans. There is so much riding on AIOps today that we’ve curated a platform that gives enterprises greater agility and room for innovation to alleviate workload and create better user experiences.

There have been significant changes in distributed architectures, multi-cloud, containers, and microservices that have in turn increased the complexities of the IT infrastructure. The number of services and applications that rely on the infrastructure is large. Even the slightest changes to these services or applications can have a domino effect within the infrastructure to an extent that’s beyond the control of humans.

What we need to address this situation is a robust AIOps strategy to create real-time systems where context-rich data travels through the full application stack, thus curtailing noise and improving time to resolution through automation.

Parkar’s take on AIOps

The reams of data humans have to go through on a daily basis can be frustrating. We need good insights that can translate into data-driven decisions and help curtail costs by understanding hardware capabilities and the factors that adversely impact cost savings.

Through a highly efficient NexGen platform, we also hope to eliminate the skills gap by ensuring better and easier access to data that helps experts focus on key decisions and flattens the learning curve for new members.

We want businesses to effectively overcome customer frustration by addressing application slowdowns, particularly on busy, high-transaction days. The rationale is to pull them out of firefighting mode and give them a competitive edge in a thriving but aggressive IT environment.

The NexGen Platform

To address all the issues discussed above and to offer a world of benefits to customers, we created a platform that changed the business dynamics for ambitious enterprises.

It constantly captures important information from various sources, including operators’ experience, and stores it for future reuse. It relies heavily on root cause analysis and algorithms to help organizations resolve incidents and run smarter IT operations.

Parkar also draws on its proven track record of helping enterprises deploy smarter tools and solutions to monitor, integrate, perform, and excel. Its platform delivers NexGen AIOps solutions with end-to-end capabilities in AIOps transformation through purpose-built machine learning algorithms. Unified alert management, root cause analysis, anomaly detection, and predictive capabilities are just a few of the many features the platform offers to help organizations map their digital transformation journeys.

The platform now helps organizations across sectors, including retail and healthcare, work faster and smarter.

Case in point

A leading healthcare organization from the US, known for its excellent services and quality care, faced the challenge of data management. It relies on a large-scale enterprise network to provide better services and create pleasant user experiences. The expanding network brought along the challenges of monitoring and administering networks, managing traffic issues, and fixing application malfunctions.

The need to embrace emerging technologies was felt more than ever before, since it was becoming increasingly difficult to monitor network segments while keeping tabs on traffic and application performance. A robust solution that could capture network operations data across the many application layers, with relevant insights, was needed immediately. Our platform fit the bill.

Measures were immediately implemented to address issues and facilitate smoother data management. These were as follows:

  • Service and device attributes such as service name, service components, and topology were assigned to establish a correlation across service and infrastructure layers and enrich data.
  • Priorities were set based on business and service impact so as to help operators address issues based on the extent and gravity of the impact caused.
  • Automated service assurance through a model-driven approach was ensured.
  • Automatic noise reduction was achieved.
  • Patented algorithmic and machine learning techniques were leveraged to automatically cluster related alerts into algorithmic correlations (see the sketch after this list). These helped identify unique situations without the laborious development and time-intensive maintenance of rules, filters, or inventory-based service maps.
  • The smart algorithms ensured efficient data processing, since they could derive cognitive insights from raw data sets, mitigating the risk of operator fatigue and maintenance issues and cutting metrics like Mean Time to Detect and Mean Time to Repair almost in half.
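For readers who want a feel for what algorithmic alert correlation looks like in practice, below is a minimal, hypothetical sketch using DBSCAN from scikit-learn. It is not Parkar’s patented algorithm, only an illustration of grouping related alerts into “situations”; the alert fields are invented.

```python
# Hypothetical sketch of alert correlation: cluster alerts that are close
# in feature space so operators see one situation instead of many tickets.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

alerts = [
    {"host": 1, "service": 3, "timestamp": 1000.0},
    {"host": 1, "service": 4, "timestamp": 1002.0},
    {"host": 7, "service": 9, "timestamp": 5000.0},
]

# Encode each alert as a numeric feature vector; a real system would use
# richer features (topology distance, alert-text embeddings, and so on).
X = np.array([[a["host"], a["service"], a["timestamp"]] for a in alerts])
X = StandardScaler().fit_transform(X)

# Alerts in the same cluster form one "situation"; label -1 marks noise.
labels = DBSCAN(eps=0.8, min_samples=2).fit_predict(X)
for alert, label in zip(alerts, labels):
    print(label, alert)
```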

What we achieved:

The figure below is a statistical representation of what we achieved for our customers within a short period of time. The numbers reflect the power and efficacy of our platform.

What we need is context-infused AIOps

We need to take important steps towards creating actionable, IT operational data with an AIOps strategy that functions at machine speed. Merely collecting data is not enough and what is actually needed is contextualizing it so as to enrich its quality and arrive at automated but dependable outcomes.

Fig: 5 Steps towards achieving actionable AIOps insights

At Parkar, we address these needs as follows:

Data collection

Data is collected from various sources, including agents, operators, devices, applications, and services, based on the type of asset that needs to be assessed and monitored. The IT environment needs to be observed constantly for this purpose.

Data cleansing and preparation

This is achieved in stages and involves various steps, including data deduplication, time synchronization, and consolidation into a single data lake, each playing a significant role in cleaning and preparing data. No AIOps strategy will work unless the data is clean, precise, and perfectly aligned with your objectives.

Data enrichment

Data cannot be enriched without context, which adds insight and perspective to raw data. Metadata applied to a device, service metric, application, or piece of infrastructure makes the data more useful and insightful.

Data analysis

Operations teams are inundated with data. This puts a huge burden on them and also escalates analysis costs that result from staffing and data storage. AIOps analyzes, segregates and consolidates data by means of machine learning.
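To make the analysis step concrete, here is a minimal, illustrative sketch using scikit-learn’s IsolationForest to separate anomalous metric samples from the stream. The metrics and data are simulated, not a prescribed configuration.

```python
# Hedged sketch of the "data analysis" step: unsupervised anomaly
# detection on operational metrics. Metric names are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Simulated telemetry: columns are hypothetical cpu% and latency_ms.
normal = rng.normal(loc=[40.0, 120.0], scale=[5.0, 15.0], size=(500, 2))
spikes = np.array([[95.0, 900.0], [88.0, 750.0]])   # injected incidents
samples = np.vstack([normal, spikes])

model = IsolationForest(contamination=0.01, random_state=0).fit(samples)
flags = model.predict(samples)                       # -1 marks an anomaly
print(f"{int((flags == -1).sum())} samples flagged for review")
```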

Action

Context-rich data is relevant and accurate, facilitating better and faster decision-making. It also helps organizations take automated actions to initiate changes, send notifications, or make recommendations.

 

There is a seismic shift towards next-generation solutions, including containerization, microservices, and cloud, and it’s hard to miss. It urges IT operations teams to revisit and recalibrate their monitoring and management tools and embrace an AIOps-enabled approach. It is the only way to close the gap between IT and business.

Says Padraig Byrne, Senior Director Analyst at Gartner, “IT operations are challenged by the rapid growth in data volumes generated by IT infrastructure and applications that must be captured, analyzed and acted on. Coupled with the reality that IT operations teams often work in disconnected silos, this makes it challenging to ensure that the most urgent incident at any given time is being addressed.”

Clearly, AIOps platforms are the answer to the perennial need for analyzing the deluge of data with respect to volume, variety, and velocity. It’s time enterprises embraced them with open arms.

In closing

Parkar has the experience and capabilities to help enterprises achieve their long-term business goals. Our NexGen platform stands as testimony to our constant endeavor to offer better, more reliable solutions to enterprises’ IT concerns. We strongly believe that the impact of AIOps will be transformative. The question is: are you ready to adopt it?

Let’s talk, assess your environment, and discover a whole new world of possibilities.

The most important elements of AIOps


Even as it grows more efficient and sophisticated, the IT environment is becoming extremely complex. The recent shift to microservices and containers has further added to the already large number of components that go into a single application, which means the challenge of orchestrating all of them is equally big.

The ability of IT Ops teams to handle such complexities is fairly limited and hiring more resources to configure, deploy and manage them is not very cost-effective.

This is where Artificial Intelligence for IT Operations (AIOps) comes into play. Nothing comes close to AIOps when it comes to leveraging Big Data, analytics, and machine learning to offer a high level of customization along with the invaluable insights needed to cater to modern infrastructure.

Here’s what you should know if you are contemplating moving towards AIOps.

Understanding AIOps

As automated tools entered the scene, IT Ops teams realized that despite improved efficiency these tools were incapable of making automated decisions based on data, and therefore required considerable manual effort even then.

AIOps presented a more refined way of integrating data analytics into IT Ops, supporting more scalable workflows aligned with organizational goals.

AIOps Platform Technology Components

Use cases for AIOps

Anomaly detection – This is definitely the most basic one since you can trigger a remedial action only after detecting anomalies within data.

Causal analysis – Root cause analysis is required for issues to be resolved quickly and effectively. AIOps plays a pivotal role here.

Prediction – Automated predictions about the future can be made using AIOps-powered tools. For instance, you can find out how and when user traffic is likely to change and then react to address it.

Alarm management – Intelligent, closed-loop remediation kicks in without requiring human intervention.
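As a concrete illustration of the first and last use cases, here is a hedged sketch: a rolling z-score flags an anomalous metric and triggers a hypothetical remediation hook, closing the loop without human intervention. Real AIOps platforms use far richer models; this only shows the shape of the workflow.

```python
# Toy closed-loop remediation: detect an outlier metric, then act.
# The remediate() hook is a placeholder, not a real integration.
from collections import deque
import statistics

window = deque(maxlen=60)          # last 60 metric samples

def remediate(value: float) -> None:
    # In practice this might restart a pod or scale a service.
    print(f"remediation triggered for reading {value}")

def ingest(value: float, threshold: float = 3.0) -> None:
    if len(window) >= 10:          # need enough history for a baseline
        mean = statistics.fmean(window)
        stdev = statistics.pstdev(window) or 1e-9
        if abs(value - mean) / stdev > threshold:
            remediate(value)
    window.append(value)

for reading in [100, 101, 99, 100, 102, 98, 100, 101, 99, 100, 100, 480]:
    ingest(reading)                # the 480 spike trips the threshold
```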

Drawing parallels between AIOps and DevOps

DevOps brought about a cultural shift in organizations, and AIOps is similar in effect and impact. AIOps helps enterprises draw holistic insights from connected and disparate data, automating decisions to make them better and more agile.

It is important for enterprises to break free from traditional silos as data should be generated and used keeping the ‘observability’ aspect in mind for the entire company, not just one department.

Thanks to AIOps, typical IT Ops admins are now transitioning into the role of Site Reliability Engineers, which helps them utilize information more efficiently and tackle issues more effectively.

While both AIOps and DevOps share the same goal of making organizations better and more productive, AIOps can make DevOps practices more effective by reducing the noise that gets in the way of productivity. For example, AIOps streamlines the alerts and notifications from various platforms so that it becomes easier for DevOps engineers to address them. It would be safe to assume that AIOps complements the goals of DevOps engineers and enterprises effortlessly.

AIOps and time management

No matter what the team size, organizations will always struggle with the most common issue of having too much to do in too little time.

Luckily, there’s a lot AIOps can do for you in this regard. From helping you create a machine learning model to processing data to make it flexible enough to accommodate new information, AIOps can be just the value add-on you need.

Those who have been using AIOps know the role a well-trained machine learning algorithm plays in attaining and maintaining high-quality data. Also, ‘real-time’ is the buzzword here, since most use cases require real-time data processing.

So for instance, if the use case in question is detecting anomalies, then it is important to get information quickly so that you can prevent a security breach. The same applies for all use cases where the rationale is to get to a problem and resolve it in the fastest possible manner.

High-quality data therefore remains extremely important, and AIOps makes it attainable despite the complexities. Enterprises understand the importance of data analysis in principle but find it difficult to trust and rely on it. As indicated by KPMG’s survey, 67% of CEOs admitted to having ignored insights from computer-driven models or data analysis, largely because these were not in line with their own thinking or experience.

The growing popularity of AIOps

Having data is one thing, and being able to use it effectively is another. While machine learning holds a lot of promise, organizations need to employ resilient applications and stronger automation platforms.

MarketsandMarkets predicts a 34% compound annual growth rate for AIOps platforms, a sneak peek into their rising demand. The fact that AIOps helps businesses be more flexible and responsive without putting a burden on resources is fast making it a must-have in this highly digitized era.

Getting started with AIOps

As enterprises wake up to the incredible benefits of AIOps, the question that needs answering is how to embrace it in a way that aligns with your business needs. Here are a few things that should help:

Understand the basics of artificial intelligence and machine learning so that you are better equipped to adopt them.

Identify the most time-consuming tasks your people undertake and how AIOps intervention would help alleviate this load. In particular, look for repetitive tasks that could be effectively dealt with through automation.

Avoid taking on too many things at once. Start small and begin with high-priority tasks. Once you get good feedback, assess how this technology can be best leveraged to address other areas and tasks.

Employ AIOps for all kinds of data. No doubt this may take longer than you thought, but you need to look at the bigger picture. Also, look at the metrics you want to evaluate and the parameters by which you want to define success. The rationale is to ensure that your efforts are aligned perfectly with your organizational objectives.

From the adoption and maturity perspective

IT leaders are keen to automate arduous incident tasks while bringing down the cost of alerts, which can be significant. Service disruptions and downtime costs have been major concerns for most organizations.

IT organizations can vary in their objectives when it comes to AIOps adoption but what they are looking for in general is overall visibility into their systems to get a better handle on operational efficiency and the production environment.

Let us look at a five-stage maturity model that can help organizations gauge where they stand in terms of their monitoring and automation journey.

Source: ScienceLogic

AIOps is for those who have long-term goals and perceive it as the change needed to drive modern applications built on microservices. It will ensure a fluid flow of information and, rather than merely improving processes, may even change them to match the current perspectives and architectures of organizations.

Enterprises need to rethink how they perceive the full stack, rather than seeing it only from an application perspective or the perspective of a cloud or architecture team. This is particularly important for applications built using microservices. Enterprises need to understand what the infrastructure does at the app layer by retooling operations capabilities, thereby providing necessary insights to app developers through the right flow of data.

All you need is a willingness to look at it without prejudice and think of the myriad ways it can help augment your business goals.

In closing

Although AIOps is seeing early adoption, many enterprises are still unsure about the hype surrounding it and wonder whether it’s wise to go the AIOps way. AIOps, however, is perhaps the only way to unlock your full potential. For more on AIOps and how to leverage it for your organization, let’s talk and assess your IT operations to truly automate and transform your business.

Right Strategies for Microservices Deployment


Microservices architecture has become very popular in the last few years as it provides high-level software scalability. Although organizations embrace this architecture pattern, many still struggle with creating a strategy that can overcome major challenges such as decomposing a monolith into a microservices-based application.

At Parkar Consulting & Labs, we help our clients deploy microservices applications to reduce operational costs and keep services highly available. One such success story involves one of the largest telecom companies in the US, where we successfully delivered a RESTful, microservices-based deployment.

In this blog, we will share some of the most popular microservices deployment strategies and look at how organizations can leverage them to attain higher agility, efficiency, flexibility, and scalability.

Microservices Deployment challenges

Deploying a monolithic application means running several identical copies of a single, usually large application. This is mostly done by provisioning N servers, physical or virtual, and running M instances of the application on each one. While this looks pretty straightforward, more often than not it isn’t. It is, however, far easier than deploying microservices applications.

If you are planning to deploy a microservices application, then you must be familiar with the variety of frameworks and languages these services are written in. This is also one of the biggest challenges, since each of these services has its own deployment, resource, scaling, and monitoring requirements. On top of that, deployment has to be quick, reliable, and cost-effective!

The good news is that several microservices deployment patterns can be easily scaled to handle a huge volume of requests from various integrated components. Read this blog to find out which one suits your organization best and make the deployment process seamless.

Microservices Deployment Strategies

1. Multiple Service Instances per Host (Physical or VM)

Perhaps the most traditional approach to deploying an application is the Multiple Service Instances per Host pattern. In this pattern, software developers provision one or more physical or virtual hosts and run several service instances on each one. The pattern has a few variants, including running each service instance as its own process or running several service instances in the same process.

 

Benefits:

Relatively efficient resource usage since multiple service instances use the same server and its operating system.

Deployment of a service instance is also relatively fast since you just have to copy the service to a host and run it.

For instance, if the service is written in Java then you just have to copy the JAR or WAR file or the source code if it is written in Node.js or Ruby.

Starting a service in this pattern is also quick, since there is no overhead. If the service is its own process, you just start it; otherwise, you can dynamically deploy it into the container, or restart it if the service is one of many instances running in the same container process or process group.
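A rough sketch of this pattern in Python follows: several hypothetical service commands launched side by side on one host with the standard library’s subprocess module. Note that nothing here limits what each process consumes, which is exactly the drawback listed under Challenges.

```python
# Multiple Service Instances per Host, sketched with subprocess.
# The service commands below are hypothetical placeholders.
import subprocess

services = [
    ["java", "-jar", "orders.jar"],       # hypothetical JAR-packaged service
    ["node", "catalog/server.js"],        # hypothetical Node.js service
    ["python", "-m", "payments.app"],     # hypothetical Python service
]

# All instances share the host's OS, memory, and CPU with no isolation.
procs = [subprocess.Popen(cmd) for cmd in services]
for p in procs:
    p.wait()
```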

Challenges:

  • Little or no control over service instances unless each instance is a separate process. There is no way to limit the resources each instance utilizes, which can consume a significant share of the host’s memory.
  • Lack of isolation if several service instances run in the same process. This often results in one misbehaving service interrupting other services in the same process.
  • Higher risk of errors during deployment, since the operations team deploying the services needs to know the minutest details of them. Information exchange between development and operations is therefore a must for removing complexity.

2. Service Instance Per Host (Physical or VM)

The Service Instance per Host pattern is another way to deploy microservices. It allows you to run each instance separately on its own host, and has two specializations: Service Instance per Virtual Machine and Service Instance per Container.

The Service Instance per Virtual Machine pattern allows you to package each service as a virtual machine (VM) image, like an Amazon EC2 AMI. Each instance is a VM run from that image. One popular user of this pattern is Netflix, for its video streaming service. To build your own VMs, you can configure a continuous integration server like Jenkins, or use Packer (packer.io).
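As an illustration, here is a hedged sketch of launching a service instance from a VM image using boto3, the AWS SDK for Python. The AMI ID, instance type, and tag values are placeholders, not real resources.

```python
# Sketch of Service Instance per VM: each service boots from its own AMI.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical service VM image
    InstanceType="t3.small",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "service", "Value": "catalog"}],
    }],
)
print(response["Instances"][0]["InstanceId"])
```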

Benefits 

One of the biggest benefits of the Service Instance per Virtual Machine pattern is that each service uses a bounded amount of memory and cannot steal resources from other services, since it runs in isolation.

It allows you to leverage mature cloud infrastructure such as AWS to take advantage of load balancing and auto-scaling.

It encapsulates your service’s implementation technology, since the service becomes a black box once packaged as a VM. This makes deployment simpler and more reliable.

Challenges

  • Since VMs usually come in fixed sizes in a typical public IaaS, it is possible that a VM is not completely utilized. Less efficient resource utilization ultimately leads to a higher cost of deployment, since IaaS providers generally charge for VMs whether they are idle or busy.
  • Deployment of the latest version is generally slow, because VM images are slow to create and instantiate due to their size. This drawback can often be overcome by using lightweight VMs.
  • Unless you use tools to build and manage the VMs, the Service Instance per Virtual Machine pattern can be time-consuming for you and your team. This is usually a tedious process, but the good news is that it can be addressed with solutions such as Boxfuse.

3. Service Instance per Container

In this pattern, each service instance operates in its respective container, which is a virtualization mechanism at the operating system level. Some of the popular container technologies are Docker and Solaris Zones.

To use this pattern, you need to package your service as a filesystem image comprising the applications and libraries needed to execute the service, popularly known as a container image. Once the service is packaged as a container image, you launch one or more containers, and you can run several containers on a physical or virtual host. To manage multiple containers, many developers like to use cluster managers such as Kubernetes or Marathon.
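Here is a minimal sketch of this pattern using the Docker SDK for Python; the image name, port, and memory cap are illustrative, and in production a cluster manager would do the scheduling instead of a single host script.

```python
# Service Instance per Container, sketched with the Docker SDK (docker-py).
import docker

client = docker.from_env()

# Launch one service instance in its own container on this host.
container = client.containers.run(
    "catalog-service:1.0",      # hypothetical container image
    detach=True,
    ports={"8080/tcp": 8080},
    mem_limit="256m",           # per-container resource cap
)
print(container.short_id, container.status)
```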

Benefits: 

Like Service Instance per Virtual Machine, this pattern also works in isolation. It allows you to track how many resources are being used by each container. One of the biggest advantages over VMs is that containers are lightweight and very fast to build. Since there is no OS boot mechanism, containers can start quickly.

Challenges:

Despite rapidly maturing infrastructure, the Service Instance per Container pattern still lags behind VM infrastructure and is not as secure as VMs, since containers share the host OS kernel.

As with VMs, you are responsible for all the heavy lifting of administering the container images. You also have to administer the container infrastructure, and probably the VM infrastructure, if you do not have a hosted container solution such as Amazon EC2 Container Service (ECS).

Also, since most of the containers are deployed on an infrastructure that is priced per VM, it results in extra deployment cost and over-provisioning of VMs to cater to an unexpected spike in the load.

4. Serverless Deployment

Serverless deployment is another strategy for microservices deployment, and it supports Java, Node.js, and Python services. AWS Lambda is a popular technology used by developers around the world. In this pattern, you package the service as a ZIP file and upload it to AWS Lambda as a stateless function. You can also provide metadata, including the name of the function to be invoked when handling a request. Lambda automatically runs enough microservice instances to handle requests, and you are billed for each request based on the time taken and the memory consumed.
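For illustration, here is a minimal sketch of what such a function might look like in Python; the event fields and business logic are hypothetical, and the file would be zipped and uploaded as described above.

```python
# Sketch of a service packaged for serverless deployment on AWS Lambda.
# Each invocation may run on a fresh instance, so no state is kept
# between requests.
import json

def lambda_handler(event, context):
    order_id = event.get("order_id", "unknown")   # hypothetical field
    # ... business logic for one request goes here ...
    return {
        "statusCode": 200,
        "body": json.dumps({"order_id": order_id, "status": "processed"}),
    }
```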

Benefits

The biggest advantage of serverless deployment is the pricing, since you are charged only for the work actually performed.

It frees you from managing any aspect of the IT infrastructure, such as VMs and containers, giving you more time to focus on developing the application.

Challenges

The biggest challenge of serverless deployment is that it cannot be used for long-running services. All requests have to be completed within 300 seconds.

Also, your services have to be stateless since the Lambda function might run a different instance for each request.

Services need to be in one of the supported languages and must launch quickly, or else they might time out and be terminated.

Closing thoughts 

Deploying a microservices application can be quite overwhelming without the right strategy. Since these services are written in a variety of frameworks and languages, each has its own deployment, scaling, and administration requirements. Therefore, knowing which pattern suits your organization best is absolutely necessary. We at Parkar Consulting & Labs have worked with scores of trusted customers to migrate their legacy monolithic applications to serverless architecture using Platform as a Service. The Parkar platform orchestrates the deployment and end-to-end management of the microservices.

Greener Computing with Serverless Architecture


Serverless computing is not new anymore and offers a world of benefits to developers and users alike. It’s an architecture and execution model in which developers get the freedom and flexibility to build an application without having to worry about server infrastructure. While developers focus on writing application business logic, operations engineers look into the other nitty-gritty such as upgrades, scaling, and server provisioning. What began as Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS) is rapidly evolving into Function-as-a-Service (FaaS), where users can explore and experiment without the limitations of traditional servers. They get to focus on hard-core product development and pay only for the actual compute time and resources used, instead of the total uptime.

How it works

Serverless computing is typically event-driven: a particular event triggers the execution of a function. The function runs for about five minutes at most before being discarded, and everything it needs to run is provided by the public cloud infrastructure. Although this sounds simple, the fact that a function can, in turn, trigger more functions makes the entire process more complex. While from a traditional developer’s perspective this could be perceived as zero control over server-related aspects, modern developers are more than happy with serverless computing, or FaaS. Today, all large cloud vendors have serverless offerings that include BaaS (Backend as a Service) and FaaS products. As such, the organization or individual that owns the system does not have to buy, rent, or supply servers for the back-end code to run on.

Fig: Serverless Architecture (source: G2crowd)

 

Broad benefits

Green computing – A recent survey indicates that more than 75% of users will be on a serverless architecture in the next 18 months. According to Forbes, “typical servers in business and enterprise data centers deliver between 5 and 15 percent of their maximum computing output on average over the course of the year.” Also, there is a growing demand for larger data centers which would imply more physical resources and associated energy requirements. In typical ‘server’ settings, servers remain powered up even though they may be idle for long durations. Needless to say, this has a huge impact on the environment.

No doubt, cloud infrastructure did help to a great extent in reducing this impact by providing servers on demand. However, it has also worsened the situation in one respect: servers are left running without proper capacity management. In trying to make capacity decisions that keep applications viable for longer, enterprises end up being over-cautious and over-provisioning.

Luckily, the serverless architecture addresses this issue effectively and vendors provide enough capacity to manage the needs of customers in real-time thereby enabling a better use and management of resources across data centers.

To cite an instance, Parkar was responsible for evaluating and optimizing the ETL jobs for one of the largest telecom companies in the US. The rationale was to reduce operational and maintenance costs and make the process more efficient. We observed that they were using a third-party ETL product and expensive compute cycles on a server running 24×7 to perform these processes. By leveraging serverless architecture, the jobs were transformed into a utility-based model in which cost was incurred only for actual function runs.
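For a feel of what such a transformation looks like, here is a hedged sketch of an event-driven ETL function in Python: it runs only when a new object lands in S3, rather than on a server polling around the clock. The bucket names and the transform itself are invented for illustration.

```python
# Sketch of utility-based ETL on AWS Lambda, triggered by S3 events.
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    for record in event["Records"]:            # standard S3 event shape
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        transformed = body.upper()             # stand-in for a real transform
        s3.put_object(Bucket="processed-data-bucket",   # hypothetical target
                      Key=key, Body=transformed)
```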

The outcome? Humongous savings, a happy customer, and astounding results:

 

Lower costs – The easiest and perhaps most effective way of offloading IT overhead is to go serverless. There is no investment in server maintenance, and you pay only when your code runs. This also reduces operational costs. Besides, you save big on cloud administration and the cost of managing associated teams.

Fast deployment – There is a huge rise in developer productivity since they can now build, test and release in a highly agile environment. They don’t have to worry about the readiness of the infrastructure or when other elements pertaining to it are ready to be rolled out. An effort is being made by cloud service providers to provide a development environment that’s standard for all. A classic case in point was the announcement of AWS Lambda supporting C# in 2016. Although standards are known to impede vendor innovation, they also indicate a healthy inclination to embrace serverless architecture.

Reduced time to market – When faced with tight deadlines, serverless computing gives developers the advantage of running multiple versions of code simultaneously. This helps transform ideas into reality in the most effective manner. For instance, if they had to develop functionality that helps mobile users check their credit score as part of the mobile banking app, they would require several days before they could actually develop, test and deliver using traditional cloud IaaS models like AWS EC2. On the contrary, event-driven serverless computing with AWS Lambda could help them build the same functionality in just a few hours. In just a few clicks, they could develop functionality that’s foolproof, checked, flexible and scalable.

Built-in scaling – Built-in scalability is a huge advantage, and enterprises never have to worry about over- or under-provisioning when setting scaling policies. All you pay for is actual usage, and the serverless infrastructure expands or shrinks as required.

Disaster recovery – In a pay-per-use model, failover infrastructure comes as part of the CSP portfolio. Setting it up in paired regions of the geography in question is no big deal and comes at a fraction of the cost of traditional computing in server settings. This facilitates a seamless switchover, ensuring that the recovery time is virtually zero.

The scope

Experts continue to deliberate on the pros and cons of going serverless. As mentioned earlier, loss of control over infrastructure has been a major concern as the earlier settings allowed developers to customize or optimize the infrastructure as needed. Besides, some have voiced concern about security considering that multiple customers share the same serverless architecture. Measures are being implemented by vendors to address this issue by providing serverless offerings in a virtual private network. Also on offer is cloud portability to enable smooth transitioning from serverless offerings of one vendor to another. Cloud service providers are also involved in vulnerability scanning and penetration tests on infrastructure to help iron out compliance issues.

According to the Cloud Native Computing Foundation (CNCF), “there is a need for quality documentation, best practices, and more importantly, tools and utilities. Mostly, there is a need to bring different players together under the same roof to drive innovation through collaboration.”

Serverless adoption advances towards a point of maturity

As with the containers market, the current proliferation of open-source serverless frameworks should decrease and converge in the coming months. The consolidation is a major indication of how serverless adoption is maturing with time. Everyone was in awe of Amazon Web Services’ Lambda, and before we knew it, all the big players were jumping onto the serverless bandwagon. Last year, Google, along with 50 other companies including Pivotal, IBM, Red Hat, and SAP, introduced Knative, the open-source platform for cloud-native applications on Kubernetes. It was soon touted as developer-friendly software. Through its essential but reusable set of components, it came as a breath of fresh air for those struggling with the difficult aspects of building, deploying, and managing applications. The all-in-one orchestration platform was just what integrated teams needed to operate together. In times to come, this orchestration maturity will reach a whole new level, unleashing new possibilities for larger and more complex applications.

The emergence of sophisticated testing tools

When it comes to serverless applications, testing practices get even more complex. Serverless architecture brings together separate, distributed services, which must be tested both independently and in combination to check the effects of their interactions. In addition, it depends on cloud services and event-driven workflows that cannot easily be imitated for local testing. Integration testing has emerged to address these challenges, which are very different from those of conventional monolith testing. This is just the beginning of an era of more sophisticated frameworks and tools that will change the game in a radical way.

From the climate perspective

Data centers are projected to account for 4.5% of global electricity consumption in 2025, reiterating the need to find ways to curtail consumption. When we think of big corporations like Microsoft, we can only imagine the scale at which they run their data centers.

They are focusing on the following areas to minimize environmental impact:

Operational efficiency – Microsoft leverages the power of multi-tenancy to increase hosting density (the number of applications per server) and save on servers and related hardware, cooling, and so on. As per a study by Microsoft, increasing the load on a server from 10% to 40% increases its energy requirements by a factor of only 1.7. Increased server utilization in the cloud thus lowers the overall consumption of computing power.

Hardware efficiency – In a bid to reduce power consumption, Microsoft spends lavishly on improving server hardware design. Many of its innovations are then given out as open-source designs.

Infrastructure efficiency – Data centers are critically evaluating their Power Usage Effectiveness, or PUE, factor. A value of 1 would mean all energy goes to computing and none to lighting, cooling, and other overheads, so large cloud providers continually work on improving their PUE. Microsoft, for instance, achieves an average of 1.25 across all its new Azure data centers.
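PUE itself is simple arithmetic, as the toy calculation below shows; the energy figures are illustrative assumptions, not measured data.

```python
# Worked example of the PUE metric: total facility energy divided by
# energy delivered to IT equipment. Figures are illustrative.
total_facility_kwh = 1_250_000   # power, cooling, lighting, IT combined
it_equipment_kwh = 1_000_000     # energy that actually reaches servers

pue = total_facility_kwh / it_equipment_kwh
print(f"PUE = {pue:.2f}")        # 1.25, matching the Azure average cited
```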

Utilizing renewable energies

While Google is already operating with 100% renewable energy, Microsoft is inching its way to achieving its target of 60% renewables by 2019 and 70% by 2023.

In closing

The learning curve is steep when it comes to the FaaS journey. As Winston Churchill rightly said, “The farther back you can look, the farther forward you are likely to see.” The underlying ecosystem still has a long way to go before it matures into a solution devoid of setbacks and complications. Until then, we need to focus on the myriad benefits of the operational environment it creates and how it’s changing the tide for developers worldwide, ensuring a smoother sail towards their goals. Going serverless is certainly not going to be a cakewalk, especially when traditional server settings dominate our systems. It should, though, be looked at as an approach to be embraced with open arms by developers, team leads, project managers, and everyone concerned, while staying fully aware of what going serverless entails. What modern enterprises need is a well-planned architecture; otherwise the initiative can quickly descend into chaos.

To make your serverless initiatives work, call us today. Parkar Consulting is committed to helping you thrive on a serverless network and ensuring minimal impact on the environment through all its endeavors. We can help your development teams perform optimally keeping energy consumption to a bare minimum while enjoying increased efficiency. Call us today and we will tell you how.

Orchestrated PaaS: A Product-Centric Approach


“We must indeed, all hang together, or most assuredly we shall all hang separately.” – Benjamin Franklin

Franklin reportedly issued this as a warning during the Revolutionary War of 1776.

Fast forward to the technological war room of the 21st Century, unity is still the key to victory, especially when it comes to the ‘Battle of the Clouds’!

Cloud technology is at a chaotic crossroads. When cloud adoption soared across industries, a plethora of companies jumped on the bandwagon with a short-sighted strategic approach. They quickly realized that the future of cloud computing does not lie in the implementation of multiple cloud resources but in the holistic adoption of cloud in all its forms. In other words, the value of cloud services increases exponentially when they function as a single, cohesive, orchestrated unit.

What is Cloud Orchestration?

Let’s take the example of a mechanical watch. How does it work? It involves the functioning of numerous interconnected gears that work in perfect harmony to measure the passage of time. The end result looks something like this:


The number of gears (or jewels) is directly proportional to the price of the watch. Why? Simply because every additional gear substantially improves the accuracy of the result. Fascinating stuff, isn’t it?

This is how cloud orchestration works. If you consider every independent cloud deployment and functionality as a gear, then orchestration is the process of bringing together every moving cloud part and tying them together in a single, unified workflow.

Benefits of disparate cloud resources working in tandem include the likes of high availability, scaling, failure recovery, and dependency management. DevOps can boost the speed of service delivery while reducing costs and eliminating potential errors in provisioning, scaling, and other processes.

In this case, the end result looks like this:


And this is the groundwork on which our story is based.

PaaS – The Realm Beyond Cloud Orchestration

Platform as a Service or PaaS is a type of service deployment in the cloud that takes a product-centric approach and goes beyond orchestration. It aims to meet the basic infrastructure and platform needs of developers to deploy applications. They do not need to handle mundane tasks. Instead, they can use APIs to develop, test, and deploy industry solutions. For this purpose, they are generally hosted and deployed as web-based application-development platforms. This gives developers the flexibility to provide end-to-end or partial online development environments.

While it does help to orchestrate containers, the main function of Orchestrated PaaS lies in setting up choreographed workflows. This makes it relevant for software solutions that want to primarily focus on the development cycle of the software and the monetization of new applications. By deploying agile tools and techniques, companies can accelerate application development, reduce compartmentalization, increase collaboration, and boost scalability.

Apart from these, the primary reasons to implement an orchestrated PaaS strategy are:

  • Accelerated application development
  • Quicker deployment
  • Faster go to market
  • Organization-wide collaboration
  • Hybrid cloud flexibility
  • Enterprise-grade security

There are two basic types of Platform as a Service deployment: ‘Service Orchestration’ and ‘Container Orchestration’.

Service Orchestration

These are public PaaS solutions and function as a bundled package for individuals, startups, and small teams. Being public in nature, they come with certain limitations in the depth of integration they offer. Hence, they can be a difficult choice for organizations looking for company-wide standardization.

But in situations where quick prototyping and deployment are needed, with the ability to get past compliance hurdles, public PaaS solutions can come to the rescue.

Container Orchestration

Container orchestration includes private PaaS solutions that function as a closed system. It does not focus on where the product or application runs; it concentrates simply on keeping the resulting service running. For instance, the end result can be loading certain web pages without any latency.

But modern enterprise IT has gradually brought about a change: teams are now concerned with the scale of the application, not just the underlying system.

The Coveted PaaS Model

To better understand how a PaaS framework can serve certain business scenarios, here are the typical characteristics of this model.

  • A single vendor owns every platform or application that is contained in the PaaS model.
  • Applications need to be developed from scratch and leverage a formalized programming model.
  • The services involved in the solution are common and stable.
  • All the roles of containers used in the business model are stable.
  • No industry-specific service or application is being used in the platform, and it is simple and easy to design and manage.

The whole idea of PaaS is to empower developers by helping them deliver value through their product without worrying about building a dedicated IT infrastructure.

Best Practices and Patterns of Orchestrated PaaS

The manner in which a PaaS system can be fundamentally orchestrated depends on its solution-specific application scenario, business model, and enterprise architecture. Based on this, integration patterns with other leading industry solutions can also vary. Various patterns in which PaaS can be implemented include:

  • Embedded PaaS

This is implemented within an industry solution and becomes a part of it. An example is a cloud-enabled integrated information framework. In such a system, only certain parts or functions of the whole system are deployed as PaaS solutions; the rest of the solution is not hosted on the cloud.

  • Value-added PaaS

Functions as ‘PaaS on an industry’ and includes industries that host value-added PaaS solutions that customers can use in tandem with their core industry offerings. Primary functions and infrastructure are maintained outside the cloud environment. An example is a cloud-based, self-service telecommunications service delivery platform that empowers customers to quickly deploy value-added PaaS functionalities from the ground up.

  • Bundled PaaS

The core function or solution of the industry is bundled together in the same PaaS environment. The end result is an industry-specific PaaS solution that empowers the entire business model of the company to function as an independent node in the ecosystem.

The World of Containers: Building Blocks of PaaS

In the elementary sense, containers are what have made PaaS possible in the first place. All the code a function needs can be bundled into a container, and the PaaS builds on that to run and manage the application.

 

Although PaaS boosts the productivity of developers, they have little wiggle room. But further technological development has now made the autonomous existence of containers possible, with leading software solutions such as Docker, Kubernetes, and Red Hat OpenShift.

With these applications, developers can now easily define their app components and build container images. Apps can now run independently from platforms, paving the way for more flexible orchestration.

Software-Driven Container Orchestration

Here’s a close look at the software making PaaS orchestration possible at the container level.

1. Docker

Docker is an open platform for developing, running, and delivering applications. It enables users to treat their infrastructure like a managed application. As a result, developers can quickly ship code, test apps, and deploy them, reducing the time gap between writing and running code; a short sketch follows the list below.

Benefits of Docker for PaaS orchestration include:

  • Faster delivery of applications.
  • Easy application deployment and scaling.
  • Achieving higher density and running more workloads.
  • Eliminating environmental inconsistencies.
  • Empowering developer creativity.
  • Accelerating developer onboarding.
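Here is that sketch: an illustrative build-and-run loop with the Docker SDK for Python (docker-py), showing how Docker shortens the gap between writing and running code. The build path and image tag are hypothetical.

```python
# Illustrative build-and-run loop with the Docker SDK for Python.
import docker

client = docker.from_env()

# Build an image from a local directory containing a Dockerfile.
image, build_logs = client.images.build(path="./app", tag="myapp:dev")

# Run the freshly built image and capture its output.
output = client.containers.run("myapp:dev", remove=True)
print(output.decode())
```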

2. Kubernetes

Kubernetes is another popular container orchestration tool that works in tandem with additional tool sets for functions such as container registry, discovery, networking, monitoring, and storage management. Multiple containers can be grouped together and managed as a single entity to co-locate the main application (see the sketch after the feature list below).

Features of Kubernetes include:

  • Algorithmic container placement that selects a specific host for a specific container.
  • Container replication that makes sure that a specific number of container replicas are running simultaneously.
  • An auto-scaling feature that can autonomously tweak the number of running containers based on certain KPIs.
  • Resource utilization and system memory monitoring (CPU and RAM).
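And here is the promised sketch: a hedged example of container replication using the official Kubernetes Python client, asking the cluster to keep three replicas of a hypothetical image running. It assumes a reachable cluster and a valid kubeconfig.

```python
# Sketch of container replication via a Kubernetes Deployment.
from kubernetes import client, config

config.load_kube_config()   # assumes a valid local kubeconfig

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="catalog"),
    spec=client.V1DeploymentSpec(
        replicas=3,   # Kubernetes reconciles toward this count
        selector=client.V1LabelSelector(match_labels={"app": "catalog"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "catalog"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="catalog",
                                   image="catalog-service:1.0"),
            ]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(
    namespace="default", body=deployment)
```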

3. Red Hat OpenShift

This is a unique platform, a combination of Dev and Ops tools functioning on top of Kubernetes. Its aim is to streamline application development and manage functions like deployment, scaling, and long-term lifecycle maintenance.

Various features of the tool include:

  • Single-step installation for Kubernetes applications.
  • Centralized admin control and performance optimization for Kubernetes operators.
  • Contains functions, such as built-in authentication and authorization, secrets management, auditing, logging, and integrated container registry.
  • Smart workflows, such as automated container build, built-in CI/CD, and application deployment.
  • Built-in service mesh for microservices.

In fact, OpenShift has become the go-to platform for implementing PaaS orchestration.

At Parkar, we recently came across a project where the client was looking to develop a next-gen platform that increased speed and incorporated innovation into their existing technological ecosystem. Our developers used OpenShift as the orchestrated container platform and significantly reduced the time to market.

The decision paid off with significant metrics and the following project results were realized:

Conclusion

It is safe to assume that successful cloud orchestration opens the door to a number of benefits for the entire cloud ecosystem. These include enforced best practices, simplified optimization, unified automation, improved visibility and control, and business agility. The PaaS construct functions as a layered model to deliver specific applications and services. It also improves the end result with abilities like rapid time-to-market, future-proofing, and investment protection to support all-round cloud-based digital transformation.

Application Containerization Assessment


Containerization seems to be the buzzword these days, and I&O (Infrastructure and Operations) leaders globally are eagerly adopting container technology. As per Gartner, the containerization wave will sweep across organizations worldwide, with 75% of them running containerized applications in production by 2022, as opposed to the current 30%. That said, the present container ecosystem is still in its nascent stage. There’s a lot to get out of containerized environments, provided containerization is a good fit for your organization. A detailed assessment therefore becomes a mandate, to ensure you have a solid business case that makes the additional layer of complexity and cost incurred in deploying containers worth your effort. It wouldn’t be wrong to say that running containers in production still presents a steep learning curve that many are trying to climb.

The dilemma 

To containerize or not to containerize is a question that continues to plague the minds of many. While moving traditional monolithic workloads to the cloud seems like a great idea, organizations need to seriously ponder whether moving the workload is indeed the right thing to do. Many are going with the ‘lift and shift’ approach of moving the application into a virtual machine (VM), but the pertinent question is: does containerization help your case? Applied correctly, it will not only modernize legacy applications but also create new cloud-native ones that run consistently across the entire software development life cycle. What’s even better is that these new applications are both agile and scalable. While deploying containers in production environments, I&O teams need to mitigate operational concerns pertaining to availability, performance, and integrity. At Parkar, we look at all your deployment challenges critically. We’ve identified the key elements that can help you decide how eligible your applications are for containerization.

Here’s a quick lowdown on the assessment. Take a look.

Now let’s deep-dive into the details.

Is your platform a containerized version?

This should not be too difficult, considering that vendors have already taken care of it. Commonly used platforms such as Node.js, Drupal, Tomcat, and Joomla have handled the nitty-gritty to ensure that the app you use offers scope for digital transformation and can be adapted effortlessly into a containerized environment. For starters, begin with an inventory of all internally developed applications. Check whether the software being used allows containerization. If yes, you can extract the application configuration, download its containerized version, and voilà, you are good to go. The same configuration can be fine-tuned to run in that version and subsequently deployed in a shared cluster in a configuration that is even cheaper than its predecessor.

Do you have containerized versions of 3rd party apps?

With the vast majority of legacy apps being converted into containerized versions, third-party vendors are also realizing the benefits of jumping onto the containerization bandwagon. When you choose containers over VMs, you also eliminate the need for a full guest OS and its license fee. This leads to better cost management, as you avoid paying for unnecessary components. As a result, vendors too are now offering containerized versions of their products, commercial software among them. A classic case in point: Hitachi offers its SAN management software on containers, a value-add to its existing full-server versions. Infrastructure servers deployed at data centers are good examples of application containerization. Talk to your vendors and they will tell you if they offer any. For all you know, the road to application containerization may be smoother than you think.

Do you have a stateless app?

When an application program does not save client data generated in one session for use in a future session, even for the same client, it is called a stateless app. The configuration data it stores is often a temporary cache rather than permanent data on the server. Typically, Tomcat tiers and many other web front ends are good examples of stateless apps, where the tier’s role is merely to do processing. The stateless tiers of any application, once separated out, automatically become eligible for containerization thanks to the flexibility they attain. Rather than being run at high density, they can now be containerized to facilitate simpler backups and configuration changes to the app. While these are good targets, storage tools such as Ceph, Portworx, and REX-Ray also make good candidates, except that they will require a lengthier process to containerize. Post-makeover, they become great targets.
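To make “stateless” concrete, here is a minimal sketch of a stateless HTTP service in Python using Flask; the endpoint and pricing logic are invented for illustration.

```python
# A stateless service: every request carries everything needed to answer
# it, and nothing is stored between sessions, so any replica (or
# container) can serve any request.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/price")
def price():
    # All inputs come from the request itself; no session or server-side
    # state is kept.
    quantity = int(request.args.get("quantity", 1))
    return jsonify(total=quantity * 9.99)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```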

Is your app part of a DevOps and CI/CD process?

If the answer is yes, then migrating to containers will be a cakewalk for you. All you need to do is package the apps in containers and integrate them with your servers. As you gradually gain confidence that the deployment has been well received and the app is working as desired, you can bring container orchestration platforms into the picture and enjoy a host of advantages, resilience and efficiency topping the list. Companies have started realizing the benefits of app containerization and have begun modifying their existing CI/CD pipelines to create a more efficient and robust infrastructure. Apart from the obvious benefits, containerization goes a long way in testing and deploying new code, and even rolling back code that is not performing well. For those who thrive on agile development, this feature is definitely a huge savior.

Are you using a pre-packaged app?

It’s easy to containerize an application if it is already packaged as a single binary or a JAR file, since both are fairly flexible. Java apps packaged as JAR files can be easily converted to containerized versions, carrying their JRE environment into the container in the process. This ensures faster, simpler deployment and also gives users the freedom to run several versions of the Java runtime side by side on the same servers, which is possible purely because of the isolation that containers offer.

How secure is the environment?

A container-based application architecture comes with its own set of security requirements. Container security is a broad term and includes everything from the apps containers hold to the infrastructure they depend on. The fact that containers share a kernel and don’t work in isolation makes it important to secure the entire container environment. The Linux 3.19 kernel, for instance, exposes about 397 system calls to containers, a clear indication of the size of the attack surface: a small breach in any one of them could jeopardize the security of the entire kernel. Also, containers such as Docker containers have a symbiotic arrangement and are designed to build upon each other. Security should be continuous and must gel well with enterprise security tools, and it should be in line with existing security policies that balance the networking and governance needs of containers. It is important to secure the containerized environment across the entire life cycle, including but not limited to development, deployment, and the run phase. As a rule of thumb, products that offer whitelisting, behavioral monitoring, and anomaly detection should be used to build security into the container pipeline. What you get is a container environment that can be scaled as required and completely trusted.

Resource Requirements

As opposed to VMs, which require more resources, containers occupy just a minuscule portion of the operating system and are far less resource-intensive; several containers can be accommodated on a single server with ease. However, there are edge cases where many containers are needed to replace a single VM, and that can erode the potential savings. One VM is equivalent to an entire computer; if you were to divide its functions into 50 distinct services, you would actually be investing in not one but 50 partial copies of the operating system. That is something you definitely need to weigh before deciding whether containerization is for you.

Other considerations

Several other considerations determine whether your apps are containerization-worthy. You need to take into account factors such as storage, networking, monitoring, governance, and life cycle management. Each has a vital role to play and can be a critical component in the decision-making process.

Ask the experts

Parkar recently undertook an application modernization project for a prominent healthcare company, where we were tasked with evaluating multiple applications for containerization readiness. We worked on one of the critical business applications and chalked out a roadmap to modernize and containerize it without compromising security or availability. We migrated the application to the OpenShift platform, with separate containers for the frontend and backend layers, and scaled it both horizontally and vertically.

Here's what we achieved:

[Figure: results of the containerization engagement]

Summing up

Containerization comes with a world of benefits. From letting you put several applications on a single server to supporting a unified DevOps culture, containers give you the agility and power to perform better in a competitive environment. Since they run on an operating system that has already booted up, they are fast, and ideal for apps that need to be spun up and down frequently. Being self-contained, they are portable and can be easily moved between machines.

Many modern organizations rely on DC/OS to host containerized apps, because it consolidates infrastructure into a single logical cluster and offers benefits that include fast hosting, efficient load balancing, and automatic networking of containers. It also helps teams estimate the resources required and reduce operating costs for existing applications.

If you wish to know if containerization is right for you and want to unleash greater efficiencies, contact us today.

Container orchestration with Red Hat OpenShift and Kubernetes

974 608 Parkar Consulting & Labs

 

“The difficulty lies not so much in developing new ideas as in escaping from old ones”

– John Maynard Keynes

Containers have taken the world by storm! Many companies have begun to show a fervent interest in this next evolution of Virtual Machines. With a plethora of container definitions out there, let me just attempt a layman's explanation. A container, in simple terms, is something that helps your software run consistently irrespective of the underlying host environment. It ensures the predictability and compatibility of your application across a diverse landscape of infrastructure.

To exemplify this, suppose you borrow a movie from a friend, but it won't play on your PC because you don't have the right video player. A container plays the role of that video player: it ships with everything the application needs, filling the deficit left by the host PC.

We, at Parkar, have seen how container orchestration using Kubernetes and OpenShift has immensely helped companies get onto a fast-paced delivery and digital transformation journey. Let me attempt to break it down.

What really happens in production using containers?

Have you ever been to a symphony, an orchestra, or a music concert? You must have seen many artists doing what they do best, with the maestro at the center managing the entire show. Now imagine that in a production environment you need to manage the containers that run the applications and ensure there is no downtime. Modern applications, especially those based on a microservice architecture, usually consist of multiple services running in separate processes, each in its own container. A single application thus comprises multiple container-based services and multiple instances of each.

Think of a scenario where a container or service instance goes down. Ideally, another instance would need to start and serve in its place. How would you manually keep track of and handle this? Wouldn't it be easier if this behavior were automated, without any human intervention? This is exactly where container orchestration comes in.

Container orchestration is the automated process of managing, scheduling, and balancing the workload across the containers that make up an application. Orchestration tools take over the day-to-day management of containerized workloads and services.

The most widely used container orchestration platforms are open source Kubernetes, Docker Swarm, and enterprise Red Hat OpenShift. Kubernetes is the most popular orchestration tool out there and has become a name in itself!
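
For illustration, here is a minimal sketch using the official Kubernetes Python client to declare a Deployment with three replicas; the service name and image are hypothetical. Once applied, Kubernetes itself restarts or reschedules instances to keep three healthy copies running, which is exactly the "maestro" behavior described above.

    # Hedged sketch: declare the desired state (3 replicas) and let Kubernetes
    # replace failed instances automatically. Requires `pip install kubernetes`
    # and a reachable cluster configured in ~/.kube/config.
    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    labels = {"app": "patient-api"}  # hypothetical service name
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="patient-api"),
        spec=client.V1DeploymentSpec(
            replicas=3,  # the orchestrator keeps exactly this many running
            selector=client.V1LabelSelector(match_labels=labels),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels=labels),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(name="patient-api",
                                       image="registry.example.com/patient-api:1.0"),
                ]),
            ),
        ),
    )
    apps.create_namespaced_deployment(namespace="default", body=deployment)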

Our recent project involved building a next-generation application mobilization platform on PaaS, using OpenShift and Kubernetes. On this platform, an API marketplace was created that gave access to data from legacy applications and enabled the creation of new applications in a rapid and scalable manner.

But is Kubernetes enough in itself?

Kubernetes has become a brand name, and is perhaps the most efficient of them all. However, Kubernetes is just a container orchestration tool; it needs to be supplemented with an additional toolset for container registry, discovery, networking, monitoring, and storage management. Kubernetes also depends on service mesh tools like Istio for service-to-service communication. Several architectural and integration considerations have to be met to make all of this work together, and building container-based applications requires even more integration work with middleware, frameworks, databases, and CI/CD tools. To augment base Kubernetes, Red Hat OpenShift combines all these auxiliaries into a single platform, presenting a more complete solution for DevOps.

Let us understand what the Red Hat OpenShift Platform is, shall we?

The Red Hat OpenShift platform is a combination of Dev and Ops tools on top of Kubernetes that streamlines application development, deployment, scaling, and long-term lifecycle maintenance for small and large teams in a consistent manner. In other words, it is a 'Kubernetes Platform as a Service'.

What's the big advantage Red Hat OpenShift brings over bare Kubernetes? It lets teams start building, developing, and deploying easily and quickly, in an infrastructure-agnostic way, i.e. whether in the cloud or on-premises.

Image Source: https://www.openshift.com/learn/what-is-openshift

Parkar's clients have seen tremendous benefits with containers, across application deployment frequency, time to deploy, and total number of deployments. With these benefits, customers could roll out features much faster.

Fig: Benefits realized by Parkar's Clients using Containers

What more does Red Hat OpenShift offer?

A lot, I would say! Here are some of the top benefits:

Full-stack automated operations

Red Hat OpenShift offers automated installation, upgrades, and life cycle management for every part of your container stack.

  • It provides a single-step installation for Kubernetes applications.
  • It gives centralized administrative control for over-the-air updates and performance tuning with Kubernetes operators.
  • It offers continuous security through built-in authentication and authorization, secrets management, auditing, logging, and integrated container registry.

Developer Productivity

Red Hat OpenShift supports well-known developer tools and helps streamline your build and delivery workflows.

  • It lets developers code in production-like environments through self-service and Red Hat CodeReady Workspaces.
  • It extends support for your choice of languages, databases, frameworks, and tools.
  • It offers streamlined workflows like automated container builds, built-in CI/CD, and application deployment.

Built-in service mesh for microservices

Microservices architectures can cause communication between the services to become complex to encode and manage. The Red Hat OpenShift service mesh abstracts the logic of interservice communication into a dedicated infrastructure layer, so communication is more efficient and distributed applications are more resilient.

  • It’s installed and updated via Kubernetes operators.
  • It incorporates Istio service mesh, Jaeger (for tracing), and Kiali (for visibility) on a security-focused, enterprise platform.
  • It frees developers from being responsible for service-to-service communication.

A unified Kubernetes experience on any infrastructure!

Red Hat OpenShift provides a consistent Kubernetes experience for on-premises and cloud-based deployments. With unified operations and admin controls, workloads are decoupled from infrastructure, so less time is spent on system maintenance and more on building critical services.

In conclusion, the Red Hat OpenShift platform does simplify quite a lot for IT teams in terms of abstraction, efficiency, automation, and overall productivity. The old proverb 'Necessity is the mother of invention' rings true throughout this article: at every step there are tools to ease operations and handle them efficiently.

Parkar can help your organization in reaching your business goals. Contact us for more information about how we can make a difference.

References:

  1. https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/
  2. https://www.openshift.com/learn/what-is-openshift

 

Top 5 Things to remember when defining a Microservices Architecture

1280 800 Parkar Consulting & Labs

“Nothing exists except atoms and empty space; everything else is opinion.” 

– Democritus

In other words, what’s the lowest unit of work that programmers can deal with easily without making the whole application one big inseparable monolith?

In order to stay ahead of the curve, organizations must transform themselves at different levels; leadership transformation and digital transformation are examples of such initiatives. Our focus here will be on Digital Transformation and how to measure its effectiveness.

CIOs now have critical KRAs around 'Time to Market', 'Faster Delivery', and 'Business Availability'. Can a Microservices architecture underpin these metrics and create a foundation that helps CIOs improve them? Most CIOs have a legacy platform or monolithic application to deal with, and among their many priorities is transforming that monolithic application into independent, easily manageable services.

While there is urgency, it is important to first understand the what, the why, and the how of Microservices Architecture.

Having worked in diverse industries helping midsize to Fortune 500 companies build their empires, at Parkar, we have seen how well-designed microservices have helped companies get on the fast-paced delivery and digital transformation journey.

In his book "Business @ the Speed of Thought: Succeeding in the Digital Economy", Bill Gates stresses the need for leaders to view technology not as overhead but as a strategic asset. If you haven't yet jumped onto the Microservices journey, you will lag way behind your competitors, and before you know it, it will be too late.

Today's world is all about focusing on the core business. Your IT services and applications should scale to the speed of the business. With legacy applications, that is going to be tougher, and you need someone who has done it before to navigate the complexities.

Microservices architecture is a methodology in which the application is divided into a set of smaller services, known as Microservices. It is based on proven software engineering concepts around agility, APIs, and continuous integration/continuous deployment (CI/CD).

In a practical sense, Microservices are SOA done right.

In various Parkar engagements, the Microservices have been implemented using different languages and technologies. During implementation, we interface the back-end services through an API gateway, which provides a unified interface to front-end client applications.
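
Purely as an illustration (the service and field names are invented, not taken from any engagement), a single back-end Microservice can be as small as this Python sketch; the API gateway then exposes it to clients alongside its peers:

    # A minimal Microservice sketch with Flask: one bounded responsibility,
    # its own process, reachable over HTTP behind an API gateway.
    from flask import Flask, jsonify, abort

    app = Flask(__name__)

    # Stand-in data store; a real service would own its own database.
    PATIENTS = {"p1": {"id": "p1", "name": "Jane Doe", "allergies": ["penicillin"]}}

    @app.route("/patients/<patient_id>")
    def get_patient(patient_id):
        patient = PATIENTS.get(patient_id)
        if patient is None:
            abort(404)
        return jsonify(patient)

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=5000)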

For one of Parkar's customer engagements, even advanced analytics was incorporated to gain better insight into data, and predictive models were deployed using machine learning algorithms.

For scaling (instancing up and down), life-cycle management, and load balancing, we at Parkar have relied on orchestration and load-balancing services such as Kubernetes.

Digital Patient Care sub-application built with Microservices

Parkar Consulting Lab had a recent engagement with a healthcare organization, where we implemented the Microservices stack using the right architectural principles.

Among the services and applications we enabled was a patient care application that queried patient history and verified drug allergies and family medical history. The application consisted of several components, including a Web Client implementing the user interface, along with backend services for checking patient records, maintaining history, and producing related analytics and visual graphs.

The application consisted of a set of services. A cross-section of the application is as below:

Fig 1: Example of Microservice

Benefits of Microservices

Much has been said about the advantages of Microservices based architecture over monolithic architecture. In short, we can list the following as a set of deciding parameters which make the Microservices architecture attractive.

Fig 2: Microservices value realization in some Parkar implementations

Doing it Right is Critical for the Digital Transformation Journey!

Choosing the right partner for such a transformation project involving Microservices can be challenging. Understanding the business nuances and then architecting the solution with the future in perspective requires deeper understanding.

We've worked with scores of customers, and what we've learnt is that they are all poles apart from one another. Each has its own parameters, and the level of urgency and degree of dependence on the services differ too. There is a right architecture for every use case, scale, and target consumer (B2B vs. B2C).

No matter which one you choose, it should help you reduce development and operating costs while giving you added functionality and power.

Without further ado, we'll proceed to the key recommendations from our success stories and learnings.

Top 5 things to remember while adopting Microservices architecture

1. Defining logical domain boundaries clearly

This includes:

  • Domain data management (separate or shared Databases between one or more Microservices)

Fig 3: Domain Data Model

  • Well-defined interfaces for communication mean that each Microservice does exactly what it is supposed to do and nothing else, so there is no overlap of purpose or functionality across Microservices.

  • Each Bounded Context of the domain maps to exactly one business Microservice, which in other words means the Microservice is cohesive within itself but not across other Microservices.
  • Events propagation and communication methods within application and from outside application (HTTP/HTTPS or AMQP).
  • API Gateway as a single point of interfacing for clients (API Gateway or Backend-for-Frontend pattern); a minimal sketch follows this list.
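
To make the API Gateway idea concrete, here is a hedged, minimal Python sketch of a reverse-proxy gateway; the routing table and service hostnames are hypothetical, and a production gateway would add authentication, caching, and rate limiting:

    # Minimal API-gateway sketch: one public entry point that forwards
    # requests to internal Microservices by path prefix.
    from flask import Flask, Response, request
    import requests

    app = Flask(__name__)

    # Hypothetical routing table: path prefix -> internal service base URL.
    ROUTES = {
        "patients": "http://patient-service:5000",
        "history": "http://history-service:5000",
    }

    @app.route("/<service>/<path:rest>", methods=["GET", "POST", "PUT", "DELETE"])
    def proxy(service, rest):
        base = ROUTES.get(service)
        if base is None:
            return Response("unknown service", status=404)
        upstream = requests.request(
            method=request.method,
            url=f"{base}/{rest}",
            data=request.get_data(),
            timeout=5,  # fail fast instead of hanging on a sick service
        )
        return Response(upstream.content, status=upstream.status_code)

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)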

2. Security Architecture:

Security must be designed and implemented at multiple levels: authentication, authorization, secrets management, and secure communication.
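
As one small, hedged example of the authentication layer (the secret and claims are placeholders, not from any real deployment), a Microservice can verify a JWT bearer token on every request using the PyJWT library:

    # Illustrative JWT verification for service endpoints (pip install pyjwt flask).
    import jwt
    from flask import Flask, request, abort, jsonify

    app = Flask(__name__)
    SECRET = "replace-with-a-managed-secret"  # placeholder; load from a secrets store

    def require_token():
        header = request.headers.get("Authorization", "")
        if not header.startswith("Bearer "):
            abort(401)
        try:
            # Raises jwt.InvalidTokenError on a bad signature, expiry, etc.
            return jwt.decode(header[len("Bearer "):], SECRET, algorithms=["HS256"])
        except jwt.InvalidTokenError:
            abort(401)

    @app.route("/records")
    def records():
        claims = require_token()
        return jsonify(user=claims.get("sub"), records=[])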

3. Orchestration:

Achieving scalability sounds simple, but it is by no means a simple task. Given the complex set of activities going on in parallel, orchestrating them at scale needs a well-thought-out plan, and choosing the right Microservices tools matters. Our experience at Parkar shows that Kubernetes and Helm are common technologies that have performed well for achieving scalability. It may look complex, but here are the key aspects one should look into (a sketch of one of them, the circuit breaker, follows the list):

  • Service registry and service discovery
  • IP whitelisting
  • Authentication and authorization
  • Response caching
  • Retry policies, circuit breaker
  • Quality of service management
  • Rate limiting and throttling
  • Load balancing
  • Logging and tracing
  • Managed orchestration
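
Of the aspects above, retry policies and circuit breakers are the easiest to show in a few lines. The following is a minimal, self-contained Python sketch of the circuit-breaker pattern, our own illustration rather than any particular library's API:

    # Circuit-breaker sketch: after repeated failures, fail fast for a cool-down
    # period instead of hammering a sick downstream service.
    import time

    class CircuitBreaker:
        def __init__(self, max_failures=3, reset_after=30.0):
            self.max_failures = max_failures
            self.reset_after = reset_after
            self.failures = 0
            self.opened_at = None  # timestamp when the circuit opened

        def call(self, fn, *args, **kwargs):
            if self.opened_at is not None:
                if time.time() - self.opened_at < self.reset_after:
                    raise RuntimeError("circuit open: failing fast")
                self.opened_at = None  # half-open: allow one trial call
            try:
                result = fn(*args, **kwargs)
            except Exception:
                self.failures += 1
                if self.failures >= self.max_failures:
                    self.opened_at = time.time()
                raise
            self.failures = 0  # a success closes the circuit again
            return result

A caller would wrap each downstream request, e.g. breaker.call(requests.get, url), so a failing dependency degrades gracefully instead of cascading through the system.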

4. Monitoring and health checks

Each service needs to be healthy to process requests, so monitoring service health is a key consideration. In one instance at Parkar, we saw a service instance that was running yet incapable of handling requests; on debugging further, we found it had run out of database connections. The key learning is that when this occurs, the monitoring system should generate an alert, and the load balancer or service registry should be designed intelligently enough not to route requests to the failed instance. Ideally, requests should go only to instances that pass the necessary pre-condition checks.
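
That incident is exactly what separate liveness and readiness probes are for. Below is a hedged Python sketch (sqlite3 stands in for the real database driver) of the two endpoints an orchestrator or load balancer could poll:

    # Liveness vs. readiness sketch: "running" is not the same as "able to serve".
    from flask import Flask, jsonify
    import sqlite3  # stand-in for the real database driver

    app = Flask(__name__)

    @app.route("/healthz")
    def healthz():
        # Liveness: the process is up and answering HTTP.
        return jsonify(status="ok")

    @app.route("/readyz")
    def readyz():
        # Readiness: verify dependencies, e.g. that a DB connection can be opened.
        try:
            conn = sqlite3.connect("app.db", timeout=1)
            conn.execute("SELECT 1")
            conn.close()
        except sqlite3.Error:
            # The registry/load balancer stops routing traffic to this instance.
            return jsonify(status="unavailable"), 503
        return jsonify(status="ready")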

5. Deployment (DevOps and CI/CD practices and infrastructure)

Faster release cycles are one of the major advantages of Microservices architectures. But without a good CI/CD process, you won’t achieve the agility that Microservices promise.

For example, assume an organization has sub-teams working on different features of an application. With a monolithic application, if something goes wrong for, say, sub-team B, the release candidate of the whole application is broken and there is no production rollout.

Fig 4: Deployment in case of monolithic v/s Microservices

On a recent application modernization project, the Parkar team was tasked with reducing application deployment time while improving quality and the ability to add new features rapidly. With Microservices, we were able to develop and deploy services rapidly and decouple business functionalities to enable rapid deployment.

The key for us at Parkar has been following the Microservices philosophy, where there is no long release train that every team has to get in line for. The team that builds service "A" can release an update at any time, without waiting for changes in service "B" to be merged, tested, and deployed.

Now that we have looked at the key things to consider before you zero in on a Microservices architecture, there are certain questions you must ask your team or your IT service implementation partner before you invest your money and time.

What are these questions?

  1. Where to start: The starting point often becomes a major question for senior leaders. Are you developing an application from scratch, or are you transforming a legacy application into a Microservices-based architecture?
  2. How to segregate: What are the logical domains you could segregate your application into? Methods such as the strangler strategy can help you migrate in phases.
  3. How do you want to separate front-end and back-end services?
  4. What is your plan for deployment? Do you have the right environments (dev, QA, staging, and production)? Is the deployment pipeline ready?
  5. Where to deploy: On-premises, hybrid infrastructure, a single public cloud, or multi-cloud?

Whenever Parkar gets involved with a customer, answers to these questions have helped in a big way to crystallize the outline of the Microservices architecture, and to plan an incremental migration of the customer application from its monolithic to its Microservices avatar.

Conclusion

Every enterprise has certain roadmaps when it comes to charting its journey. Clearly, the Digital Transformation journey is a multi-step evolution, and one of the key steps is moving to Microservices.

At Parkar, we have seen that the right Microservices architecture helps customer organizations on their roadmap, whether migrating from legacy to Microservices or building new services that are scalable, secure, and quick to deploy.

It's critical to ask the right questions (as mentioned above) and then follow through on at least the 5 key things before you embark on the Microservices journey en route to your digital transformation.

On this note, we shall let you ponder over the benefits and principles of Microservices architecture. Meanwhile, we at Parkar Consulting Lab, will gear up to bring you something more engaging from the world of Application Modernization and Digital Transformation. Stay tuned.

Next Phase of Digital Transformation with Machine Learning and Robotic Process Automation

1500 245 Parkar Consulting & Labs

You've launched an AI, ML, or RPA initiative (or several). You're seeing results. But are your operations and business benefits in line with those of best-in-breed financial firms?

Digital Customer Experience (DCX) has become the watchword of next-generation customer interaction. On the customer-facing side, technologies such as social platforms, chatbots, human-machine interfaces like Alexa and Siri, and virtual reality (VR)/augmented reality (AR) are dramatically changing how financial services firms interact with customers, whether individuals or businesses. On the back end, technologies like AI, machine learning, and advanced analytics are providing financial executives with unprecedented insight into customer desires and behaviour.

In sum, these technologies have the potential to radically transform financial services firms. They can spawn new lines of business, new products and new partner channels.

Anshul Srivastav, CIO and Digital Officer with Union Insurance, will be sharing his experience of transforming business lines (Life, Health, and P&C) with Digital, Cloud, Mobility, and Analytics, and his strategy for adopting key transformations like Machine Learning and Robotic Process Automation.

Know more about Anshul's views on Digital Transformation by reading his recent blog, "Anatomy of Fintechs that's redefining Financial services business models", on LinkedIn.

You’ll learn:

• The top use cases for AI, ML, and RPA

• How to define effective roles for IT and business in automation and digital Transformation

• How to implement the right data management and governance for your AI/ML/RPA initiatives

Parkar Consulting and Labs

Parkar Consulting & Labs is a boutique technology consulting firm, born out of Chicago and affiliated with 1871 Chicago.

Our practice consists of a strong pool of over 170 consultants specializing in delivering integrated engineering solutions in Product & Data Engineering to clients across industry verticals including Telecom, Retail, Information Technology & Services, and Healthcare.

The spectrum is structured across the following: end-to-end Product Development & Lifecycle Management; Database Management, Data Warehouse & Business Intelligence, Big Data & Analytics; Cloud (Assessment, Strategy, Migrations, Security); and Managed Services. Our Center of Excellence (CoE) offers a dedicated team of experienced domain experts in Product Management, Testing, DevOps, Automation, Oracle, and AWS.

Backed by a robust community at 1871, our R&D team at Parkar Labs is vested in niche technologies such as Blockchain, Machine Learning, Artificial Intelligence, and Security engineering, building cutting-edge solutions and new offerings that add value to a larger solution set.

 


Serverless Cloud Computing: current trends, implementation and architectures

1200 540 Parkar Consulting & Labs

It seems like everyone’s talking about serverless computing these days. But wait…when did we all stop using servers? In case you were wondering, servers are still necessary — serverless just means the average cloud user doesn’t have to worry about them anymore. You can now code freely without having to write an endless number of scripts to manage server provisioning.

 


Fig: The interest over time for Serverless Computing

Also called FaaS (Functions as a Service), serverless computing can bring nearly limitless, provision-less computing power to your applications. Here’s how it can make a difference for you.

Introducing Serverless Cloud Computing

Serverless computing, of course, still uses computers; it doesn't mean we're running applications without computing resources at all. It means that users of the serverless computing service don't have to provision virtual machines or directly manage the computers, freeing developers to focus on application development instead.

For companies offering SaaS (Software as a Service) or running their own computing tasks, it makes it a great deal easier to get the right computing resources and have applications run.

Users taking advantage of serverless computing for their applications find that it has a lot of practical value. In a business setting, serverless computing essentially extends what organizations are able to accomplish with their applications and enables them to provide greater value to their customers.

In fact, serverless computing is valuable for many reasons. For instance:

  • Scalability: Serverless computing makes it much easier to scale computing resources to meet the needs of your application.
  • On-demand computing: Computing resources are available immediately, whenever the application needs them or users trigger them. There's no waiting for computing time to become available; capacity is already there and can be deployed quickly or used on schedule.
  • Near-unlimited resources: Serverless computing resources can seem almost unlimited. Your application can use whatever it needs to run, even under demand you didn't plan for. While truly unlimited computing resources don't exist yet, serverless can get really close.
  • Time-to-market: If you're a developer, quickly getting the resources you need to ship your software is a really big deal.
  • Security: Wherever human error is possible, someone will eventually make a mistake. Because the provider patches and manages the servers, serverless computing helps protect against the inevitable, letting you focus on your work rather than on preventing every possible security problem.

For these and other reasons, serverless computing is now more popular than ever before. It helps companies achieve their computing needs without having to spend so much time on computing resource management.

Switching from traditional servers to serverless computing can generate mind-blowing savings; some adopters report cutting monthly costs from $10k to just $370. Wow.

Before and After the Serverless Era

Doing anything without serverless computing can feel fairly limiting once you've experienced the benefits, though getting to the point where this technology became truly available took a while.

Yesterday’s Cloud, Today’s Cloud & Tomorrow’s Cloud

Just like A Christmas Carol’s three ghosts, the cloud has three personalities, too—let’s talk about yesterday’s cloud, today’s cloud, and the cloud of the future.

Originally, just the idea of outsourcing your computing to another network was a big deal. That other network's servers could augment your existing resources and let you tackle much bigger projects than before. This was the very beginning of the cloud. With the Internet in its early days and basic server networks available to lend a little extra capacity, it held a lot of promise for early software development and operations.

That's yesterday's cloud. It had severe limitations, like very limited overall resources. It marked the beginning of SaaS, and data analytics was on the horizon but not yet a big thing. If you needed to scale, that might have required a discussion with your vendor and some changes onsite at their facilities.

At the end of the day, you were running virtual machines, and while you didn't have to worry about the hardware (someone else handled the maintenance), you still had to manage your computing resources closely.

Today, there's another cloud in town, one that's trying to free us from this close management. Cloud 2.0 has often been described in terms of data: big data, analytics, and information. With fewer data constraints, companies are free to make the most of data in new ways.

And tomorrow, the cloud’s continued growth will bring us even more possibilities, making data use more practical for a variety of different industry applications.

Serverless Implementation Examples

In recent times, many organizations have successfully transitioned their applications over to serverless computing.

For instance:

  • Trello
  • Soundcloud
  • Spotify

For event-driven applications, a move (or a partial move) to serverless makes sense. These applications rely heavily on user input that triggers the need for computing resources. Until specific events fire, the applications may need very little at all, but once a function triggers, the need for computing power spikes sharply. In many cases it's tough to scale these applications without readily accessible and affordable computing power.

Why Should You Move to Serverless?

Serverless is ideal for applications with many function-driven events, such as those triggered by a mouse click. It's great for systems that rely on user engagement and require big bursts of computing power at key moments. It would be hard to provision on-premises infrastructure for these needs, and it makes little sense to recreate resource management processes and micromanage machine use for software that works this way.

From a technical standpoint, it offers benefits such as:

  • Support for all major server-side languages and frameworks, such as Node.js, Python, Java, Scala, and Kotlin
  • Software lifecycle management from a single platform: you can build, deploy, update, and delete
  • Safeguards for smooth deployments and resource management
  • Minimal configuration required
  • Functions optimized for CI/CD workflows
  • Support for the automation, optimization, and best practices of enterprise computing
  • Fully extensible functions and frameworks

Key Steps in Migrating Existing Structure to Serverless

Making the transition to serverless computing doesn't have to be too difficult. As long as you start with a viable plan and a willingness to adapt, you shouldn't have too much trouble.

Here are a few steps to get you started. You'll set up an account with a provider and test it with your own function; from there, you can quickly start tailoring the service to your needs.

Adapt this test process to your own applications and business needs, but choose something simple so you can play around with your new account:

  1. Begin with an application, or an idea.
  2. Create an account with a serverless computing provider, such as AWS Lambda, Google Cloud, or Microsoft Azure.
  3. Prepare to test your new account. To do so, you’ll want to create two buckets. For one of the buckets, you’ll upload photos or another file type you’ll be transforming. The other will receive these files once you’re done.
  4. In your management console, you’ll now create a new function using the two buckets you just set up. Specify how the buckets will be used.
  5. Name your function and set it aside for later.
  6. Create a directory and set up your workspace on your local machine.
  7. Write a JavaScript file or other code to use files in your new account (here's an example using AWS; a Python sketch of this step follows the list).
  8. Upload it.
  9. Test your function.
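
For step 7, here is a hedged sketch in Python (AWS Lambda supports Python as well as JavaScript); the bucket names are hypothetical, and the function assumes an S3 "ObjectCreated" trigger is configured on the source bucket:

    # Minimal AWS Lambda sketch: copy each newly uploaded object from the
    # source bucket to a destination bucket, transforming it along the way.
    import boto3

    s3 = boto3.client("s3")
    DEST_BUCKET = "photos-out"  # hypothetical destination bucket

    def handler(event, context):
        for record in event["Records"]:
            src_bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            body = s3.get_object(Bucket=src_bucket, Key=key)["Body"].read()
            # Stand-in "transformation": real code might resize an image here.
            s3.put_object(Bucket=DEST_BUCKET, Key=key, Body=body)
        return {"processed": len(event["Records"])}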

Once you’ve tested the process, you can start looking at how existing code (and new, from-scratch code, too) can leverage serverless computing capabilities.

 

Is Serverless the Future of Cloud Computing?

With so many uses and so much promise, serverless is likely to keep playing a prominent role in the future of cloud computing. It's not for every application or company, but for event-driven functions that need a little (or a lot of) on-demand computing power, it makes sense.

Your business may benefit tremendously from making a move to the serverless cloud. Parkar can help your organization make sense of the cloud and how it can help you reach your business goals. Contact us for more information about how we can make a difference.

 

© 2018 Parkar Consulting Group LLC.