The four pillars for implementing an analytics solution in the cloud



Arvind Ravichandran, Associate Software Engineer
Bosch Engineering and Business Solutions

If you have read my previous article, you will have noticed my emphasis on IPSR, the four-letter acronym I use to arrive at an end-to-end architecture. Once you have arrived at an architecture that suits your needs by benchmarking and selecting the right tools and databases, IDDT is the next thing to look at.

In simple words, IDDT is giving life to your architecture by bringing what is on paper to reality. Planning the infrastructure, engineering the software development, practising DevOps and testing your system to make it resilient and robust is the essence of the IDDT approach. Rather than going too technical or too deep, I will keep it simple by sharing my experience of implementing this approach.

Infrastructure

Having been in this industry for a while, I am witnessing a massive shift of interest towards the cloud and cloud-based solutions (even the numbers prove it: Microsoft hitting $100 billion in annual revenue for the first time, and Amazon becoming the second company to hit a $900 billion valuation). This is obviously a win-win for everyone: the developers, the clients and of course the cloud providers. Plenty of articles on the advantages of the cloud have been circulating the web for ages, and I won't repeat them here. Personally, I love the cloud because I can provision whatever I need extremely quickly, experiment on it, and shut it down if I don't find it suitable, all in a matter of hours. I still remember my early days, when I spent weeks installing the Cloudera Hadoop distribution. It is also never easy to arrive at the right on-premise infrastructure sizing, as you have to estimate it in advance; in the cloud, however, you can easily scale as you grow.

There are three main cloud service models, namely IaaS, PaaS and SaaS. We as developers must decide on the model that best suits our use case in terms of time to develop, resources, cost and maintenance. To understand the three models, let's take the example of running a restaurant: providing just the building is IaaS, giving you a space in a shopping mall's food court is PaaS, and giving you a ready-made KFC outlet is SaaS.

Deciding on the right services to use in the cloud is an art, as there are various metrics to consider when choosing one service over another. As architects, we brainstorm together to get the right mix of cloud services, considering factors such as security, application type, storage, time to develop, ease of provisioning and so on.

Development

As the infrastructure is being provisioned, the development team starts building its code in parallel. The core area for any development team is its data processing layer, where the data load is heavy, resources are utilized to the maximum, complex logic and algorithms are designed to run in parallel using distributed processing frameworks, ML models are built from an understanding of the data, feedback pipelines are designed for those models, and the list goes on.
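
To make the distributed processing part a little more concrete, here is a minimal PySpark sketch of the kind of heavy, parallel aggregation that happens in this layer. The storage paths, schema and column names are hypothetical placeholders, not from any real project.

```python
# Minimal sketch of a heavy aggregation that Spark distributes across the
# cluster's executors. Paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("processing-layer").getOrCreate()

# Read raw events from distributed storage (placeholder path).
events = spark.read.parquet("s3a://datalake/raw/events/")

# Aggregate per device and day; Spark parallelizes this work automatically.
daily_usage = (
    events
    .withColumn("day", F.to_date("event_time"))
    .groupBy("device_id", "day")
    .agg(
        F.count("*").alias("event_count"),
        F.avg("sensor_value").alias("avg_sensor_value"),
    )
)

daily_usage.write.mode("overwrite").parquet("s3a://datalake/processed/daily_usage/")
```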

For any ML development, data pre-processing and feature engineering form the core. That core becomes strong only when the data scientists have a precise understanding of the data; as a result, models with high accuracy and minimal error are built. To achieve this in a production environment, you need a solid ETL layer that extracts data from various datastores (be it transactional or operational data) and transforms it (pre-processing and feature engineering, in data science terms). The extracted and transformed data is stored in a distributed storage layer (also known as the staging layer) for further processing by the ML algorithms.
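
The sketch below illustrates that extract-transform-load flow in PySpark, again only as a rough example: the JDBC connection details, table, columns and staging path are all hypothetical.

```python
# Minimal ETL sketch: extract from a transactional database, engineer a few
# features, and land the result in the distributed staging layer.
# The JDBC URL, table, columns and staging path are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-staging").getOrCreate()

# Extract: pull the operational/transactional data over JDBC.
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://ops-db:5432/shop")
    .option("dbtable", "public.orders")
    .option("user", "etl_user")
    .option("password", "***")
    .load()
)

# Transform: pre-processing and simple feature engineering.
features = (
    orders
    .dropna(subset=["customer_id", "amount"])
    .withColumn("order_day", F.to_date("created_at"))
    .withColumn("is_large_order", (F.col("amount") > 100).cast("int"))
)

# Load: write to the staging layer for the ML algorithms downstream.
features.write.mode("append").partitionBy("order_day").parquet(
    "s3a://datalake/staging/orders/"
)
```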

DevOps

One of the most vital questions to answer while building services for the cloud is 'How do we host them?'. These services can be hosted on dedicated servers, on virtual machines, as containers, or perhaps just as processes within a single server. To arrive at the right deployment for our services, we should be clear about the trade-off between isolation and density.

The VM was one of the coolest technologies of the 90s, and it was a boon not just to application developers but to everyone. For example, when an Apple fan wanted to enjoy a FIFA game that ran only on Windows, a VM gave him that possibility. In recent years, however, containers have become the dominant players, occupying the sweet spot on the isolation-versus-density scale with the advantages of both a VM and a process. We always try to go with containers, as they are lightweight and extremely fast. Packaging applications as container images and running them on Linux servers has also made deployment very simple.
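
For illustration, here is a minimal sketch of that build-and-run workflow using the Docker SDK for Python; the image tag, build context, port mapping and restart policy are hypothetical choices, not a prescription.

```python
# Minimal sketch: build an application image and run it as a container using
# the Docker SDK for Python (pip install docker). The image tag, build
# context, port mapping and restart policy are hypothetical placeholders.
import docker

client = docker.from_env()  # talk to the local Docker daemon

# Build the image from a Dockerfile in the current directory.
image, build_logs = client.images.build(path=".", tag="analytics-service:latest")

# Run the image as a detached container, mapping container port 8080 to the host.
container = client.containers.run(
    "analytics-service:latest",
    detach=True,
    ports={"8080/tcp": 8080},
    restart_policy={"Name": "on-failure", "MaximumRetryCount": 3},
)
print(container.short_id, container.status)
```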

Apart from deployment, the other two major responsibilities of a DevOps engineer are automation and provisioning for future upgrades. Automation includes not only scheduling but also designing efficient load balancing for the services.

Testing

In the Big Data and Machine Learning world, testing is complicated, as there are many overlapping standards within the industry and no consistent test levels or test types. Hence, I will just share, at a high level, how we approach testing for our solutions. Our main goal is to achieve resilient and reliable systems by minimizing the effect of failure and, in turn, designing the system to handle failure. Our core activity is performing a failure mode analysis: identifying possible failure points and defining how the application should respond to those failures, thereby making it self-healing.
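
One simple building block for that kind of self-healing behaviour is retrying transient failures with exponential backoff. The sketch below is a generic illustration; the fetch_from_datastore function is a hypothetical stand-in for any remote call that can fail temporarily.

```python
# Minimal sketch: retry a flaky call with exponential backoff so that
# transient failures (network blips, throttling) heal themselves.
import random
import time

def retry_with_backoff(func, max_attempts=5, base_delay=1.0):
    """Call func(); on failure, wait base_delay * 2**attempt (plus jitter) and retry."""
    for attempt in range(max_attempts):
        try:
            return func()
        except Exception as exc:
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
            print(f"Attempt {attempt + 1} failed ({exc}); retrying in {delay:.1f}s")
            time.sleep(delay)

def fetch_from_datastore():
    # Hypothetical placeholder for a remote call that can fail transiently.
    if random.random() < 0.7:
        raise ConnectionError("datastore temporarily unreachable")
    return {"rows": 42}

result = retry_with_backoff(fetch_from_datastore)
print(result)
```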

Apart from the above, typical software testing approaches are also conducted as part of the test activity: testing individual application components (component testing), their interaction with other applications (integration testing) and the complete layer as one system (system testing). Performance tests are conducted to uncover the system's limits, to arrive at the right compute sizing and to decide on indexing, shards and so on. End-to-end testing is done to perform a dry run and monitor the whole system before stamping it prod-ready.
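
To make the component and integration levels concrete, here is a minimal pytest-style sketch. The transform_record function and the in-memory FakeDatastore are hypothetical stand-ins for a real pipeline step and a real database client.

```python
# Minimal pytest sketch. transform_record() and FakeDatastore are hypothetical
# stand-ins for a real pipeline step and a real datastore client.
import pytest

def transform_record(record):
    """Toy transformation: derive a feature from a raw record."""
    return {"id": record["id"], "amount_eur": record["amount_cents"] / 100}

class FakeDatastore:
    """In-memory stand-in used to test the interaction without a real DB."""
    def __init__(self):
        self.rows = []
    def insert(self, row):
        self.rows.append(row)

# Component test: the transformation logic in isolation.
def test_transform_record():
    assert transform_record({"id": 1, "amount_cents": 250}) == {"id": 1, "amount_eur": 2.5}

# Integration-style test: the transformation working together with the (fake) store.
def test_transform_and_store():
    store = FakeDatastore()
    store.insert(transform_record({"id": 2, "amount_cents": 999}))
    assert store.rows[0]["amount_eur"] == pytest.approx(9.99)
```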

End Notes

I hope you found this article useful. I would like to reiterate that the IDDT approach is not an industry standard; it is just my way of going about architecting and building cloud applications. Below is a teaser image of an architecture that was designed to build a platform for advanced analytics in the cloud.

We always want our architectures to be future-ready and aligned with current technology trends. The above architecture includes serverless computing, a microservices architecture, running the applications as Docker containers, and ensuring consistent deployment and releases using CI and CD. The architecture has also been provisioned with monitoring tools to capture key metrics such as CPU utilization, memory consumption, response durations and message queue lengths, and these metrics are evaluated against predefined thresholds to make decisions on auto-scaling.
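
As a rough illustration of that last point, here is a minimal sketch of a threshold-based scaling decision. The metric names, threshold values and replica limits are hypothetical; in practice the metric values would come from the monitoring stack rather than a hard-coded dictionary.

```python
# Minimal sketch of a threshold-based auto-scaling decision.
# Metric names, thresholds and replica limits are hypothetical placeholders.

THRESHOLDS = {
    "cpu_utilization_pct": 75,
    "memory_utilization_pct": 80,
    "avg_response_ms": 500,
    "queue_length": 1000,
}

def decide_scaling(metrics, current_replicas, min_replicas=2, max_replicas=10):
    """Return the desired replica count and the breached metrics, if any."""
    breached = [name for name, limit in THRESHOLDS.items() if metrics.get(name, 0) > limit]
    if breached:
        return min(current_replicas + 1, max_replicas), breached
    # All metrics comfortably below threshold: consider scaling in.
    if all(metrics.get(name, 0) < 0.5 * limit for name, limit in THRESHOLDS.items()):
        return max(current_replicas - 1, min_replicas), []
    return current_replicas, []

# Example evaluation with made-up metric values.
sample = {"cpu_utilization_pct": 82, "memory_utilization_pct": 60,
          "avg_response_ms": 340, "queue_length": 120}
replicas, reasons = decide_scaling(sample, current_replicas=3)
print(replicas, reasons)  # -> 4 ['cpu_utilization_pct']
```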

That's all about IDDT, an approach to building scalable, robust distributed architectures in the cloud. Let me know if you found the article interesting, and also if you are working along similar lines. Feel free to share your views and queries; I will address them to the best of my knowledge.
