With that as the background, welcome to today’s Innocast, where we endeavour to continuously bring you pragmatic insights from technology practitioners at Innominds. Today we are taking up a very solid topic that I am sure will be enriching to all of my listeners. This is Sairam Vedam, Global Chief Marketing Officer at Innominds, and the host for today’s Innocast series. It’s titled ‘RAD stories, cloud tales and architecting software products and applications for the Digital Next enterprise’.
In that regard, I am delighted to have Chandra Veerappaneni, principal and Chief Software Architect at Innominds. Chandra will share some of his practical insights from his wide array of experience across technologies, platforms, industries and the companies he has worked with. Before I talk about Chandra, let me add some more context.
Through 2023, the latest IDC research says that coping with technical debt accumulated during the pandemic will shadow 70% of global CIOs, causing financial stress, inertial drag on IT agility, and forced mass migrations to cloud. That’s a sweeping statement. Along with that, Gartner projects that 25% of all technology service providers’ cloud-based service delivery initiatives will be underpinned by microservices design and deployment by 2022 – up from a marginal 5% in 2018. The other aspect that we are going to talk about is Application Modernization.
Again, IDC projects that by 2022, the accelerating modernization of traditional applications and development of new applications will increase the share of cloud-native production applications to 25%. Last but not least, IDC mentions that 78% of enterprises will partner with technology service vendors that can orchestrate various technology innovations into business use cases to drive transformation at scale. If you look at all of these predictions and projections, what comes across very solidly is that businesses are mandating the move to cloud, businesses are looking at modernizing their application stacks, businesses are making sure it is a multi-cloud journey they embark on, and businesses are also making sure that rapid application development, prototyping and the proliferation of low-code no-code solutions will be part of their enterprise IT strategy as they chart their Digital Next initiatives.
Here at Innominds, we are focused on ensuring that our global customers – whether enterprises, independent software vendors and software product companies, or OEMs – work with us to embark on their Digital Next initiatives, leveraging our deep integrated expertise in cloud, cognitive analytics and devices, which we call devices, apps and analytics. And I’m so excited to bring in Chandra today.
Chandra has about 23 years of experience in the IT industry, predominantly in the areas of software development, and currently drives a whole lot of practices at Innominds across the digital and applications landscape. He is a principal and Chief Software Architect. Among the initiatives he drives at Innominds is the architectural development of the upcoming iSymphony platform, a home-grown rapid application development, low-code no-code platform with distinctive differentiators. He is also responsible for driving the framework and implementation of various accelerators and reusable components for our digital assets library, along with driving the cloud-agnostic application development initiatives at Innominds, which include the design and development of cloud-agnostic accelerators for common services across multiple clouds – for example, storage, search, notifications and messaging. He is a proven expert across various cloud platforms and components, ranging from Microsoft Azure to Google Cloud, and the public cloud deployments that we embark on.
Prior to joining Innominds, he worked in diverse organizations, from building virtualization solutions to in-depth stints at global majors, working across the Siemens PLM product line and at companies like Tata Consultancy Services and CA Technologies in the areas of PLM and Business Intelligence. He previously managed an entire business intelligence group responsible for delivering BI solutions, including reporting and dashboards, and has also developed and delivered several features for Teamcenter Engineering, a top-ranked PLM product used by global majors across diverse industries. He is very passionate about product development, and his expertise lies in application software development using an array of web technologies and modern-day open-stack and Java technologies. He is absolutely passionate about sharing his insights and teaching, and has conducted several sessions on various technologies, ranging from Java and JEE to microservices, containerization, orchestration and cloud. Without any further delay, I am going to have Chandra here. Chandra, welcome to the show today.
Chandra: Thanks, Sairam for having me in this podcast.
Sairam: Thank you very much. Let’s start today. I am so thrilled to be talking to you, Chandra. Would you help us understand your overall approach to application modernization, particularly the migration from monolithic to microservices implementations and the iModernize accelerator that you drive at Innominds? And in the process, what is more important is the challenges that you see in implementing microservices, some of the trends in microservices that you would recommend, and also some of the recent implementations that helped Innominds customers achieve application modernization.
Chandra: When it comes to Innominds’s own framework, called iModernize, we utilize this framework to help customers transform their legacy applications over to a modernized architecture. The reasons why customers would like to embark on that could be the time taken to add new features, the time taken to test the entire application whenever new capabilities are added, longer deployment and release cycles, plus dated technology stacks. Those could be a few reasons why customers typically embark on their digital transformation journey and transform their legacy application over to a modernized architecture.
And that’s where I’d say the iModernize framework definitely helps, so let me describe what we really support as part of it. When we look at the latest and greatest technology stacks, and also given the current pandemic situation, most of the solutions that we now implement are moving over to a Software as a Service model. Supporting that is an inherent capability within our iModernize framework, which means that if a customer wants to build a SaaS-oriented solution, we look at all the basic offerings of SaaS. So, what are those?
Multi-tenancy, because when you move over to a Software as a Service model, an inherent capability that you need is multi-tenancy, plus billing and subscription, and rate limiting. If you are exposing APIs that can be consumed by downstream customer applications, then obviously some amount of API management – rate limiting, throttling, all of that – needs to be accommodated for sure. This is one inherent capability that we provide as part of our iModernize framework. The next thing is obviously cloudification.
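To make the rate-limiting and throttling point concrete, here is a minimal token-bucket sketch in Python. It is an illustrative outline only – the class and parameter names are my own, not part of the iModernize framework:

```python
import time

class TokenBucket:
    """Simple token-bucket rate limiter: each tenant or API key gets its own
    bucket, refilled at a fixed rate up to a maximum burst capacity."""

    def __init__(self, capacity, refill_rate, clock=time.monotonic):
        self.capacity = capacity        # max tokens the bucket can hold
        self.refill_rate = refill_rate  # tokens added per second
        self.tokens = float(capacity)
        self.clock = clock
        self.last_refill = clock()

    def allow(self, cost=1.0):
        """Return True if the request may proceed, consuming `cost` tokens."""
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.refill_rate)
        self.last_refill = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

# A fake clock makes the behaviour deterministic for demonstration.
class FakeClock:
    def __init__(self):
        self.t = 0.0
    def __call__(self):
        return self.t

clock = FakeClock()
bucket = TokenBucket(capacity=2, refill_rate=1.0, clock=clock)
results = [bucket.allow(), bucket.allow(), bucket.allow()]  # third call exceeds the burst
clock.t = 1.0                                               # one second later, one token refilled
results.append(bucket.allow())
```

In practice an API gateway would keep one bucket per tenant or API key and reject over-limit requests with HTTP 429.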
When I am trying to break apart my monolithic or legacy application, I want to move it onto a distributed modern architecture, and the basic premise the entire world is moving over to today is cloud. So, I would definitely want to leverage cloud for various reasons.
One is probably reduced maintenance: I don’t have to really worry about maintaining the infrastructure; the cloud providers take care of that. The other most important factor is the inherent scaling that I get, so that I don’t have to invest any effort in scaling things and resources, whether it is horizontal scaling or vertical scaling – everything can be done with the click of a button. So the cloudification process is also a key capability that we provide within the iModernize framework. And as part of this cloudification, a key focus is that whatever applications we churn out follow the basic standard principles of a 12-factor application. That is one thing.
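One of the 12-factor principles – storing config in the environment – can be sketched in a few lines of Python. The variable names (`APP_DB_URL` and so on) are illustrative, not from any Innominds framework:

```python
import os

def load_config(env=os.environ):
    """Read service configuration from the environment (12-factor, factor III):
    the same build artifact runs in dev, staging and production, with only the
    environment changing. Names below are illustrative."""
    return {
        "db_url": env.get("APP_DB_URL", "postgres://localhost:5432/app"),
        "port": int(env.get("APP_PORT", "8080")),
        "debug": env.get("APP_DEBUG", "false").lower() == "true",
    }

# Simulate two environments with plain dicts instead of real env vars.
dev = load_config(env={})
prod = load_config(env={"APP_DB_URL": "postgres://db.prod:5432/app",
                        "APP_PORT": "80", "APP_DEBUG": "false"})
```

The same container image can then be promoted unchanged from staging to production, with only the injected environment differing.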
The second is that even when I modernize my application into a cloud-native app, there is this concept of vendor lock-in. We definitely do not want to be locked into a specific cloud provider, so how do I build my application so that it can potentially run across multiple clouds? That is also a key capability that we provide. Then come microservices. Breaking a legacy application into a microservices architecture, though it appears to be very easy on paper, is definitely a big process when we get down to the design, architecture and implementation.
And once we have all these microservices, things don’t stop there. The next step is: how do I deploy them into staging and production environments? How do I monitor them? There are lots of complexities that surface around the microservices area. So that’s another key aspect that we support within the iModernize framework.
And lastly, I would say iPaaS. Most of the legacy applications that we have today, if we look, have lots of ingress as well as egress points. There will definitely be chances of the legacy application talking to other systems. That is where we have this integration-platform-as-a-service kind of hook, which can integrate with multiple iPaaS systems and gives you the flexibility that even after you break apart your legacy system into a modernized architecture, the entire integration scheme of things keeps working through our integrations with the iPaaS platforms. That, I believe, is a holistic view of the iModernize framework that we have created at Innominds.
Then, coming over to the next question that you brought up, Sai, in terms of microservices. Wherever we look, microservices is kind of the new buzzword. I wouldn’t call it new, because it has been there for the last 5-6 years. We’ve been building solutions based on distributed architectures, but if you really look at 6 years back, development was okay and the tooling was okay, but it was not as exhaustive as what we have today. Also, from a deployment standpoint, the deployment toolkits that we had 5-6 years back were not as mature as what we have today.
So, when we look at microservices, the first and foremost thing we need to understand is that they may not necessarily be applicable for all use cases. We first have to do the due diligence: do microservices really fit our use case? And once the decision is yes, then we look at how the organization has to be structured.
There is a rule, or a law, called Conway’s law, which is typically applied and very relevant in the context of microservices. We need the mindset for a paradigm shift at the organization level too, because the moment you embark on a journey of building microservices, each microservice is focused around a specific business need. That particular business requirement needs to be handled by one team, and that team should comprise everything – your business analysts, your user interface developers, your back-end developers, database architects, your support guys. That entire team has to be focused around that particular microservice.
But today, if you really look at how we develop applications, the standard pattern we generally see with a very big application is that the entire user interface is done by one big team and the entire back-end by another team. And when you have these kinds of independent teams, the system you build ends up mirroring that organizational structure – which is exactly what Conway’s law says.
There are definitely friction points, and these teams may not necessarily be fixed – resources come, work on some functionality, and move on – so there are these unnecessary friction points. That is why we say that, from an organizational standpoint, there definitely has to be a paradigm shift in how you structure: one dedicated team focuses on each microservice and takes it right from conceptualization all the way up to retirement. The entire onus is on that particular team. We definitely have to go along that structure.
The other thing is, the moment you talk about microservices, the more microservices you add into your landscape, the more points of failure you are adding to the system. A traditional monolithic application executes as a single process: you have many different modules running inside the process, and one module talking to another module is merely a single function call. That is how monolithic applications work.
Now, going down the path of microservices, with all these independent microservices, how do they communicate with each other? Definitely over the wire. The moment you bring in the notion of communication over the wire, there is the possibility of a network failure happening. That is the reason why, the moment you keep adding microservices into your landscape, you are adding points of failure into the system.
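A common way to soften the network failures described above is to retry remote calls with exponential backoff. Here is a minimal, illustrative Python sketch – real systems would typically lean on a resilience library or a service mesh rather than hand-rolled code:

```python
def call_with_retry(operation, max_attempts=3, base_delay=0.1, sleep=lambda s: None):
    """Retry a flaky remote call with exponential backoff.
    `sleep` is injectable so this example runs instantly."""
    delay = base_delay
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts:
                raise              # exhausted all attempts: surface the failure
            sleep(delay)           # back off before trying again
            delay *= 2             # exponential backoff: 0.1s, 0.2s, 0.4s, ...

# A stand-in remote service that fails twice, then succeeds.
calls = {"n": 0}
def flaky_service():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("network blip")
    return "ok"

result = call_with_retry(flaky_service)
```

Retries only help with transient faults; for persistent failures, patterns like circuit breakers are used so that a struggling service is not hammered further.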
That brings in a lot more challenges: how do I know that the other service is running? How do I know the port on which it is running, or its IP address? And in a scaled environment, what if there are multiple instances of a specific service running – which one do I route my request to? I obviously do not want to route to a heavily loaded instance; it should always be a lightly loaded instance. All these questions really surface.

The other very important point, or challenge I would say, in the distributed microservices world is the data. Traditionally, in a monolithic application, you had a big application writing all the data into the same database, and there probably were no clear boundaries. If your application was architected well, good. But if it was not, what used to happen is that module 1 directly accessed the data of module 2, module 2 accessed the data of module 3, and all of them hit the same database.

Now, in a true microservices world, what happens? Since we want the microservices to be independent – independently deployable, independently scalable – each individual microservice talks to its own database. The moment you run into that kind of situation, with multiple microservices and every microservice having its own data, how do I establish the relationships between that data? In a traditional monolithic application, at least in the RDBMS world, we established relationships through foreign keys, one table referring to another. But now the databases are distributed and the data resides separately, so how do you establish the relationships? That is where the nature of distributed transactions comes into the picture.
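The service discovery and “route to the least-loaded instance” questions above can be illustrated with a toy registry in Python. This is a sketch for intuition only – production systems would use something like Consul, Eureka or Kubernetes Services instead:

```python
class ServiceRegistry:
    """Toy service registry: instances register with host, port and a load
    figure; callers ask for the least-loaded instance of a service."""

    def __init__(self):
        self.instances = {}   # service name -> list of instance dicts

    def register(self, service, host, port, load=0.0):
        self.instances.setdefault(service, []).append(
            {"host": host, "port": port, "load": load})

    def least_loaded(self, service):
        """Client-side load balancing: pick the instance with the lowest load."""
        candidates = self.instances.get(service)
        if not candidates:
            raise LookupError(f"no instances of {service!r} registered")
        return min(candidates, key=lambda inst: inst["load"])

registry = ServiceRegistry()
registry.register("inventory", "10.0.0.1", 8080, load=0.9)
registry.register("inventory", "10.0.0.2", 8080, load=0.2)
target = registry.least_loaded("inventory")   # routes to the lightly loaded instance
```

Real registries additionally handle health checks, heartbeats and instance de-registration, which this sketch leaves out.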
Now what if I want to perform a transaction that spans across multiple micro services?
But this entire transaction across the microservices has to be a single transaction. I can’t really use principles like the two-phase commit, because it is a heavyweight concept in the microservices world. That is where we bring in new microservices patterns, such as the saga pattern, especially for when failures occur. As long as things are working fine, great – but the moment failures occur, how do you roll back? A simple example: let’s say I am placing an order. First and foremost, I have to check whether the product is available in the inventory, in whatever quantity I am ordering. If it is there in the inventory, I have to reserve it, so that others placing an order for the same product don’t run into issues. So, I reserve the quantity. Now, I go and do my payment.
Once the payment is done, things are passed on to the downstream services. But payment is a different microservice, inventory is a different microservice, product catalogue is a different service, order is a different service, and user is a different service. In this distributed setup, when something fails – say the payment fails – what do I need to do? I need to first roll back the inventory, or maybe hold on to the inventory for a specific period of time.
After that I have to release the inventory, because the user has not proceeded with the payment. Likewise, I also have to come back and say, ‘Hey, this particular order placement has failed,’ and set the status of that order. There are a lot of these kinds of compensating operations that I have to figure out, and it is much more challenging in a microservices world compared to a traditional monolithic world.
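The compensating operations of the order example can be sketched as a simple saga runner in Python. The step names and in-memory “services” are illustrative stand-ins for the real payment and inventory microservices:

```python
def run_saga(steps):
    """Execute saga steps in order; on failure, run the compensations of the
    completed steps in reverse - the 'compensating operations' of the saga
    pattern. Each step is a (name, action, compensation) triple."""
    completed = []
    log = []
    for name, action, compensate in steps:
        try:
            action()
            log.append(f"{name}: done")
            completed.append((name, compensate))
        except Exception:
            log.append(f"{name}: FAILED")
            # Undo everything that already succeeded, newest first.
            for done_name, comp in reversed(completed):
                comp()
                log.append(f"{done_name}: compensated")
            break
    return log

inventory = {"widget": 5}

def reserve_stock():   inventory["widget"] -= 1     # forward action
def release_stock():   inventory["widget"] += 1     # compensation
def charge_payment():  raise RuntimeError("payment declined")
def refund_payment():  pass                          # compensation (never needed here)

log = run_saga([
    ("reserve-inventory", reserve_stock, release_stock),
    ("charge-payment",    charge_payment, refund_payment),
])
```

When the payment step fails, the reserved stock is released again, leaving the inventory exactly where it started – the eventual-consistency outcome a saga is designed for.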
The next thing is, the moment you have these microservices, the tooling for building and deploying also has to be robust. That is where, today, everybody is talking about containers, and they are probably a very nice fit for microservices because they give you a sandbox within which an individual service can run. That is the good part, and then when I deploy all these things, I need something that can provide all the capabilities of service discovery, load balancing and so on.
That is where orchestration systems like Kubernetes came into the picture. Kubernetes has definitely simplified how we deploy and monitor these microservices, and with newer patterns coming up, like service meshes, it is definitely helping the overall story of building and deploying microservices. These, I guess from my experience of building and deploying microservices, are some of the challenges we face.
Sai: Thanks, Chandra. Let me get to the next part. I think that was quite a descriptive introduction and a deep dive into what microservices actually mean. Those are rich insights that you shared – the ability to decouple applications, decompose them and develop them, and the emerging and new patterns you spoke about. I also understand that it has a lot to do with how cloud application development happens: concurrency comes into the picture, and many facets of modern-day scalable, distributed, robust applications rely on microservices architectures. That also shows clearly why enterprises today are racing to modernize their legacy applications to take advantage of what microservices bring. That was so nice.
Let’s get to the second part. Gartner says that by 2024, 75% of large enterprises will be using at least four low-code no-code tools for both IT application development and citizen development initiatives. That is massive.
It also says that low-code application development will be responsible for more than 65% of application development activity. In that sense, can you talk about the approach towards rapid application development and low-code no-code application development that you are driving, as well as the various RAD and low-code no-code tools that Innominds is working on as part of its enterprise RAD focus?
And what kind of competencies has Innominds forged to help enterprises take advantage of this very significant paradigm? From there we will later talk about the IP that we are building at Innominds. So, can you talk about the RAD approach and some of the enterprise RAD initiatives that are going on, so that our viewers will be keen to understand them?
Chandra: Sure, Sairam. RAD stands for Rapid Application Development – as the name itself says, how fast can I develop applications? It is not something new; it has existed for almost 20-30 years now. But what has changed between then and now? It’s not just application development per se: RAD now addresses the entire gamut of the SDLC, the software development life cycle, which means a RAD platform is like a one-stop shop for all your development needs. You get onto the platform, you start modelling, you capture your requirements, you do your design. For most applications, one of the fundamental designs you would typically do is the database modelling, or ER modelling.
Based on those requirements, users get onto the RAD platform and start designing their business domains and business models. Within those business models they specify: these are my attributes, these are my domain models and the relationships between them, these are the constraints, this is how I would like to expose my APIs, these are my pre-integration hooks or post-action hooks. So, RAD platforms give you the flexibility of modelling your entire business in a pretty much drag-and-drop fashion.
Once you build out your business models, the next step is building your entire user interface, again pretty much in a drag-and-drop fashion. You don’t necessarily need technical competency or ability, which is why we say low-code no-code platforms are also focused towards citizen developers. They absolutely don’t have to worry about the technology. All they specify is: this is my business, this is how my flows are defined, this is how the UI will look, this is the branding, and this is some amount of the business modelling we want to do. One small paradigm shift we have seen with a couple of these low-code no-code platform vendors – for example, Amazon Honeycode or Google AppSheet – is that they don’t really start with the ER model. They don’t start with tables or entities.
Citizen developers are more familiar with Excel-style terminology than with entity-relationship terminology, so users come and model their applications in a spreadsheet-driven, spreadsheet-centric kind of approach. After your application has been built, your user interface created and the interaction with the back end set up, the next step is the DevOps part: how do I enable a pipeline, how do I test it across environments, and the ultimate step, continuous deployment.
So when we look at the entire RAD story, it is not just developing your application; it also deals with capturing the requirements, doing the design, enabling collaboration, getting a live preview of what you have created, and then building and deploying with enablement for continuous integration and continuous deployment. The benefit of this approach: something you might generally build in, say, 3 months – there are definitely platforms out there that can reduce that 3 months to 1 month.
And what is in it for me? It gives me the flexibility, or advantage, of building features and taking them to market at a much faster pace. My GTM strategy is accelerated, and the feedback cycle is accelerated too, because I can take features to the customer faster, get the feedback, and come back and retrospect. My feedback cycle, my GTM strategy – everything is accelerated. That, I would say, is the biggest benefit of RAD and low-code no-code platforms.
Sai: Great. Among the enterprise RAD tools, I understand it could be Appian, OutSystems, Mendix, SAP UI5, or Microsoft Power Apps – a cross-section of enterprise-grade, low-code no-code RAD tools. What sort of competency building is currently happening at Innominds, any insights on that, and how do you think these tools differ from each other? Where do you see each of them playing? Any perspective on that would be very useful.
Chandra: We have been investing a lot in upskilling on these low-code no-code platforms. Predominantly we are focusing on Mendix, because it is a leader – if you really look at the Gartner quadrant for low-code no-code platforms, Mendix is there at the top. Of late, they are also emphasizing IoT scenarios and IoT use cases a lot. Given that Innominds as an organization is into devices, apps and analytics, it is quite obvious that we would want to build solutions across these three spectrums, and this is where Mendix can definitely play a role for us.
When we look at Power Apps from Microsoft, it is also powerful in terms of giving enormous numbers of components and enormous integration capabilities. That is another area we are investing in. OutSystems is another one, and likewise Appian, which we have also explored. With respect to extensibility, at the end of the day, when we utilize these platforms, there should also be a capability for extending the generated system – whether a code-full extension or a codeless way of extending it.
In terms of extensibility, Mendix gives us that capability, OutSystems does, and Power Apps does. We’ve also embarked on doing some more research on platforms like Amazon Honeycode and Google AppSheet, which have a completely different model of how users go about building applications.
Sai: Fantastic. So that’s a cross-functional set of low-code no-code platforms and tools that we are exploring. In that sense, it fits the bill that Innominds continues to be a vendor-agnostic digital transformation services provider where the customer’s need sits at the centre of what we do. I understand your strategy. So, let’s shift gears. I know you’re co-architecting the iSymphony platform, which I know for sure is in its early alpha, pre-beta kind of stages, though there have been some production-level deployments. More importantly, why don’t you give us a peek into iSymphony – some of the architectural differentiations, and the components of iSymphony that you see making a significant acceleration. I understand it is also for ISV acceleration and ISV RAD application development. Can you talk about why it is different, both from a technology standpoint and from a business standpoint? Where do you think it can add value?
Chandra: Sure, Sairam. As you rightly mentioned, we have this ISV category whose major scope, or goal, is to develop software for customers. Gone are the days when I could embark on a journey to build an application and deliver it 10 months down the line – that’s not what the expectations are now. From an ISV standpoint, they definitely want to develop the applications and they want to retain control: control over the code, over the technology stacks they utilize, and over the deployment as well.
Those are some of the key aspects from an ISV standpoint. If we look at all the low-code no-code platforms we have today, some of them do give out the code, but the majority of them, I would say, are more or less vendor-locked. In the sense that, let’s say I develop and build my applications on Power Apps – they are obviously hardwired to the Azure cloud. Is it possible for me to download the entire generated application, take it to a different provider, or take it into my own data centre and run it there? Probably not.
Likewise, it’s the same for Amazon Honeycode. They do give different advantages, in the sense of completely leveraging the managed services of the cloud, which definitely scale. That is exactly where iSymphony becomes a differentiating factor. For us, code generation is a first-class citizen, which means we start with code generation. And when I say code generation, it is not machine-generated code; it is still human-written code that has been templatized and caters to various scenarios and use cases. That is the biggest distinguishing factor, I would say. And what does that mean for me?
Definitely, there is absolutely no vendor lock-in. You use iSymphony, build out your models, and generate the code, and the code is there for you to download. Do you want to run it inside your data centre, or do you want to take it to a completely different provider? Yes, we provide that capability – we integrate with lots of different cloud providers.
So you can take your code and deploy it onto Azure, AWS or GCP, or, let’s say you have a Kubernetes cluster inside your data centre – yes, we can deploy your generated code there. That’s the first thing.
The second big differentiating factor is that we are not limiting ourselves to a specific technology stack. Today we are generating Java with the Spring Boot framework, and Node.js and .NET support is in development.
Hopefully, in the MVP we are going to have in the near future, we should have those technology stacks supported as well. That is a technology-agnostic way of code generation, and I would say it is definitely a big differentiating factor compared to the rest of the low-code no-code platforms.
The third factor, I would say, is extensibility itself. For example, suppose a customer is a Scala house; obviously, the existing iSymphony platform does not give code generation capabilities for Scala. In such a case, the majority of the framework is still given by the iSymphony platform – how you model your application, how you model your data models and business models, and so on. The static model, runtime model, presentation – everything is provided by the iSymphony platform.
All you do is utilize the concepts that iSymphony gives you and write out your templates for Scala, and you are pretty much done. You don’t have to write all the additional logic for interpreting the models or building your business models; all of that is taken care of automatically by the platform.
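The template-driven, human-written code generation described here can be illustrated with Python’s `string.Template`. The template and model shape below are purely illustrative and not iSymphony’s actual format:

```python
from string import Template

# A hand-written (not machine-generated) code template, parameterized over a
# simple domain model - the kind of templatized generation described above.
ENTITY_TEMPLATE = Template("""\
class $name:
    def __init__(self, $args):
$assigns
""")

def generate_entity(model):
    """Render a class from a domain model: {'name': ..., 'fields': [...]}."""
    fields = model["fields"]
    return ENTITY_TEMPLATE.substitute(
        name=model["name"],
        args=", ".join(fields),
        assigns="\n".join(f"        self.{f} = {f}" for f in fields),
    )

source = generate_entity({"name": "Order", "fields": ["id", "status"]})
namespace = {}
exec(source, namespace)                # the generated code is ordinary code
order = namespace["Order"]("42", "PLACED")
```

Supporting a new target language then means writing a new set of templates, while the modelling concepts and the generation machinery stay the same.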
And the fourth, which I guess is another big benefit: as an enterprise, you would have developed lots of applications, and there would definitely be lots of components utilized across applications – common components, as I would call them. With the tooling that we provide, you can take out your common components and assetize them as accelerators on the platform.
Which means your common components can be converted into accelerators on the iSymphony platform. The benefit of that approach is that for all downstream applications the enterprise is going to build, you have your own common components that you can utilize. These are some of the basic differentiators. From a version control system standpoint, we support GitHub and Azure Repos, with SVN in the pipeline.
Likewise, from a continuous deployment standpoint, we support all the major cloud providers, and if you are talking about on-prem data centres, we can deploy into a Kubernetes cluster. Integration is an area we are definitely focusing on: third-party integration with Salesforce and SAP is in the MVP, and collaboration is something we are also planning to have. That should give you a holistic view of the different phases of the process, right from development to retirement.
Sairam: Terrific! As I understand it, one of the significant differentiators is the ability to ship out human-generated code, unlike the machine-generated code of other low-code no-code tools. The second thing is that it is well-tested code, with integrations with tools like SonarQube. What I also understand is that you are focusing on making it integrable with third-party tools as we move forward, and that this is still in the early phase.
If I am not wrong, I also understand there are a few active customers you have done some pilot deployments for. How has the experience been so far? Can you talk about some of the benefits and use cases that you were able to solve and showcase with i-Symphony?
Chandra: We have utilized it in a few projects. One particular customer is into battery management systems, which is a rare domain. We utilized a few of the accelerators from the i-Symphony platform, whether it is user management, database authentication, notification capability, or the containerization part of it. Some of these have been utilized, and it has definitely brought in savings, at least 4 to 6 weeks of effort I would say. Likewise, we have used i-Symphony with another customer who is in the cybersecurity space.
And, as you know, in cybersecurity the code quality has to be top notch. Whatever code we generate out of the i-Symphony platform has gone through security and vulnerability testing. We follow secure coding practices when we generate this code and run it through information security testing. We have ensured that there are no vulnerabilities in the code we have created.
For this particular customer, what we utilized is the entire multi-tenancy deployment. Multi-tenancy, as I said, is not a simple thing. The requirements vary from customer to customer, tenant to tenant. In some situations, a tenant can come back and say, ‘Hey, I want my data in a completely separate database. I don’t want anybody else’s data to reside in my instance.’
There could also be mom-and-pop shops that are okay with their data being co-located with somebody else’s data, as long as the security aspects ensure that one user or tenant does not see another tenant’s data.
So in this case, you are talking about a shared-database deployment model, a dedicated-database deployment model, the concept of hierarchical multi-tenancy, and so on. All of this was something that we utilized from the i-Symphony platform, likewise user management, RBAC, PBAC, notification capability, containerization, and the CI/CD part of it. Here I think we have saved significant effort.
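To make the two deployment models concrete, here is a minimal sketch. All names are hypothetical illustrations, not i-Symphony’s actual API: a tenant either gets a dedicated database, or shares the common one with isolation enforced by tenant-id filters.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class TenantConfig:
    tenant_id: str
    dedicated_db: Optional[str] = None  # None means "use the shared database"

class DataSourceRouter:
    SHARED_DB = "shared_db"

    def __init__(self, tenants: Dict[str, TenantConfig]):
        self._tenants = tenants

    def database_for(self, tenant_id: str) -> str:
        cfg = self._tenants[tenant_id]
        # Dedicated model: this tenant's data lives in its own database.
        # Shared model: data is co-located, isolated by tenant_id filters.
        return cfg.dedicated_db or self.SHARED_DB

router = DataSourceRouter({
    "acme": TenantConfig("acme", dedicated_db="acme_db"),  # wants full isolation
    "corner-shop": TenantConfig("corner-shop"),            # okay with shared DB
})
```

Routing every query through one resolver like this is what lets the same application codebase serve both kinds of tenants at once.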
Sairam: Excellent. So, as a summary, I would say i-Symphony today is strategically positioned to accelerate hard-core technology development for an ISV developer base, for ISV product companies, with features like tenant management, in-built integrations, the strength of i-Symphony’s back end with pre-fabricated APIs, and the ability to ship out secure, well-tested code.
These seem to be compelling differentiators, and most importantly, shipping out human-generated code will be a significantly compelling proposition for ISVs to consider. In that sense you have your RAD strategy well charted out: from an enterprise standpoint you are leveraging industry-standard tools, while for ISV RAD acceleration you are bringing on i-Symphony. Great! So we have covered two significant parts today: the iModernize framework and accelerators for application modernization and microservices, and i-Symphony, a homegrown low-code no-code platform tailored for ISV product development and acceleration.
Let’s move to the third, most important, and very interesting subject. By 2022, I repeat, 70% of enterprises are expected to deploy unified hybrid and multi-cloud management technologies, tools, and processes, according to IDC. In that sense, as the cloud architect that you are at Innominds, can you give some perspective on cloud-native applications and how you are going about building them? And what challenges do you foresee in building cloud-agnostic applications? After that, we can talk about iMigrate and some of those accelerators. Can we start with some pragmatic insights around cloud-native and cloud-agnostic application building?
Chandra: Sure, Sairam. When we talk about cloud native, as the name itself indicates, it’s native to a cloud. One of the main reasons customers are embracing cloud is obviously the infrastructure standpoint: you don’t have to invest any effort in maintaining, upgrading, patching, and so on. From an infrastructure standpoint, it is definitely a faster way of spinning up. That’s one.
Second, a critical component is the scale factor. You could tweak your settings in such a way that, say, during these periods of time I want to spin up an instance, or you could tweak it based on metrics: as long as the CPU is below 80%, I am okay with a single instance, but the moment it goes beyond 80%, I want to spin up another instance.
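A toy version of that metric-based rule might look like the sketch below. Real autoscalers (AWS Auto Scaling, Azure autoscale, the Kubernetes HorizontalPodAutoscaler) express this as configuration rather than code, and the thresholds here are purely illustrative.

```python
def desired_instances(current: int, cpu_percent: float,
                      threshold: float = 80.0, max_instances: int = 10) -> int:
    """Decide the instance count from a CPU utilization reading."""
    if cpu_percent > threshold:
        return min(current + 1, max_instances)  # scale out: spin up one more
    if cpu_percent < threshold / 2 and current > 1:
        return current - 1                      # scale in when underutilized
    return current                              # within band: hold steady

# At 85% CPU on a single instance, the rule asks for a second one.
scaled_out = desired_instances(current=1, cpu_percent=85.0)
```

The hold-steady band between scale-in and scale-out thresholds is what real autoscalers use to avoid flapping between instance counts.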
Some of these key factors make customers want to move over to cloud. And when we talk about cloud native, what we are saying is that there are lots of services offered by cloud providers, and the cloud providers take care of maintaining and scaling them based on what we configure. We call these managed services. When I am developing an application that is supposed to run in the cloud and at the same time leverage all the native services, or managed services, of the cloud, that is when we term it a cloud-native application. As part of this effort, consider some of the managed services. For example, I want to upload files in my application, so where do I upload them to?
That is where S3 comes in if we are talking about AWS. Likewise, if we look at Azure, Azure has Azure Blob Storage. How your application interacts with these is that most of these cloud providers give you SDKs. You’re going to write your application, and wherever you want to upload your files, that is where you would go.
You would utilize the SDKs and upload the file to whatever is the respective storage offered by the cloud provider. Likewise, you are going to have other services, whether from a notification standpoint, a message brokering standpoint, or in terms of spinning up infrastructure. For all of these, cloud providers give you APIs. So, when I write or develop an application that leverages these native services of the cloud, I am inherently developing a cloud-native application.
At the same time, when you are developing a cloud-native application there are some fundamental principles that you have to take into account, known as the twelve-factor app principles. I won’t really get into the details, but these are some of the fundamental principles that we have to account for whenever we talk about cloud-native application development.
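As one small taste of those principles, here is a minimal illustration of factor III (“store config in the environment”): the same build runs unchanged in any cloud or on-prem environment because settings come from environment variables, not code. The variable names and defaults below are made up.

```python
import os

def load_config() -> dict:
    """Read settings from the environment, falling back to dev defaults."""
    return {
        "database_url": os.environ.get("APP_DATABASE_URL", "sqlite:///local.db"),
        "storage_bucket": os.environ.get("APP_STORAGE_BUCKET", "dev-bucket"),
        "max_workers": int(os.environ.get("APP_MAX_WORKERS", "4")),
    }

# The deployment pipeline, not the code, decides the production values:
os.environ["APP_MAX_WORKERS"] = "16"
config = load_config()
```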
Coming to your next question, Sairam: the cloud-agnostic part.
This is one of the common asks I see whenever we interact with customers. A customer would come and say, ‘Hey, I want to deploy my application into AWS’, but at the end of it they would say, ‘I don’t want to be tied to AWS’. The first statement says I want to deploy my application into AWS, and the immediate thought that goes through your mind is that you’re better off developing this as a cloud-native application suited for AWS. But at the end of it, they don’t want to be tied to AWS. The question it really boils down to is: how do I develop my application so that it can run in AWS, in Azure, or in GCP in the future?
That is what we as an organization have embarked upon: an agnostic approach. This agnostic approach works for all the services that are common across the cloud providers. If my application is trying to leverage a specific managed service in the cloud, and my application has to support another cloud provider as well, what we do is write a cloud-agnostic component, and my application only talks to this abstraction. Behind the scenes, it’s more like a plug-and-play mechanism. A simple example could be S3; let’s say I am uploading a file.
My application is never going to talk directly to the AWS SDK or the Azure Blob Storage SDK. We never do that. My application always talks to the abstractions that our cloud-agnostic components give you, and behind the scenes, depending on what has been plugged in, if you plug in the AWS SDK then your application starts writing to S3; if you plug in the Azure Blob Storage connector that we have, then it starts writing to Azure Blob Storage. So in a way your application is completely unaware of where you are writing to. That gives us full integration with a loosely coupled model. That has been our approach to building cloud-agnostic applications.
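The shape of that abstraction can be sketched as follows. Class and method names here are hypothetical, not the actual components: a real AWS connector would wrap the S3 SDK and an Azure connector the Blob Storage SDK, while an in-memory stand-in keeps the sketch self-contained.

```python
import abc

class ObjectStorage(abc.ABC):
    """The cloud-agnostic abstraction the application talks to."""
    @abc.abstractmethod
    def upload(self, key: str, data: bytes) -> None: ...

    @abc.abstractmethod
    def download(self, key: str) -> bytes: ...

class InMemoryStorage(ObjectStorage):
    """Stand-in connector; S3 or Blob Storage connectors share this interface."""
    def __init__(self) -> None:
        self._objects: dict = {}

    def upload(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def download(self, key: str) -> bytes:
        return self._objects[key]

def save_report(storage: ObjectStorage, name: str, content: bytes) -> None:
    # Application code: completely unaware of which cloud it writes to.
    storage.upload(f"reports/{name}", content)

storage = InMemoryStorage()  # swap in an S3- or Blob-backed connector here
save_report(storage, "q1.pdf", b"%PDF...")
```

Swapping clouds then means swapping the connector wired in at startup; none of the application code changes.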
There have been some instances where customers are very particular about a cloud provider and do not have any plans of switching over to another. In such a case, we definitely go for the cloud-native approach. But in situations where the customer is open to their application being deployed across different cloud providers, we do it in a cloud-agnostic way.
Sairam: Got it. So in that sense, how does iMigrate come into the picture? I know it’s an accelerator again that you have developed, along with the 6R approach to cloud. Are there any recent implementations that you can think of, and how is iMigrate playing a role there?
Chandra: As far as iMigrate is concerned, whenever we talk about migration to the cloud, there are lots of components that you have to account for. One is infrastructure migration: we look at all the existing hardware and see how we can move it over to cloud. The second is the security and compliance part: how do we secure all these infrastructure pieces that we have moved to the cloud? Third is data migration.
Even before you spin up your applications out there, you have to make sure that you take whatever data you have inside your data centres, move it onto the respective data sources in the cloud, and then do the migration. The fourth thing is application migration: take whatever applications are running and connect them to the data sources running inside the cloud. That is where the twelve-factor app principles again play a critical role. Once you are able to migrate your application, then comes continuous integration and continuous delivery enablement, and operational excellence. These, I would say, are the typical stages that have to be considered when we do cloud migration.
Across all these areas we have tooling. For infrastructure migration, we have several scripts and utilities that can sniff out your landscape and then try to provision similar infrastructure on the cloud. For security and compliance, we again have tooling that can create your VPCs, security groups, and policies; it is a configuration-driven approach, I would say. When it comes to the data migration part, instead of developing our own data migration utilities, we rely on cloud-native data migration capabilities; for example, Azure has its migration hub, and AWS has its own migration tools. So, we rely primarily on those.
When it comes to application migration, depending on the customer’s requirements, whether they want to go the agnostic route or the native route, our libraries definitely help in easing out the application migration. We also have tools that scan your entire application and can spit out network diagrams saying, hey, your application is talking to these many different kinds of resources, and give you a high-level representation saying these are all the data sources that have to be made available in the cloud before I start migrating my application. So that is also something we have.
From a continuous integration and continuous deployment standpoint, we definitely have some of our own homegrown scripts, whether we are talking about leveraging Terraform or, from a deployment standpoint, scripts on ARM, which is Azure Resource Manager, or AWS CloudFormation, depending on what cloud you want to move to and whether you want to utilize the native approach.
And operational excellence is definitely an interesting thing for several of our customers who are already on the cloud. We have developed tools, run them on their infrastructure, and identified that a customer was unnecessarily running into expenditures for resources that were never being utilized. So, from a costing standpoint, we have addressed that excellence part of it, and we definitely have several scripts that can look at your entire landscape and suggest improvements or optimizations. I guess that covers, from an iMigrate standpoint, the different areas that we cater to in terms of migration.
Sairam: Very insightful. We covered iModernize from a microservices standpoint. We went into i-Symphony from a low-code no-code standpoint. We came back to iMigrate from a cloud engineering, multi-cloud management, and migration standpoint, apart from in-depth architectural insights on how all of these technologies are playing out.
Before we call it an end, from a developer standpoint, as the Chief Architect and industry leader that you are, can you share some perspective on emerging design patterns and development paradigms for using cutting-edge technologies to build future-ready applications, whether conversational AI applications, paradigms around multi-cloud and serverless computing, or building high-performance SaaS applications? What are the architectural considerations and design patterns that you foresee developers needing to be aware of? That would be one of the last things I want to ask you today, Chandra.
Chandra: From a trends standpoint, Sairam, I would definitely say microservices with service meshes is for sure something that we have to account for. Service meshes, containerization, and orchestration are key aspects when you are trying to build a distributed, microservices kind of architecture. Another interesting thing that I have been seeing is the growth of GraphQL.
Traditionally, the way we have been exposing our capabilities to the outside world is via REST APIs. This is how we enable users to consume the data we produce. But there are definitely some challenges, especially when you are talking about the same API being leveraged by different channels. When I say channels, it could be a mobile channel, it could be the web, a smartwatch, or it could be a smart TV.
There could be different consumers of the same API, and it is obvious that you can’t return the same data for all the channels. If you go the traditional REST API route, unfortunately, you call the REST API and it gives the same kind of payload irrespective of what channel you use. That is where GraphQL definitely gives you that sense of filtering out what you don’t want and asking only for what you need, so the same API can work across multiple channels. That is a key point; I personally see GraphQL being adopted more and more going forward. That doesn’t mean REST is going to be made obsolete; both have their own strengths. But I do definitely see GraphQL being utilised more and more.
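The field-selection idea can be shown with a toy sketch. This is not a GraphQL runtime, just an illustration of the principle: each channel asks only for the fields it needs from the same underlying resource, instead of receiving one fixed REST payload. The record and field names are made up.

```python
PRODUCT = {  # what a fixed REST endpoint would always return in full
    "id": "p1", "name": "Widget", "price": 9.99,
    "description": "A long marketing description...",
    "image_hi_res": "<large binary payload>",
    "reviews": ["Great!", "Works as advertised."],
}

def select_fields(record: dict, fields: list) -> dict:
    """Return only the requested fields, GraphQL-style."""
    return {f: record[f] for f in fields}

# A smartwatch or mobile channel asks for a lean payload...
mobile_view = select_fields(PRODUCT, ["id", "name", "price"])
# ...while the web channel asks the same "API" for much more.
web_view = select_fields(PRODUCT, ["id", "name", "price", "reviews"])
```

In real GraphQL the client’s query document plays the role of the `fields` list, and the server resolves exactly that selection set.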
Also, on the conversational part you are talking about, i-Symphony has a component that can talk to chatbot or conversational solutions across multiple cloud providers. We integrate with Alexa on AWS, with the Azure bot service, and likewise with Google’s Dialogflow. Conversational interfaces are definitely another next big thing, I would say. Now, because of this pandemic, we are seeing the adoption of cloud more and more, and that makes security a paramount part.
Which means, as developers, when we write code it’s no longer going to be just regular coding. We have to ensure that we bring in secure coding practices, which are pretty decently documented, for example in terms of how you write secure Java code. And probably the last one, which I guess you are also very comfortable with, is AIOps. That’s definitely going to be another big thing.
Sairam: Sounds good. So that’s it for today. Thank you, Chandra. Here it is for all of you listeners, on behalf of Innominds’ Innocast: we heard today everything from microservices to AIOps, from cloud engineering to low-code no-code, from building secure applications to rapid application development, along with some rich insights into future trends.
More importantly, it was a very detailed, pragmatic journey of an architect, a technologist, and a disruptive, digital-business-outcome-driven leader who has seen it and done it. That is what we will continue to do on our Innocast podcast series on behalf of Innominds: we will keep coming back with more rich insights as organizations embark on their digital next initiatives. Signing off once again, this is Sairam Vedam.
I will be glad to connect you with my leaders if you have any questions. The podcast will soon be available live, and we will send out a communication for all of you to hear it. Do keep listening. For any questions, feedback, inputs, or suggestions, please write to firstname.lastname@example.org.
Thank you very much, everyone. Stay safe. Thank you once again, Chandra, for being very patient and very insightful today.
Chandra: Thank you.
Sairam: Thank you, guys.