9 ERP Implementation Habits That IT Managers Need to Change to Become Agile

Large ERP implementations come with a high level of business integration, inherent design complexity, huge computing needs, a growing data footprint, and a large number of dependencies on other connected systems. They look too big to be nimble-footed. Yet to thrive in the wild, speed is of the essence, a survival skill. How to speed up this bull is the purpose of this blog.


Traditional waterfall models provide a linear but structured path to a product with good predictability of the outcome. Project requirements are established up front, followed by design, implementation, verification, and maintenance. Such projects generally end in a mega event, a big-bang go-live or a complex rollout. The project sometimes takes years to realize, and by the time the implementation is complete, the original business requirements have long since changed. This results in complex custom development and adds to the technical debt. The new business requirements that get added over time increase the gaps from standard, add customization, and add layers of complexity. The core becomes an application lifecycle management nightmare. This model also generates a large volume of documentation, which in turn becomes a huge task to maintain; eventually, most of it becomes a burden that is easier to ignore than to manage.


Let us evaluate whether the right focus on a different methodology, process, and tools can bring change. In that context, let's evaluate the Agile delivery methodology, along with a DevSecOps toolchain, for large ERP implementations, to meet the omni-speed requirements of the customer.




The Agile manifesto sets the guidance for development teams. Scrum and Kanban are two lean methodologies that we will adapt. Without going into the details of how Scrum and Kanban work (documentation on them is abundantly available on the internet), we will study what happens when we adopt this methodology in an ERP implementation.


1. Iterative Delivery approach enables prioritization, proving value quickly (weekly, or even daily), while significantly reducing project risk. As opposed to the typical months-long waterfall delivery schedule, which waits until the end for user acceptance, Agile responds to specific, evolving needs in an iterative, continuous way.

 

2. Cross-functional Individuals are desired who can define, build, test, and deploy an increment of value. The interactions of team members focus on achieving a common purpose rather than working together merely to comply with processes and tools. The team itself is a self-managed, flat team called a squad, as opposed to a traditional pyramid that works in planned silos. The traditional project manager is now a scrum master, who must radiate the spirit of a servant leader.

 

3. Delivery Squad Teams in ERP projects are cross-functional, a term that has historically, and confusingly, referred to cross-module skills (for example SD, FI, MM, etc.). In the context of an Agile team, cross-functional refers to the various functions performed by the project team: requirement gathering, process discovery, fit-gap, configuration, development, documentation, training, etc. We typically build several such cross-functional sub-teams, each focused on a small number of systems or modules. The main objective is to build teams that are as self-sufficient as possible and rarely need to step outside of their own team to accomplish a task.

 

4. Building Viable Products in small iterative chunks allows early validation of the solution at hand. The team focuses on iterative demonstration, which allows early feedback on product viability and keeps attention on value add-ons. Instead of always thinking about the end state, the teams should continuously ask what value they could bring to the customer. The product owner manages the product backlog and works with process owners to help prioritize the features to work on.

 

5. The Statement Of Work should be written in such a way that it encourages collaboration between the business, product owners, and scrum teams. It has to have the flexibility to accept requirement changes and allow minimum viable products that can be demonstrated to the business early. It should allow incremental value in iterative successions. While structured contracts are needed, the topic of discussion in governance should be value realization, not contract negotiation.

 

6. Quality Assurance needs process documentation to be in place. Because Agile focuses more on collaboration than on process adherence, it may appear to ignore this critical quality aspect, but that is not true. The ART (Agile Release Train) ceremonies ensure that quality control happens. Iterative development brings the right balance of documentation, quality gates, performance metrics reviews, and backlog prioritization. It is true that Agile doesn't encourage very detailed blueprint documentation upfront, but its DoR (Definition of Ready) and DoD (Definition of Done) exit criteria account for the documents needed per software development lifecycle requirements.

 

7. ART Ceremonies of the Agile Release Train focus on allowing the team to communicate and improve the product alongside its development. Thus, instead of one big planning exercise at the beginning of the project, the team plans each sprint in iteration. They continuously get feedback and re-plan, invariably more often than in waterfall, but because they do it frequently and for short sprints, they can adapt to changes easily.

 

8. DevSecOps Toolchain makes IT more responsive to business needs by connecting cross-functional, multi-skilled teams in a collaborative ecosystem that integrates development, testing, quality, operations, security, and business stakeholders throughout the project lifecycle. This, in turn, supports continuous testing, integration, delivery reviews, and deployment of stable, high-quality software, guided by continuous feedback and automation. This supports shift-left and brings variable speed to value realization to meet various customer needs.
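The gated flow described above can be sketched in a few lines. This is a hedged illustration, not any specific toolchain's API: the stage names, the `change` fields, and the checks are all assumptions chosen to show how a pipeline blocks a change at the first failing quality or security gate.

```python
# Hypothetical DevSecOps pipeline sketch: a change must pass every
# ordered stage (including the shift-left security checks) before it
# is released. Stage names and checks are illustrative assumptions.

def run_pipeline(change, stages):
    """Run a change through ordered stages; stop at the first failure."""
    for name, check in stages:
        if not check(change):
            return f"blocked at {name}"
    return "released"

stages = [
    ("unit-test",   lambda c: c["tests_pass"]),
    ("code-scan",   lambda c: c["vulnerabilities"] == 0),  # shift-left security
    ("integration", lambda c: c["integration_ok"]),
    ("compliance",  lambda c: not c["sod_conflicts"]),     # e.g. SOX conflict check
]

change = {"tests_pass": True, "vulnerabilities": 0,
          "integration_ok": True, "sod_conflicts": False}
print(run_pipeline(change, stages))  # → released
```

The point of the sketch is the ordering: a security or compliance failure stops the change just as decisively as a failed unit test, which is what embedding security in the toolchain means in practice.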

 

9. Delivery Organization needs to be remodeled into a Tribes-Squads-Chapters matrix model. A large amount of organizational change management is required when adapting to Agile. This adaptation is itself iterative and is necessary for an organization to go from onboarding, to doing, to becoming, and finally to the end state of being Agile. This is a cultural change.



Without DevSecOps, deploying even small changes to SAP systems can be time-consuming, often resulting in long release cycles, complex application lifecycle management, and increased risk. Lack of proper software tooling brings inertia to the workflow and adds a large overhead to operations, thereby creating friction and reducing customer satisfaction. With DevSecOps, teams are connected, processes are far less error-prone, and product backlogs are visible.


In an ERP environment, DevSecOps built on an Agile framework enables one to fail early and respond to change quickly, delivering requirements at the speed the business needs. Stakeholders are engaged early on and are part of the development cycle. Business process owners become part of the build process and stay engaged throughout the application lifecycle. Demonstrating the impact of changes early reduces the business risk those changes bring. Splitting releases into smaller batches brings visibility and control to an otherwise complex change and release management process, relieving the stress of big-bang deployments. Testing is continuous, and it encourages a culture of automation: with iterative sprints, repeated tasks get automated. Temporary workarounds in application maintenance are avoided, as a permanent fix can be moved early thanks to the lean, omni-speed delivery process.


Together, Agile and DevSecOps provide IT teams with the means to deliver change quickly, in response to customer omni-speed demands, creating sustainable 21st-century delivery teams for large ERP deployments. Are you ready to relook at how you manage your delivery today? 







Related reading materials from the author:

Implement Tribes-Squads-Chapter Matrix Model For Your Organization

DevSecOps - Embedded Security With Omni-Speed DevOps
DevOps Value Chain for SAP Estate
DevOps in a 21st Century Enterprise IT Estate


Transforming Application Development Teams to a Factory Style Agile DevOps for ERP


This article analyzes how one redesigns an ERP Application Development team into an agile, factory-style DevOps team.


Let us begin by defining the factory vision statement:

Application Development Factory will deliver an industrialized way for the realization of custom development for ERP Digital Core and surrounding Ecosystems in a repeatable and cost-effective way, maintaining quality and integrability. The scope of service will include technical work in the Realization or Build phase in a System Integration and Application Development Project.


Defining the Scope


List your capabilities. Look at your historical data. Pick multiple sample projects that were successfully delivered. Break each project cycle into individual phases. Map the people and skills deployed in each phase. This will identify the capabilities. Map the capabilities in a 2D matrix of usage adaptation and impact. Everything rated P1 or P2 is a candidate for adoption as a Capability Tower. P1 is the priority on which the pilot phase will be executed. P3 capabilities are sparingly used and sometimes require specialized skills. They are not a good fit to be treated as independent Capability Towers, but should be considered a Subject Matter Expert (SME) pool. A Capability Tower works as a factory, equipped with many hands of similar skills. The towers are the factory. Their deployment to an Epic is decided by demand forecast. Capability Towers carry utilization targets. Each Capability Tower has a Capability Leader who is responsible for the capability and capacity functions. SMEs are investments, and thus preferably multi-skilled and deployed on requisition. They are special forces, paradropped into a Story for specific purposes. The delivery factory ecosystem is led by a Service Delivery Manager (SDM).


Impact \ Usage    HIGH    MODERATE    LOW
HIGH              P1      P2          P2
MODERATE          P2      P2          P3
LOW               P2      P3          P3

Matrix to Prioritize Capability Tower Investments
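The matrix above is small enough to encode directly. This is a minimal sketch of that lookup; the level names and the P1/P2/P3 codes come straight from the matrix, while the function name is an assumption for illustration.

```python
# Encode the usage/impact prioritization matrix as a lookup table.
# P1 = pilot candidate, P2 = follow-on Capability Towers, P3 = SME pool.
PRIORITY = {
    ("HIGH", "HIGH"): "P1",
    ("HIGH", "MODERATE"): "P2", ("MODERATE", "HIGH"): "P2",
    ("HIGH", "LOW"): "P2",      ("LOW", "HIGH"): "P2",
    ("MODERATE", "MODERATE"): "P2",
    ("MODERATE", "LOW"): "P3",  ("LOW", "MODERATE"): "P3",
    ("LOW", "LOW"): "P3",
}

def capability_priority(usage, impact):
    """Map a capability's usage and impact ratings to its priority."""
    return PRIORITY[(usage.upper(), impact.upper())]

print(capability_priority("high", "high"))  # → P1
```

A capability assessed at high usage and high impact, such as code development in most software programs, lands in P1 and becomes the pilot factory.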


Prioritize Capabilities


It is important to test that a factory-style model will work for your organization, through a pilot. This requires prioritizing one capability area to invest in. A big-bang adoption of all capabilities is generally too fast a change and fails, as there is too much inertia to traverse. It is better to start small, bite-size, with a pilot factory adoption. Start where you have high usage adaptation and high impact. Selecting a high-impact scope is important to stress-test the tools, processes, and change management. One should thus run the pilot with P1 capabilities. The impact is too critical to fail, and if this model is going to fail, let it fail early. Once the pilot capability factory is up and running, the other P2 capabilities can follow the model swiftly. For most software-related programs, code development is the capability that is most used and has the highest impact, but this is not a rule.


Process Adaptation


Let us take the code development factory as an example to understand the process adaptation. Requirements for multiple programs will come in, all together or spread out over time. Each requirement is captured as an Epic. The factory architects work on defining the Features and Stories and building the backlog. The development team works in sprints, does unit testing, and promotes the build to the Quality system for integration testing. Once the system integration test is done for the configurable item, a customer demo is held. The story is then sent for user acceptance and planned for release or deployment. Defect management, backlog, debt, and burndown tracking need to be done with a tool. A developer focuses on developing the enhancements or features while the tester performs the integration test. Deployments in ERP estates are done via cutover go-lives. The project ends with a transition to the support team in operations. Knowledge management is key: assets are harvested and estimates are improved. The diagram below pictorially represents this idea.
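The Epic-to-release flow described above can be sketched as a small data structure. This is an illustrative model only; the state names and class shapes are assumptions, not any particular backlog tool's schema.

```python
# Sketch of the factory flow: an Epic holds Stories, and each Story
# moves through the delivery states described in the text.
from dataclasses import dataclass, field

STATES = ["backlog", "sprint", "unit-tested", "integration-tested",
          "demoed", "uat", "released"]

@dataclass
class Story:
    title: str
    state: str = "backlog"

    def advance(self):
        """Move the story to the next delivery state, if any remain."""
        i = STATES.index(self.state)
        if i < len(STATES) - 1:
            self.state = STATES[i + 1]

@dataclass
class Epic:
    name: str
    stories: list = field(default_factory=list)

    def burndown(self):
        """Count of stories not yet released."""
        return sum(1 for s in self.stories if s.state != "released")

epic = Epic("Billing enhancement", [Story("Invoice layout"), Story("Tax rule")])
epic.stories[0].advance()   # "Invoice layout" enters the sprint
print(epic.burndown())      # → 2 (nothing released yet)
```

The burndown here is exactly the metric the tooling paragraph asks for: a count of work remaining, recomputed as stories advance through the states.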



Agile Factory Delivery Flow




Defining the Scope


Continuing with the code development capability, we see that this capability gets engaged in various kinds of activities. Some of them are listed below.
Configuration
Documenting the solution, including business process models
System change management and transport management
Data loads and data migration
Testing
Delivery of training and enablement
Managing the project work
Extending the solution
Integration

Again, to address this, we need to break each activity down into its individual artifacts. This will help prioritize the adoption, and for a large team this exercise is especially useful. The priority actions are the minimum the team should start with, followed by the follow-up activities. The chart below is representative of one such assessment. Every scope assessment story will be unique to its organization.


Representation of Capability Scope Assessment



Qualification of Factory Work Package




Eventually, everything in the organization that needs to be built by the Application Development team should go through the factory. But until then, while we are in transition (which could take many months to stabilize), we need a mechanism that stands in as a gatekeeper. This stopgap is called Solution-led Due Diligence.

Solution-led Due Diligence will determine and qualify the work packages to be delivered by the factory-led delivery team. A factory-assigned architect needs to be part of the diligence team, and the diligence exercise is signed off by the service delivery manager of the factory. The factory will define the entry and exit criteria for every phase, adapt the factory toolchain, and conduct quality gates.


Solution-Led Diligence Approach



Deployment of Virtual Squads




The Factory PMO will deploy virtual squads comprising a manager, an architect, subject matter experts, developers, and testers. The factory will maintain a team based on Capability Towers. Monitoring and measurement of backlog and debt will be done by the project manager assigned to the engagement.

Demand forecasting will be done, and the Capability Towers will staff up for it. Resource utilization will be the key measure, and job scheduling will be managed centrally. A program management office will be there to support the team in operations: to monitor, measure, and manage.
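Since utilization is named as the key measure, a small worked example helps. This sketch is hypothetical: the hours, the 85% utilization target, and the function names are illustrative assumptions, not figures from the text.

```python
# Illustrative utilization arithmetic for a Capability Tower:
# towers carry utilization targets and staff up against a forecast.

def utilization(billed_hours, capacity_hours):
    """Fraction of tower capacity actually deployed to work."""
    return billed_hours / capacity_hours

def headcount_needed(forecast_hours, hours_per_person, target_util=0.85):
    """People needed so the forecast lands at the target utilization."""
    return forecast_hours / (hours_per_person * target_util)

print(round(utilization(680, 800), 2))        # → 0.85
print(round(headcount_needed(1360, 160), 1))  # → 10.0
```

The second function is the staffing decision in miniature: given a demand forecast, the tower sizes itself so that utilization stays at, not above, its target.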

Applications Development Factory Organizational Structure with Virtual SQUADS


An Agile delivery factory workflow is discussed in the blog Agile Development Factory For SAP.


Application Development Factory Roles




Find below an indicative table with various roles in the application development factory and their responsibilities.

Factory Roles and Responsibilities




Conclusion




The constituents of any Application Delivery team are Process, People, and Tools. Traditionally we staff up a project to meet the purpose, and the program management office focuses on the net outcome. We overlook, or do not prioritize, the right kind of focus on the utilization of our investments, be it people or tools. Over time the customization becomes legacy, grows too complicated, and gathers lethargy. To address the needs of 21st-century IT, we need an omni-speed IT.

People bring business knowledge, skill, and continuity to a program. No process can replace people. While we can automate, shift left, and fail early, people are still pivotal to any execution. Talent is also a huge cost, and over time its fullest utilization can speed up value realization many times over.

Tools are the foundation of any project team. They are also the easiest to manage, as tool OEMs keep pace with delivery methodology. Adapting a tool in the organization is about tailoring or configuring it to the delivery process, which is not a big challenge. Its adoption and successful usage determine its success.

Process changes are the most difficult. While a delivery unit can adapt to the change, adoption does not always reach the desired outcome. Thus we looked into how to redesign the process: here we studied a typical agile delivery process and how we adapt our application development teams to it at scale.



---x---

DevSecOps - Embedded Security With Omni-Speed DevOps

DevSecOps is a way of IT Application Lifecycle Management (ALM) in which the security aspect is embedded in the DevOps software delivery framework. DevOps (Development and IT Operations) was an application-centric discipline, whereas SecOps (Security Operations) was mapped to the infrastructure-centric discipline. DevSecOps embeds security practices early in the software lifecycle, integrated with the IT estate's security needs across platform, storage, cybersecurity, information access, and data flow.


In the previous articles, DevOps Value Chain for SAP Estate and DevOps in a 21st Century Enterprise IT Estate, we saw the various needs of an IT estate. Landscapes are diverse and expansive, and they need a nimble process to meet the varied demand for change: omni-speed IT. This speed of change can at times result in overlooking critical security controls, leaving the business vulnerable. Thus came the need for making security a 'must have' requirement for any software application development. In this document, we will explore what it takes for a DevOps setup to become DevSecOps.


In traditional security operations, the primary focus used to be on infrastructure domains like platforms and networks: external attacks over the network, physical security of premises, operating systems, firmware, etc. Application-level security, such as Identity and Access Management (IDM), Governance, Risk & Compliance (GRC), and roles and authorizations, was managed as a division within application support. A few areas around IT controls and compliance were at times left reactive, centered on audit risk mitigations. Thus application security was cyclical, not continuous. Some key vulnerabilities, like data in motion, rogue behavior of staff, loopholes in software code that could compromise integrity, or the granting of excess access, were overlooked. In ERP applications, code-level security, or even a security design approach in an implementation, is often an afterthought. Pick any analyst report and search for the top priorities of a CIO, and security will be in the top 3. This has created the need to relook at an IT estate through the lens of security for the right treatment. Security needs thus go beyond user access and authorization.


The infographic below explains DevSecOps: how we embed security in the application development cycle. This software application security is integrated with the infrastructure security operations of the IT estate.


DevSecOps








What are the phases of DevSecOps?


We will consider here the differential changes to an existing DevOps setup. DevSecOps has the same six distinct phases as DevOps, with a twist.



The journey starts with planning. Any change, whether through a new requirement, a modification, or an enhancement of existing functionality, starts with this phase. When a requirement is captured, it has to be validated and its purpose understood. The lifecycle of the requirement, its access needs, who is authorized to view it, the data it accesses, stores, or modifies, the information it transfers, the criticality and sensitivity of the data it works on, its compliance with regulations, and so on all have to be mapped. Your template for requirement specification documents should include a section for security requirements, and it should be filled in.


In the design phase, the requirement is mapped to an IT specification. Traditionally, workshops are held for new and complex designs with the help of process consultants and business analysts; even a simpler requirement goes through at least minimal scrutiny by a process consultant. When a process is designed, a security consultant now needs to be included as well. This consultant maps the requirement to security actionables. The consultant needs to understand not only role and access design but also the Center for Internet Security (CIS) benchmark best practices, Open Web Application Security Project (OWASP) standards, International Organization for Standardization (ISO) standards, the General Data Protection Regulation (GDPR), the Sarbanes-Oxley (SOX) Act, the International Traffic in Arms Regulations (ITAR), IT Infrastructure Library (ITIL) standards, various Personally Identifiable Information (PII) requirements, country-specific compliance regimes, etc. They will also need to identify security requirements that go beyond the application, into the infrastructure domain, and engage the right architects. The technical design document deliverable will have a section on security design.


The realize phase is when the build and testing of the requirement take place; the software takes its shape. It is very important to embed security principles here. For example, all code development needs to be evaluated through code vulnerability assessment and undergo testing, and all configuration changes have to meet compliance and mitigate any identified risk. Security testing such as role-based testing needs to be augmented with penetration testing, automated vulnerability checks, encryption, certificates, etc. Security designs should follow least privilege: even an authorized person sees only the minimum data needed to complete the task. Logging and alerting need to be enabled, as those are the hooks that integrate the solution with the Security Operations Center (SOC). Security Quality Gates need to be enabled at every handshake of the application lifecycle.
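A Security Quality Gate of the kind mentioned above can be pictured as a simple severity-threshold check over scan findings. This is a hedged sketch: the severity levels, finding fields, and the `block_at` default are illustrative assumptions, not any scanner's real output format.

```python
# Sketch of a realize-phase Security Quality Gate: aggregate scan
# findings and block the handover if any finding meets or exceeds
# the configured severity threshold.

SEVERITY = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def security_gate(findings, block_at="high"):
    """Return (passed, blocking_findings) for a list of scan findings."""
    threshold = SEVERITY[block_at]
    blocking = [f for f in findings if SEVERITY[f["severity"]] >= threshold]
    return (len(blocking) == 0, blocking)

findings = [
    {"id": "SQLI-01", "severity": "critical"},  # e.g. from a code scan
    {"id": "HDR-07",  "severity": "low"},
]
passed, blocking = security_gate(findings)
print(passed)  # → False
```

Running the same gate at every handshake, build to quality, quality to pre-production, pre-production to release, is what turns a one-time scan into a continuous control.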


In the deployment phase, the work package is prepared and released into the live environment. Based on the nature of the application product, it could be a major release, a minor release, or a new solution cutover. Whatever the scenario, the work package that will be transported or compiled needs to be statically tested for security vulnerabilities, and compliance checks, such as SOX segregation-of-duties conflicts, must be run before deployment. This is the last gate before code goes live. The lifecycle of the requirement has now reached the go-live state and enters maintenance; any security defect found hereafter is costly to repair and remediate. Security design and configuration documents are handed over to the support team, and the system is hooked up to the SOC.


In the run phase, the software application is maintained and observed to be working as per the requirements. System alerts, logs, and user behavior are recorded and analyzed. Regular and dynamic vulnerability checks are conducted. Applications are also continuously monitored for newly emerging risks or vulnerabilities, the need for software patching or upgrades, user access, licenses, data access, etc. Any threat needs to be dealt with immediately. SOC operations can be autonomous or manual, but they must be effective. The security runbook is updated regularly and all incidents are recorded.


In the improvement phase, the software application is monitored, measured, and managed through the active engagement of architects and business analysts. Process improvement is continuous for any evolving application; automation and overhead reduction are further drivers of optimization. Often these exercises lead to further requirements, which could be new, modified, or enhanced changes. A continuous improvement process needs to be set up that collates the findings, looks at the holistic picture, and proposes opportunities to improve. IT security audit findings and observations also lead to changes and mitigations. These plans start as requirements and feed back into the requirement process, which begins with planning. Thus the application lifecycle comes full circle.






How do we implement DevSecOps?


To implement DevSecOps, an IT estate or an IT service provider needs to focus on four areas: governance, practice, process, and tools.



Governance needs to focus on establishing an operations and compliance framework that looks deep into IT security requirements, business requirements, and the future roadmap. The group needs not only inputs from inside the organization but also awareness of changes in the external ecosystem. The existing change and release mechanism must be revisited to accommodate the additional checks and balances before an approval is signed. The governance model must take accountability for the events, logging, and controls that get monitored and analyzed. This group has to have visibility beyond the applications, into the infrastructure and the entire span of the security scope.


Practice has to be developed so that practitioners are self-aware of the need for DevSecOps. Learning and training are a must for awareness. Everyone should understand the important role each person plays in the chain of events, what the risks are, and how they are mitigated. The fort is only as strong as its weakest wall. Developing the right skills and the right behavior is required, and the competency and capacity need to be built up.


Process changes are a must, starting with security alerting and tracking. Although this is not new in the infrastructure space, for many in the applications space it will be altogether new. System and audit log-based triggers and tracking bring greater scrutiny, and the same data is used by the Security Operations Center for monitoring and alerting. Many ERP applications, and other applications that sat behind secured networks, did not spend resources on code and configuration vulnerability checks; this will need to be enforced.


Tools will play the most important role, from tracking to corrective measures. Software code vulnerability assessment products have to be embedded in the ecosystem. Test automation is a must. Governance, Risk and Compliance software will play a pivotal role in housekeeping and tracking changes. These tools need to be integrated with the change and release tools and processes, and a roadmap should be developed for onboarding them.


Conclusion:

DevSecOps is an integral part of the software development lifecycle and needs to be treated with importance; it belongs in the top 3 of any IT leader's charter. It requires behavioral change in approach, treatment, and adoption. The change management involved is not very complex in itself, but it adds to an already complex web of change and release processes.


Did you know the toolchain bundled within your SAP Enterprise Support holds the key to a successful and effective DevSecOps for your SAP estate? To look at these building blocks, do read Extending DevSecOps to your SAP Landscape


-- x --

Extending DevSecOps to your SAP Landscape

IT managers would at one point wonder 'How do I transition my SAP Ecosystem to an effective and efficient DevSecOps compliant landscape?'


In the earlier post, we read about how one can transition to DevSecOps. But an SAP landscape has some inherent challenges to the industrial definition of DevSecOps. It has multiple development environments that integrate with almost all technology platforms, and a business process transcends multiple systems, both SAP and non-SAP.


Did you know the toolchain bundled within your SAP Enterprise Support holds the key to a successful and effective DevSecOps for your SAP estate? 






The Challenge:

Let us understand the challenges in an SAP estate and how we can help teams adopt agile and DevOps approaches.
1. Not Continuous: Not continuously integrated or deployed. Typical releases are monthly; at best, only emergency changes move daily. There is no simple rollback approach, as deployment integrity is part of the software architecture itself.
2. Complex Maintenance: Every major deployment has multi-layered integration and security needs. Outages and downtime carry high risk and business impact, making lifecycle management a complex, orchestrated task.
3. Tailored Solution: Customized solutions and functional configurations are huge investments that accumulated over time and are difficult to discard. Where fit-to-standard is not possible, tailored solutions result.
4. Cost of Ownership: Solution ownership costs, both license and infrastructure, are high. This limits the availability of parallel systems, building environments on demand, etc., which is why multiple projects run on and share the same system.
5. Practice Gaps: The SAP development and configuration process has its own methodology and has thus gathered inertia. SAP consulting, development, and release management need to adapt to a process that bridges the gap.
6. Tooling: No single solution supports all SAP DevSecOps requirements, hence the need to define a toolchain that supports the governance framework. Existing toolsets require tweaking to adapt.

The Toolchain:

Using Solution Manager as the core, below is a proposed solution inventory of DevSecOps building blocks. The tools marked with an asterisk (*) are products not bundled with Enterprise Support; they could be part of your license bundle or purchased separately. Other OEMs also provide tools to supplement these building blocks in case something is not available for use. Except for dynamic security testing and performance testing, all the products listed come from SAP as of this date.

SAP Solution Inventory of DevSecOps Building Blocks


In your DevOps landscape, most of these products may already be available and configured. In that case, a fit-gap study would be required before deployment. Even if one does not have a setup ready, a roadmap can be developed with sprints for early business value realization.


Thus the first step toward DevSecOps can be taken using the products in your SAP Enterprise Support. A foundational technology platform is available, which can be configured to scale up, scale out, and optimize as per your business needs.


--x--


---------------------------------------- Acknowledgments Of Contribution ---------------------------------

Thank you all for the thoughts and action on this evergreen topic of DevOps and DevSecOps, and for helping develop proof points and take them to customers: Vikas Goyal, Niharika Goyal, Patro Srinivas, Rajesh Dadwal, Sulagna Dasgupta, Mriganka Basak, Abhilasha Singh, Ravi Shankar Ojha, Pooja Gupta, Soumen Sasmal, Kuldeep Singh. Thank you, Kapil Pandey, as a DevSecOps architect, for guiding us all to excel in what we do. Team #SAPbyHCL #ArtOfPossible


Activity Based Costing Model for 'As a Service' Delivery Model for IT Services





An Example of Activity Based Costing Model in Application Life-cycle Management As A Service



ALMaaS Activities:
  • Maintenance: Planning, Kernel Upgrades, Kernel Packages update, GUI Rollouts
  • Delta Upgrade: Support Packages impact analysis, Legal/HR/Compliance Patches update, Regression Testing
  • EhP Upgrades: Planning, Side Effect Analysis, Upgrade Strategies, Support Pack Stack upgrade, Version upgrade, Stack Splitting, Code Correction, Testing, DBA and Infra, Interfaces, Integration
  • Migration: DC Migration, Physical to Virtual, Physical to Cloud, OS-DB
  • Refresh: System refresh, Client refresh, System Copy, Tool
  • System Build: Planning, Installations, Validation
  • Design: SAP System Sizing, Technical Architecture, HA & DR, Backup & Recovery, Bolt-ons, Bill of Material consulting
  • Expert Service: Performance Workloads Analysis, Service Assessment, Continuous Improvement, Roadmaps, Optimization and Consolidation
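To make the costing idea concrete, the sketch below totals cost per catalogue line from activity rates and consumed hours, which is the essence of activity-based costing. All rates, hours, and the three service lines chosen are hypothetical figures for illustration, not SAP or HCL pricing:

```python
# Illustrative activity-based costing for an ALMaaS catalogue.
# All rates and volumes are invented example figures.

activities = {
    "Maintenance":   {"rate_per_hour": 80,  "hours": 120},
    "Delta Upgrade": {"rate_per_hour": 95,  "hours": 200},
    "System Build":  {"rate_per_hour": 110, "hours": 60},
}

def activity_cost(entry):
    """Cost of one activity = hourly rate x hours consumed."""
    return entry["rate_per_hour"] * entry["hours"]

total = sum(activity_cost(a) for a in activities.values())
for name, entry in activities.items():
    share = activity_cost(entry) / total
    print(f"{name}: {activity_cost(entry)} ({share:.0%} of total)")
```

Costing per activity (rather than per system or per FTE) is what lets an 'as a Service' model charge for exactly the catalogue lines a customer consumes.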









Backlogs and Constraints in Continuous IT Operations


Continuous Improvement in an Omni-speed IT estate needs IT Operations Managers to re-look at what can be done differently and effectively. IT Operations has borrowed time-tested methodologies from manufacturing operations and adapted them. In this article we will focus on the science behind bottlenecks and constraints as they relate to IT Operations.

Imagine the operation of filling a bucket with water. You go to the tap, and it takes 5 minutes of continuous flow to fill it. If there is dirt in the tap filter blocking the rate of flow, you would need more time to fill the same bucket. If instead you go to a tank, dip the bucket in, and pull out a bucket full of water, the same task could be done in seconds. What does this tell you?

Let’s dig in deeper in the same scenario.

Scenario 1: Your task is to fill a bucket with water. When you go to the tap, the tap has a limit on how much water it can flow, and thus it needs 5 minutes to fill the bucket. That is its constraint. If we need to fill the bucket in seconds, we go to the tank. We achieve that by working around the constraint: skipping the tap and going to the tank. You have a choice, but you cannot change the constraint.

Scenario 2: Your task is to fill the bucket and you go to the tap. Due to dirt in the filter, the bucket fills slowly and needs more than the usual 5 minutes. This delay is a bottleneck, holding up all subsequent tasks. To still meet the time, one has to clean the dirt stuck in the filter and regain the desired flow of water. We achieve this by acting to remove the cause of the bottleneck. You have control over the bottleneck.
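The two scenarios reduce to simple flow arithmetic: fill time is volume divided by flow rate. A minimal sketch, using illustrative numbers (a 10-litre bucket and a 2 litres-per-minute tap) chosen to match the 5-minute figure above:

```python
def fill_time(volume_litres, flow_rate_lpm):
    """Minutes needed to fill a bucket at a given flow rate."""
    return volume_litres / flow_rate_lpm

# Clean tap: 10 L at 2 L/min -> the 5-minute constraint of Scenario 1.
print(fill_time(10, 2))   # 5.0 minutes

# Dirt halves the flow rate -> the bottleneck of Scenario 2 doubles the time.
print(fill_time(10, 1))   # 10.0 minutes
```

The constraint (the tap's maximum flow rate) is fixed; the bottleneck (the dirt) is the variable you can act on to restore the designed rate.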

This analogy can be applied to day-to-day operations, in IT or manufacturing, to explain why some processes build inertia and become inefficient.


The bottleneck theory helps to identify problems and create solutions for streamlined operations. Do note: when even a single step is outpaced by the rest of the overall processing, it causes a bottleneck. This is generally a supply- and capacity-driven problem. Just like manufacturing, IT also needs continuous monitoring to look out for backlog accumulation causing bottlenecks.

The Theory Of Constraint


One general tendency is to increase capacity to address a bottleneck. This would soon create a surplus if not calibrated for the right speed. Speed for an IT process is not constant, and thus for an Omni-speed IT operation we need to design for flexibility in service, where you as an operations manager can increase or reduce services on demand.


How does a working process gather inertia?      To answer this, let us look at supply. Say you have a process to service requests. The process may have multiple process handlers: manual intervention, autonomous handling, workflow, and so on. Imagine that, due to an event such as a project go-live, the number of requests increases. One situation could be that the existing process handlers are stretched. Another could be that the capacity to serve is reduced, say due to a natural calamity causing outages. Both add constraints to serving. In either case we will see accumulation at the slowest process handler in the IT operations, and the process gains inertia and no longer performs to its design. Constraints changing over time add further inertia.


Some easy quick fixes could be to identify the bottleneck and add capacity to the affected process handler. This may alleviate the current bottleneck, but it can introduce newer ones. This is where we need to do a flow time analysis of the process. Flow time is defined as the amount of time a flow unit spends in a business process from beginning to end, also known as the total processing time. If there is more than one path through the process, the flow time is the length of the longest path. Note that flow rate is an average rate, not the peak rate. Flow time analysis helps baseline your existing process design when you work on its improvement.
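The "longest path" definition of flow time can be computed directly once a process is modelled as steps with dependencies. The sketch below does this recursively; the step names, durations (in hours), and dependency graph are hypothetical examples, not any particular SAP process:

```python
# Flow time as the longest path through a process graph.
# Steps, durations (hours), and dependencies are illustrative only.

durations = {"intake": 1, "triage": 2, "manual_review": 6, "auto_check": 1, "close": 1}
predecessors = {
    "intake": [],
    "triage": ["intake"],
    "manual_review": ["triage"],
    "auto_check": ["triage"],
    "close": ["manual_review", "auto_check"],
}

def flow_time(step):
    """Longest cumulative duration from process start to the end of `step`."""
    longest_pred = max((flow_time(p) for p in predecessors[step]), default=0)
    return longest_pred + durations[step]

# Longest path: intake -> triage -> manual_review -> close = 1 + 2 + 6 + 1
print(flow_time("close"))  # 10
```

Note that the parallel auto_check branch contributes nothing to flow time; it is the manual_review branch, the longest path, that sets the baseline to improve.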


How does the Theory of Constraints help re-look at an IT process and guide its redesign?      It is a methodology for identifying the most important limiting factor (i.e., the constraint) that stands in the way of achieving a goal, and then systematically improving that constraint until it is no longer the limiting factor. It is a continuous cycle involving five steps.
  • Step 1:  Identify the constraints in the process. At this stage you identify the goal and desired throughput, look out for causes, and take inventory of your tools and processes.
  • Step 2:  Decide how to exploit the system's constraints. Identify the methods to be used and decide how to maximize throughput. The method should be evaluated against the goal of the improvement.
  • Step 3:  Subordinate everything else to the decisions of Step 2. Focus all available resources, tools, and processes on remediation of the constraint.
  • Step 4:  Elevate the system's constraints. Bring in the changes, investments, and process improvements. At this stage, management decides whether to invest in better tools, resource training, a revamped process, etc.
  • Step 5:  Evaluate, and if the current constraint is broken, go back to Step 1. Study whether solving the current constraint created other constraints. Do not allow inertia to set in. The process has to be monitored carefully for new constraints, and the progress of the old constraint tracked.
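The five steps above form a control loop, which can be sketched in a few lines. Everything here is a simplified stand-in: the handler capacities, the goal, and the "improve" action are invented examples, and real exploitation/subordination (Steps 2-3) involves far more than adding capacity:

```python
# A sketch of the Theory of Constraints cycle as a control loop.
# Handler names, capacities (requests/hour), and the improvement
# action are hypothetical stand-ins.

def toc_cycle(process, goal_throughput, measure_throughput, improve):
    """Repeat identify -> exploit -> elevate until the goal is met."""
    while measure_throughput(process) < goal_throughput:
        constraint = min(process, key=process.get)   # Step 1: slowest handler
        process = improve(process, constraint)       # Steps 2-4: exploit, elevate
        # Step 5: loop back and re-identify; the constraint may have moved.
    return process

handlers = {"manual": 5, "workflow": 20, "autonomous": 50}
result = toc_cycle(
    handlers,
    goal_throughput=15,
    measure_throughput=lambda p: min(p.values()),   # slowest handler caps the flow
    improve=lambda p, c: {**p, c: p[c] + 5},        # e.g. add capacity at the constraint
)
print(result)
```

The key design point is Step 5 in the loop condition: the cycle never assumes the first fix is final, which is exactly the guard against inertia described above.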



How do we identify constraints and bottlenecks in operations?      A constraint specifically refers to a factor outside the operations manager's control. A machine working at full capacity represents a manufacturing constraint; similarly, a process handler working at full capacity is its constraint. Even an employee shortage can be a constraint. A bottleneck, on the other hand, refers in operations to something temporary in nature. With a few smart adjustments, bottlenecks can be eliminated. If they cannot be eliminated, they are actually constraints.


Evaluating an IT process optimization needs a detailed study: the Theory of Constraints to identify constraints, bottleneck analysis to identify accumulations, flow time analysis to determine optimal paths, and a time-and-motion study of each process handler to seek more efficient methods of execution, followed by simulations. Successful simulation runs are necessary because alleviating a problem in one process handler does not assure that a new problem will not appear elsewhere. Thus, when we run simulations, we should be able not only to address current problems but also to identify and fix potential ones.
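The risk that a fix in one place surfaces a problem elsewhere can be shown with the simplest possible pipeline model, where throughput is capped by the slowest stage. The stage names and capacities below are hypothetical:

```python
# Relieving one bottleneck can surface another: a pipeline's
# throughput is capped by its slowest stage. Illustrative numbers.

def throughput(stages):
    """Requests/hour the whole pipeline can sustain."""
    return min(stages.values())

def bottleneck(stages):
    """The stage currently capping throughput."""
    return min(stages, key=stages.get)

stages = {"intake": 30, "approval": 10, "provisioning": 18}
print(bottleneck(stages), throughput(stages))   # approval is the cap

stages["approval"] = 40                         # elevate the current constraint...
print(bottleneck(stages), throughput(stages))   # ...and provisioning becomes the new one
```

This is why the article insists on simulation before investment: quadrupling approval capacity here buys only an 8 requests/hour gain, because provisioning immediately becomes the cap.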

Operations Needs Business Process Monitoring


"Much of what Business Process Management offers is hard to measure, but incredibly valuable: process consistency, process sharing and process innovation."
                                                                                    Gartner Top View, AV-14-9243, 20 November 2001


In running operations, it is important to measure and control all critical variables in real time to ensure the smooth functioning of your core processes as designed. Processes are configured to work as per design, but due to the multiplicity of variables that feed them, they eventually get a mind of their own. Business Process Monitoring (BPM) is the answer to keep them on track.

There are many tools with rich out-of-the-box features for this. In an SAP estate, BPM is integral to Solution Manager. The tool helps document the business processes; define business and technical KPIs (Key Performance Indicators); handle alerts and events; provide guided procedures to take actions and escalate; monitor and control continuously; support continuous innovation and adaptation; speed deployment with predefined catalogs and templates; and offer out-of-the-box reporting and integration.
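Conceptually, the KPI-and-alert part of BPM reduces to mapping a measured value against configured thresholds. The sketch below illustrates that idea only; the KPI names and threshold values are invented examples, not Solution Manager configuration or its API:

```python
# Conceptual KPI threshold alerting, as done by BPM tools.
# KPI names and limits are hypothetical examples.

KPIS = {
    "overdue_sales_orders": {"warning": 50,  "critical": 200},
    "idoc_backlog":         {"warning": 100, "critical": 500},
}

def evaluate(kpi, value):
    """Map a measured KPI value to an alert severity."""
    limits = KPIS[kpi]
    if value >= limits["critical"]:
        return "critical"
    if value >= limits["warning"]:
        return "warning"
    return "ok"

print(evaluate("idoc_backlog", 120))          # warning
print(evaluate("overdue_sales_orders", 250))  # critical
```

In a real deployment these severities would feed guided procedures and escalation to the command center described below, rather than a print statement.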

The implementation journey for a BPM life-cycle is outlined below. It starts with system preparedness, followed by cyclical realization and operations. Process monitoring has been around since the industrial revolution, yet Business Process Monitoring as a standard way to operate in software engineering has not been realized to its fullest. One observation is that the business's appreciation for it is low because they do not see immediate value realization. It is thus recommended to roll out process monitoring in small groups of two to three processes per cycle. To measure realized value, it is important that each process be monitored for three cycles (or a quarter) and its improvement measured.

One of the most critical aspects is the command center, which will be the backbone of the operation, working continuously to fine-tune business and technical indicators. The Operations Command Center (OCC) will integrate with the Business, IT, Support Organizations, and Service Desk to facilitate actions and escalations.




In SAP’s Solution Manager, BPM is an integral solution built on the Monitoring and Alerting Infrastructure (MAI). The infographic below shows how the various components interact. Solution Manager ensures reliable business process flow and throughput using various tools, which bring increased transparency into the business process flow and increased efficiency in daily operations through monitoring automation and faster exception handling. It also reduces the number of incidents through early detection of errors handled by alerts. The solution is sustainable as it reuses the monitoring infrastructure for future expansion.



The graphic below shows what the key components of a BPM design workshop look like. There could be variants of this, but most would cover all five key components. The last component is the readiness of the monitoring component, shown here as Solution Manager, but it could be any other selected architecture.



One of the most important aspects of sustaining process monitoring is setting up an operations command center. This is a physical location, with digital displays and a team of operators manning it 24x7. It is a capital expenditure and needs to be planned up front, as it involves civil work and should be extendable to support future rollouts.

The BPM Lead Manager is accountable for monitoring, measurement, and control by her operators. She tracks and reports benefit and value realization. She is responsible for continuous improvement initiatives and the continuous adaptation of newer process monitoring. She is the coordinator between Business and IT, and the subject matter expert.

Command Center Operators are responsible for monitoring the implemented KPIs, executing operations 24x7, and tracking and monitoring alert events. They follow Guided Procedures and Standard Operating Procedures (SOPs), and review and periodically update the SOP documents. They adapt new monitors and changes seamlessly into the support organization after every deployment cycle, and they escalate and communicate with the help desk and responsible teams for service restoration.


Business Process Monitoring should be part of all business operations, and by defining the right strategy, one ensures reliable flow in core business processes. Early detection of alerts and issues helps reduce cost, saves time, and brings efficiency and greater user satisfaction. This eventually reduces operations costs and brings stability to the business process landscape.






This article has been compiled by Trijoy Saikia and Kapil Pandey. References have been made to SAP and other internet sources. The MAI diagram is based on SAP Solution Manager standard material.