Mistakes Not to Make When Implementing an ITAM/SAM Program
When it comes to IT and software asset management (ITAM/SAM), or any new IT project, it is critical to ensure that the program is successful the first time it is rolled out.
Without early successes, it is difficult to maintain ongoing business-unit buy-in and senior management endorsement. Support from the wider teams may also erode, leading to processes that aren't followed and tools with inaccurate data.
Therefore, finding the low-hanging fruit that can yield savings quickly is imperative. These low-hanging, hard-dollar savings opportunities are mostly process and policy driven, and should be on every asset manager’s tactical, short-term, 18-month roadmap. The longer-term, strategic projects that require time-based data can also yield concrete savings but will require automation and integrations to make them viable.
To ensure project success, learning from the mistakes others have made when implementing ITAM/SAM can go a long way toward ensuring you don't repeat the failures. With that in mind, we have identified common mistakes within the four key areas that define a successful ITAM/SAM program: process, policy, automation, and governance.
There is a natural progression through these four areas, because automated tools will not contain accurate data unless processes and policies are adhered to.
If governance isn’t addressed early on, the whole program could be based on quicksand that shifts every time a new business or IT project comes up. I call it the “follow the shiny thing” strategy. When C-level executive priorities pivot, ITAM/SAM strategy should support the new direction, but it must still focus on the fundamentals that make a corporate-wide program effective while being agile.
Governance is foundational to ensure ITAM/SAM program success
Having said all of that, let me begin by stating that governance is foundational to ITAM/SAM program success. Without senior managers (at the chief executive, VP, and director level in finance and/or IT operations) and key stakeholders in intersecting IT domains endorsing and supporting ongoing program investment, most programs are unlikely to survive beyond 18 months.
Alternatively, the program might be realigned to a different reporting structure in the hope that a different senior executive can have a positive impact. When programs are realigned this way, they can quickly lose momentum while the new executive learns what wasn't working.
To prevent this from happening, it is critical to document both the urgent problems and the "wish list" problems to be solved, and to segment them into tactical and strategic items on the ITAM/SAM roadmap. These problems should correlate to the projects that the CIO, CEO, CTO, CFO, and CISO are focused on. If digital business, security, a new ERP system, cloud applications, and/or mobile are priorities for the coming year, ensure that the ITAM/SAM roadmap focuses on how it can assist with delivery of those goals. Senior managers like to see how a shared service, such as ITAM/SAM, will enable the big-picture projects as well.
Following this suggestion to start with governance will help make your ITAM/SAM program effective and keep support ongoing, not just at the beginning. Having executive sponsorship early on is critical to maintaining buy-in as changes and disruptions occur in the business. There won't be a question about what value ITAM/SAM is returning, because it is part of the larger team delivering value at every step.
Implementing an effective ITAM/SAM program isn’t any more difficult than trying to locate and catch a Snorlax or evolve into a Vileplume in Pokémon GO, though many organizations feel that it is too complex, too difficult and too resource intensive.
Additionally, because many organizations experience only one or two software license audits a year, they don't feel the cost of an ITAM/SAM tool or a disciplined program is justified.
Investing in an ITAM/SAM tool is similar to working out at a Pokémon gym.
If one of your goals in Pokémon GO is to own a gym, which increases your stash of PokéCoins, then you need a group of team members with different strengths (e.g., psychic, fire, water) to help you defend it.
Similarly, the more work you do around ITAM/SAM process, policy, and governance, the more effective your program will be. Having a diverse team with strengths in contracts, licensing, capacity and demand management, and data analytics will help your organization easily defend against threats to the IT budget in the form of an audit, asset lifecycle or security.
Improve the combat power of your ITAM/SAM team.
Taking over a Pokémon gym by defeating another team isn’t easy if that team has many defenders with high levels of combat power.
To improve the combat power of your ITAM/SAM team, you must ensure that your tools are robust and can help the broader IT team, rather than staying siloed in ITAM/SAM. Integrations into other key data sources are essential in order to reduce the amount of manual data entry necessary to ensure the data is current.
Training your team members gives them the skills needed to solve problems; the skillset required to move from a reactive to a proactive position is essential. Metrics that focus on tactical and strategic problems will identify opportunities for savings. An example of a tactical metric is asset inventory accuracy; an example of a strategic metric is capex vs. opex asset spending.
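As a hypothetical illustration of the tactical metric mentioned above, inventory accuracy can be computed as the share of recorded assets that were confirmed during a discovery scan or physical audit (the function name and figures here are illustrative, not from any particular tool):

```python
def inventory_accuracy(verified: int, total_recorded: int) -> float:
    """Percentage of recorded assets confirmed by discovery or physical audit."""
    if total_recorded == 0:
        raise ValueError("no recorded assets")
    return 100.0 * verified / total_recorded

# Illustrative numbers: 4,850 of 5,000 recorded assets verified.
print(f"{inventory_accuracy(4850, 5000):.1f}%")  # 97.0%
```

Tracking this number month over month is one simple way to show whether processes around receiving, moves, and disposal are actually being followed.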
An effective ITAM/SAM program/tool is like owning a Pokémon gym.
Overbuying software licenses is a hedge against an audit, but an effective ITAM/SAM program doesn’t need hedges. It’s akin to “owning” a Pokémon gym and having your fellow team members acting as defenders. Everyone on the team benefits because they get more PokéCoins every 21 hours.
In the ITAM/SAM world, the “defenders” come in many forms, but the most useful ones are accurate data reports about what is being deployed, purchased, and actively used in your environment. And the PokéCoins you accrue are the facts and details needed to respond to an audit request. This frees up time that can be spent on higher-level, strategic tasks that add value to the business.
Earning PokéCoins is like saving money on your IT budget.
The PokéCoins earned can be redeemed for other purchases later, just as the savings gained by optimizing deployed and purchased software can benefit the IT budget. These hard-dollar savings from risk and cost avoidance can then be allocated later to new technology investments, thereby benefiting other IT domains or supporting the business that has a constrained budget.
Increasing your ITAM/SAM prestige by strengthening the resilience of your program means you can plan for an audit without wasting time or overestimating your budget.
The physical health benefits of catching Pokémon are similar to implementing a healthy ITAM/SAM program.
Similar to the health benefits of walking or running to hatch an egg in Pokémon GO, ITAM/SAM programs help you control application proliferation and manage lifecycles to prevent the build-up of technical debt. Usually, the more time required to hatch an egg, the rarer and more powerful the Pokémon is likely to be.
While monitoring legacy applications and reducing the number of overlapping applications takes time over the long-term, the benefits to your program are also greater.
Standardizing applications, consolidating vendors, reducing maintenance on unused applications, measuring vendor performance, and renegotiating contract clauses when the vendor repackages your applications are all opportunities to improve the health of your ITAM/SAM program and evolve to higher levels of maturity. These are not short-term tasks; they require investment to achieve the benefits.
Having an ITAM/SAM program isn’t the latest cool bandwagon to jump on, but rather a discipline that enables the business to be more agile and flexible. Evolving your ITAM/SAM program to include IT service management, security, cloud, and IoT, is akin to building up your Pokédex.
In MEGABYTE Act Recommendations for CIOs (Part 1 of 2) of this blog series, we looked at the first three of six requirements that the MEGABYTE Act of 2016 recently put into place. We then gave our recommendations for how CIOs can achieve compliance by next year.
Here, we’ll continue with requirements four, five, and six, and give our recommendations for each one.
4. Provide training relevant to software license management
Software license management as a discipline has been growing in importance in private sector organizations for many years, and there are a number of organizations that offer training on processes, policies, metrics, business management and security requirements to fulfill demand.
The most well-known software asset management certification course is offered by the International Association of IT Asset Managers (IAITAM).
Be aware that there is a well-documented ITAM/SAM skills shortage in the marketplace, so plan for extended timeframes if recruiting from outside the agency.
Recommendation: CIOs should evaluate staff to determine whether existing employees have the skillsets needed to fulfill the requirements of this new law.
This will require people who understand not only tools, but also procurement/sourcing, contracts, project and portfolio management, vendor management, performance scorecards, and enterprise architecture. All of these areas will provide feedback into software savings opportunities and lifecycle planning.
5. Establish goals and objectives for the agency software license management program
Goals and objectives that include critical success factors (CSF) and key performance indicators (KPI) are essential to designing an effective software license management program.
For example, a KPI would be the ability to respond to a software vendor audit in 30 days. A CSF would be the ability to provide a monthly or quarterly report that demonstrates compliance with the legislation.
Metrics that support these goals and objectives will show whether they are succeeding or failing. Time- and cost-based metrics will uncover opportunities for continuous improvement, but they need to be aligned with agency goals.
Recommendation: CIOs should evaluate existing metrics and the processes they support to determine if they are capable of supporting effective software license management. If the existing metrics aren’t comprehensive or are non-existent, a baseline inventory of the environment will be needed to create a starting point for the metrics.
6. Consider the software license management lifecycle phases
These phases include requisition, reception, deployment and maintenance, retirement, and disposal. They help with implementing effective decision making, as well as incorporating existing standards, processes, and metrics.
With the key stages of the asset lifecycle outlined in the legislation, building best practice processes, policies and metrics around each of these stages will be the priority.
The longest part of the lifecycle is the deployment and maintenance stage, so that is where most of the change to the software happens. Software patches, upgrades, and new releases occur over the usable life of the software, which could be anywhere from three to ten years, or longer for ERP or highly specialized applications.
Processes and metrics need to reflect the unique characteristics of each agency and not just the generic templates leveraged from the private sector. Budgets, mission, and use of outside contractors and staffing are just a few areas that will require agency-specific process design.
LANDESK offers an ITAM/SAM attainment workshop to help customers assess their current process, policy, and governance maturity. The workshop also uncovers where the holes exist in their current program and the areas where they are currently doing well.
Recommendation: CIOs should develop processes and metrics that reflect the unique characteristics of their agency and not rely on generic templates leveraged from the private sector.
With other mandates already in place around purchasing and disposal, CIOs should place focus on the process of managing the software license entitlement and ensure that they are in compliance with software contracts.
The potential savings from an effective ITAM/SAM program, even in a government agency that already has some best practices in place, could still be up to 20 percent of the management costs associated with the various assets in the first year of implementation. I've often seen ITAM/SAM programs generate enough savings to be self-funded, freeing up funds to be allocated back to technology investment.
With the proper processes, policies, and people in place, CIOs should have no problem reporting their cost savings and risk avoidance from improved software license management practices on a regular basis.
Side note: While the MEGABYTE legislation does not apply to state and local government, there are 28 representatives from states that co-sponsored this legislation. CIOs of these states (and others) should consider how they would approach getting a handle on their software licenses and what kinds of savings those might represent to their organizations.
Be sure to check out why LANDESK was named Info-Tech’s Champion by downloading our free report below!
In light of the recent news that the MEGABYTE Act of 2016 was signed into law, we wanted to outline the law’s new requirements for CIO agencies, as well as provide our own recommendations for CIOs to achieve compliance by 2017.
In this post, we’ll take a look at the first three of six requirements.
1. Establish a comprehensive inventory
This includes 80 percent of software license spending and enterprise licenses in the agency. It is done by identifying and collecting information about software license agreements using automated discovery and inventory tools.
There is no reason why a CIO couldn’t achieve a discovery rate of 97-100 percent of all enterprise and infrastructure software installed on all endpoints and servers, including custom-developed software that is attached to the network.
Outperforming the 80 percent requirement and discovering all software is not impossible.
Understanding what is running in your environment is the first step to not only managing but also ensuring that the applications meet security guidelines. Using either an agent installed on the endpoint or an agentless discovery tool that scans by IP range, it is possible to discover and build a definitive knowledge base of all known, approved applications. The data from these tools can then maintain a software asset catalog or a whitelist of approved applications.
Ideally, this discovery tool should use an agent so that it can monitor software usage. Software usage without an agent is only a point-in-time snapshot of what is running on the endpoint when the scan runs, and not daily usage monitoring. Usage monitoring on server software is not recommended because it may lead to network performance overload.
Recommendation: CIOs should ensure that they have an agent-based discovery tool that can discover all device types (mobile, workstation, and server) and can also monitor software usage.
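The catalog-building step described above amounts to reconciling discovery output against a whitelist of approved applications. A minimal sketch, with entirely illustrative device and application names:

```python
# Sketch: reconcile discovered installs against an approved-application whitelist.
# Device names and application names below are illustrative, not real data.
approved = {"Microsoft Office", "Adobe Acrobat", "7-Zip"}

# Discovery output: device -> set of installed applications found by the scan.
discovered = {
    "WS-001": {"Microsoft Office", "7-Zip", "UnknownTorrentClient"},
    "SRV-01": {"Adobe Acrobat"},
}

def unapproved_installs(discovered, approved):
    """Return {device: sorted list of installs not on the whitelist}."""
    flagged = {}
    for device, apps in discovered.items():
        extras = sorted(apps - approved)
        if extras:
            flagged[device] = extras
    return flagged

print(unapproved_installs(discovered, approved))
# {'WS-001': ['UnknownTorrentClient']}
```

In practice the `approved` set would be fed from the software asset catalog and the `discovered` map from the agent or agentless scan results, but the reconciliation logic is this simple at its core.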
2. Regularly track and maintain software licenses
This will assist the agency with implementing decisions throughout the software license management lifecycle.
With constant monitoring of the software license lifecycle, decisions about when to adopt new versions or upgrade operating systems are much easier to make.
Knowing the TCO and the costs associated with lifecycle decisions will provide the visibility needed to assess and model a decision on factors that go beyond purchase price.
Tracking and maintaining the software lifecycle can also directly address a problem that has plagued government agencies: legacy applications that are costly to maintain once vendor support is no longer available. We've seen legacy operating systems that reached end-of-life a decade ago still in use even though there is no valid business need for them. IT transformation efforts are often bogged down because the costs to update outdated applications are prohibitive, even for the government.
Recommendation: CIOs should begin by getting a baseline report of all installed software and its associated lifecycle. If old versions of applications are discovered and newer instances are available, determine whether an upgrade should occur and if it is covered under maintenance.
3. Analyze software usage and other data to make cost-effective decisions
Most organizations monitor software application usage on a quarterly basis to detect which applications a user has opened and closed.
If a user hasn’t launched an application within the past 90 days, there is a good chance that they don’t need the application, unless it is an application that is only utilized during specific projects or year-end timeframes.
If software is not being fully utilized, it can be reclaimed from the endpoint and redeployed to fulfill another user’s request for that same application.
If there isn’t demand for that application and there are a large number of unused licenses, the agency should consider renegotiating that contract and discontinue maintenance on those applications.
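The 90-day reclamation rule described above can be sketched in a few lines. The data, threshold, and exemption list here are illustrative; a real tool would pull last-launch dates from usage monitoring:

```python
from datetime import date, timedelta

RECLAIM_AFTER = timedelta(days=90)

def reclaim_candidates(last_launched, today, exempt=frozenset()):
    """Return (user, app) pairs whose application hasn't launched in 90+ days.

    last_launched: {(user, app): date of last launch}
    exempt: apps legitimately used only for specific projects or year-end work.
    """
    return [
        (user, app)
        for (user, app), last in last_launched.items()
        if app not in exempt and today - last >= RECLAIM_AFTER
    ]

today = date(2016, 9, 1)
usage = {
    ("alice", "Visio"): date(2016, 3, 1),    # unused > 90 days -> reclaim
    ("bob", "Visio"): date(2016, 8, 20),     # recently used -> keep
    ("carol", "TaxPrep"): date(2016, 1, 5),  # seasonal app -> exempt
}
print(reclaim_candidates(usage, today, exempt={"TaxPrep"}))
# [('alice', 'Visio')]
```

The `exempt` parameter captures the caveat above: project-specific or year-end applications should not be swept up by a blanket 90-day rule.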
In addition, effective software usage monitoring could potentially uncover that an enterprise agreement is not a cost-effective licensing alternative because employees are not utilizing what is installed on their endpoints.
On the server side, it might indicate that there are expensive applications with overlapping functionality that are only being partially used and the least used ones could be discontinued. In the “other data” category, a history of furloughs or staff layoffs and retirement could be used when forecasting software demand and server capacity requirements.
Recommendation: CIOs should evaluate their existing discovery tools and, if they have software usage capabilities, ensure those capabilities are implemented.
In many cases, inventory planners may not be aware of the functionality or may not have it fully deployed. Knowing the level of detail (e.g., app open/close or keystroke activity) that is needed for software usage monitoring is imperative.
On July 29, 2016, the MEGABYTE Act of 2016 was signed into law.
Last Friday, Public Law No. 114-210, also known as the Making Electronic Government Accountable By Yielding Tangible Efficiencies Act of 2016 or the MEGABYTE Act of 2016, was officially signed into law.
This new legislation directly affects all U.S. government agencies and follows on earlier legislation that has gone into effect over the past two to three years.
Why the MEGABYTE Act Is Different
Although complementary, the MEGABYTE Act differs from previous legislation such as the Federal Information Technology Acquisition Reform Act (FITARA), the National Defense Authorization Act for Fiscal Year 2015 (NDAA FY 2015), and the Office of Management and Budget (OMB) guidelines. While FITARA and NDAA FY 2015 focus on IT issues related to staffing, coordinated purchasing, IT hardware inventory, and other areas, the MEGABYTE Act lays out what agencies are expected to document and report.
This documentation and reporting deals specifically with IT software license savings that can be achieved with better visibility and efficiencies.
The full MEGABYTE Act of 2016 text can be found here.
Sponsored by the Committee on Oversight and Government Reform and the U.S. Senate Committee on Homeland Security & Governmental Affairs, the MEGABYTE law requires that government CIOs “of each executive agency must report to the OMB, beginning in the first fiscal year after this Act’s enactment and in each of the following five fiscal years, on the savings from improved software license management.”
The specific requirements that are laid out for agency CIOs are great steps to getting a handle on what software is installed within the agency.
MEGABYTE Act Cost Savings
In my 21 years' experience as a Gartner Research Director advising public- and private-sector organizations on IT and software asset management (SAM) programs, I've found that an organization without any best practices in place could yield up to 30 percent in cost avoidance and savings in the first year.
Savings will decline in subsequent years as the environment is tightly managed, but the increased visibility will continue to reap savings in other IT domains.
However, OMB and GAO already have best practices in place. They have centrally negotiated contracts and pricing—not to mention a culture that adheres to policies—which will be a huge advantage as agencies begin to move into compliance with this law.
In my professional opinion, I would expect the government could save anywhere from three to five percent by monitoring the installation and usage of software, and up to 20 percent by implementing a complete ITAM/SAM program.
When you consider that OMB reported that government agencies spent $9 billion in 2015 on new software licenses, the savings from software usage monitoring and reallocation of software could be significantly more than $450 million in the first year of this five-year legislation.
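The arithmetic behind that estimate, applying the percentages cited above to OMB's reported 2015 spend, works out as follows:

```python
# Savings estimates applied to OMB's reported 2015 spend of $9B on new licenses.
annual_license_spend = 9_000_000_000  # USD, per OMB for 2015

usage_monitoring_low, usage_monitoring_high = 0.03, 0.05  # 3-5 percent
full_program = 0.20  # up to 20 percent with a complete ITAM/SAM program

print(f"Usage monitoring alone: ${annual_license_spend * usage_monitoring_low / 1e6:,.0f}M - "
      f"${annual_license_spend * usage_monitoring_high / 1e6:,.0f}M per year")
print(f"Full ITAM/SAM program:  up to ${annual_license_spend * full_program / 1e6:,.0f}M per year")
# Usage monitoring alone: $270M - $450M per year
# Full ITAM/SAM program:  up to $1,800M per year
```

So the $450 million figure is the top of the usage-monitoring range alone; a full program at 20 percent would be roughly four times that.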
LANDESK is no stranger to the importance of ITAM/SAM solutions. Learn why LANDESK was named ITAM Champion by Info-Tech this year!
After working in the IT industry for over 20 years, I'm still surprised when I come across very loose interpretations that claim to define IT asset management. To be fair, the term asset management has different meanings depending upon the audience. If I were on Wall Street, or worked in financial services, the term asset would apply to stocks, bonds, real estate, and other types of financial assets.

An audience focused on tracking all of the assets a corporation owns would think in terms of enterprise asset management. This covers real estate, buildings, fleet, machinery, power plants, planes: basically all of the enterprise assets that are capitalized and on the balance sheet.

When it comes to IT assets, we are specifically referring to those assets that enable the IT side of the business to run. In some cases, these technology assets might not be controlled by IT, because it is the rare company today that doesn't have software and hardware supporting the development of a product or helping the business run more efficiently.
Once we narrow the definition down to IT-only assets, there is still confusion. Let me begin by differentiating ITAM from discovery and inventory tools. Discovery and inventory tools scan the network looking for IP addresses. Once a device is found, the tool scans it for all installed software. If the tool uses an agent, the agent will be pre-installed on the device and a scan will run on a specified schedule.
What is an ITAM database?
An ITAM database has three components: physical, financial, and contractual. The physical information is collected from discovery and inventory sources and shows what is deployed. It also provides visibility into IT assets that might be in a stockroom, not yet deployed, or scheduled for retirement. This stockroom information is typically collected using manual processes, bar code readers, or RFID systems if they are installed.
The second component of ITAM is the financial data. This data is often collected from a purchasing system or from a purchase order. It includes the purchase order number, vendor name, quantity, make and model, purchase price, depreciation, cost center, and other financial attributes an organization might need visibility into. Tracking the financial attributes of an asset helps an organization understand total cost of ownership and return on investment, and assign costs to projects and IT business services. It also helps an organization understand the technical debt associated with legacy applications, for example on the mainframe, and enables better decision making about an asset's end of life.
The third component of ITAM is the contract data. This data is often collected from the reseller, directly from the vendor/supplier, or from a contract management system if one is in place. It will include the information from the final negotiated version of the contract, not the iterations during negotiation: details such as version number, license entitlement, license type, vendor SKU, training days, service levels, maintenance terms, and other important contract facts. If it is a cloud or Software-as-a-Service purchase, the details will include quantity, license type, device count, purchase price, whether you are bringing your own software to the cloud instance, and the contract timeframe, to name a few.
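The three-component record described above can be modeled in a simplified, hypothetical form. The class and field names here are my own illustration, not the schema of any particular ITAM product:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PhysicalRecord:          # from discovery/inventory, bar code, or RFID
    asset_tag: str
    status: str                # e.g., "deployed", "stockroom", "retired"
    location: str

@dataclass
class FinancialRecord:         # from the purchasing system / purchase order
    purchase_order: str
    vendor: str
    purchase_price: float
    cost_center: str

@dataclass
class ContractRecord:          # from the final negotiated contract
    license_entitlement: int
    license_type: str          # e.g., "per-device", "per-user", "SaaS"
    sku: str
    maintenance_included: bool

@dataclass
class ITAMRecord:              # the consolidated asset record (the "hub" row)
    physical: PhysicalRecord
    financial: FinancialRecord
    contract: Optional[ContractRecord] = None  # hardware may have no contract

asset = ITAMRecord(
    PhysicalRecord("A-1001", "deployed", "HQ floor 3"),
    FinancialRecord("PO-778", "Acme Software", 1200.0, "CC-42"),
    ContractRecord(500, "per-user", "ACME-STD-500", True),
)
print(asset.contract.license_type)  # per-user
```

Making the contract component optional reflects the point that stockroom hardware or internally developed software may carry physical and financial data long before any contract data exists.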
Software and hardware asset management
Data from these sources is consolidated into a database, which becomes the information hub. Whether the data relates to software, hardware, or the services associated with that equipment, it is stored centrally. Software asset management and hardware asset management are subsets of ITAM. Without visibility into the hardware, it becomes impossible to ensure software is installed in compliance with the license agreement. Similarly, without insight into contract SLAs and integration into IT service management tools, which provide incident and problem management information, it is difficult to do effective vendor and performance management.
As with most things involving technology, definitions, interpretations and the ways we think about something will evolve over time. When artificial intelligence takes over ITAM, I’m sure the definition will evolve once again.
For the sake of this discussion, I’m going to refer to the Wikipedia definition of a performance metric.
Performance metrics measure an organization's activities and performance. They should support a range of stakeholder needs, from customers and shareholders to employees. While many metrics have traditionally been finance-based, focusing inwardly on the performance of the organization, metrics may also focus on performance against customer requirements and value.
Without metrics that reflect what is happening in the environment, it is difficult to assess where there may be problems or where everything is functioning seamlessly. As a result, IT asset management programs will put in place basic hardware asset management metrics that track both spare parts/stockroom inventory and deployed hardware.
Since August 2015, the news headlines have been dominated by the U.S. elections, the refugee crisis, dropping oil prices, fluctuating government lending rates in Japan and the U.S., and coverage of the global slowdown crossing geographies and industries. China has been leading the doom-and-gloom financial forecasts based on economic outlooks for 2016.
When IT is designed to operate in domains, gaining visibility across the infrastructure is virtually impossible. Separate teams, tools, and objectives limit information and data sharing across the network, database, server, data center, client, and other domains. However, when visibility into all of the infrastructure components that comprise business services crosses those domains, the benefits to the organization are huge, especially in the area of change-impact analysis.