About the Author

Rex McMillan | Product Manager

Windows 10 and Enterprises: Top Reasons They Won’t Make the Switch

There is no denying the success of Windows 10; it has had a great adoption rate, surpassing the adoption of both Windows 8 and Windows 8.1.

But as with everything in life, the data gets more interesting as you delve into some of the finer points.

The two factors that have really helped accelerate the adoption of Windows 10 are the return of the start menu and the free upgrade.

So the question is, who is adopting Windows 10 and who isn’t?

Softchoice has published statistics on its customer base, including:

  • Less than one percent of devices in 169 North American companies are using Windows 10.
  • 91 percent of systems are running Windows 7—an 18 percent increase over last year.

Additional data from both StatCounter and Netmarketshare show that the percentage of Windows 10 devices on the internet tends to spike over the weekend, indicating that consumers have been the largest users of Windows 10 and that many enterprises have not started the migration yet.

What are the main IT concerns and how are the migrations going? 

Spiceworks conducted a survey with results from over 900 IT professionals. The data revealed something very interesting:

  • 85 percent of companies that have deployed Windows 10 are generally satisfied, but Windows 7 is still getting higher end-user satisfaction.

Companies that had started adopting Windows 10 were asked to list their top challenges. Compatibility of software and hardware, as well as migration time, were listed as the biggest challenges.

What is stopping enterprises from migrating?

Over the past year, we have had many discussions with enterprise companies about their plans, concerns, and expectations. It seems that the IT professionals have been correct in identifying the biggest challenges, wins, and roadblocks companies are facing.

The recurring themes that we have heard from IT regarding the adoption of Windows 10 involve application compatibility, migration issues, and Windows updates. The larger enterprises always face the most compatibility issues; they know this and are always having to work to limit the risks in this area.

According to the Spiceworks survey, 62 percent of companies had not started any Windows 10 implementations. Top reasons for the delay include satisfaction with the current OS, concern about compatibility issues, and a desire for control over Windows updates.

There is also a common theme of how to make sure the end-user is satisfied with their computing experience and that they can be productive.

Windows cumulative update model

The cumulative update model of Windows has drawn much discussion, particularly around how it increases application compatibility risk.

Enterprises will be forced to choose between not patching and living with an application broken by a patch: for at least 30 days if Microsoft has to make the fix, or until a third-party vendor can release a change.

This discussion has caused many IT professionals great concern and has delayed many organizations' decisions to move to Windows 10. An interesting twist was announced last week: Windows 7 and 8.1 will be moved to this patch model in October.

Does this refuel Windows 10 migrations, or does it just add an application testing tax on IT departments that will slow the adoption of patches?

Clearly, the above data shows that IT professionals in the enterprise are approaching Windows 10 with caution and concerns.


Lost Assets and Rogue Devices, Part I

Rogue Devices

Have you ever misplaced something? Of course you have; we all have. The realization that you have lost something, or worse, that someone else may be in possession of the missing item, triggers reactions ranging from anger to fear and sadness. Those reactions drive the next phase of action, which may be anything from panic to a methodical search and recovery.

I have seen or experienced many of the different reactions that occur when I’ve lost an asset.

Different types of assets require different reactions to a loss, or even to a rogue asset. I was once listening to some ranchers discuss their assets (sheep and cattle) at the time of year when they gather their stock from the vast western ranges where it grazes. During the conversation, one of the ranchers stated, “Don’t worry, we will find all of the sheep. We have them fully contained; there is no way for them to escape. The Atlantic is on one side and the Pacific is on the other, so how could we possibly come up short?” After much laughter about how impossible the job would be if those were the actual boundaries, it became obvious that, at times, that is exactly how IT assets are managed.

Later in the discussion, another rancher mentioned he was missing a few of his assets. He was sure they were still on the property, but he just couldn’t find them; every day he would look for them, and overnight they would move to a different part of the place. Sometimes, in IT, we have the same experience with roaming users and their IT assets.

IT asset management starts with a complete inventory of what is in the environment; fortunately, IT assets can’t just walk away without help. Many of the technologies we use to find IT assets can leave us with the rancher’s experience of looking for his cows: sometimes they are there, and other times they are gone. Active discovery of network-attached devices is prone to the same misleading results as looking in a different part of the pasture each day. The assets are still there, but we can’t know that unless we are at the right spot at the right time. Passive network discovery technologies, by contrast, report whenever any asset appears on the network. Automated passive discovery is the first step in completing our knowledge of which assets are in the environment, as well as exposing the assets that should not be there. Rogue devices can bring significant risks to the business and can have devastating impacts.
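The bookkeeping behind this idea can be sketched in a few lines. The sketch below is illustrative only (the class, MAC addresses, and method names are hypothetical, not any product's API): keep a set of managed assets, record every device that passive monitoring observes on the wire, and report both rogue devices (seen but unmanaged) and missing assets (managed but never seen).

```python
from dataclasses import dataclass, field
from typing import Dict, Set

@dataclass
class PassiveInventory:
    """Tracks devices observed on the network against a known-asset list."""
    known_assets: Set[str]                                   # MAC addresses we manage
    observed: Dict[str, int] = field(default_factory=dict)   # MAC -> sighting count

    def record_sighting(self, mac: str) -> None:
        """Called whenever passive monitoring sees traffic from a device."""
        self.observed[mac] = self.observed.get(mac, 0) + 1

    def rogue_devices(self) -> Set[str]:
        """Devices seen on the wire that are not in the managed inventory."""
        return set(self.observed) - self.known_assets

    def missing_assets(self) -> Set[str]:
        """Managed assets never yet seen, like the rancher's lost sheep."""
        return self.known_assets - set(self.observed)

inv = PassiveInventory(known_assets={"aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02"})
inv.record_sighting("aa:bb:cc:00:00:01")   # managed laptop checks in
inv.record_sighting("de:ad:be:ef:00:99")   # unknown device appears
print(inv.rogue_devices())   # {'de:ad:be:ef:00:99'}
print(inv.missing_assets())  # {'aa:bb:cc:00:00:02'}
```

The key difference from active scanning is that sightings accumulate continuously, so an asset that roams never silently drops out of view between scans.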

Implementing a passive discovery tool should be a high priority for any IT organization that wants to properly manage its assets and secure its environment from unknown risks.

When the Hell are you Going to Patch?

Growing up in a small town, it isn’t uncommon to find livestock (or farm animals) on the road. When this happens, the proper and neighborly thing to do is to get the animal off the road so everyone is safe.

Driving late one night, I saw a black shadow by the side of the road. Assuming the shadow was a horse, I quickly stopped. Sure enough, there were six horses on the road. We found a gate and carefully herded the horses into the pasture. By the time we accomplished this, three cars had stopped, along with a sheriff. None of us knew who the owner was, or even if it was the correct field, but soon the horses were safe in the field and everything was secured.

The sheriff told us all to move along so we didn’t create a hazard, but having had experience with escaped livestock, we did a quick U-turn and drove slowly back up the road. Quickly checking the rest of the fence, we found a gate ajar in the far corner and stopped to close it. While we were closing the gate, the sheriff pulled up and wanted to know why we hadn’t left. We had to explain that remediating the current issue without checking for the hole in the fence would have put the horses back on the road in a matter of minutes.

This story came to mind the other day while I was discussing a security issue with an enterprise admin and some consultants. They had been fighting a virus for two weeks. It was polymorphic, morphing faster than the AV definitions could be released. The network was being brought to its knees; outages had been plaguing the business due to the virus and the traffic it generated. They were frustrated with their AV vendor. We discussed the challenges and technologies that could be implemented to stop the virus. They had a comprehensive plan that included new AV definitions, application blocking for files known to be bad, and leveraging network devices to block ports, poison routes, apply custom layer 7 switch rules, and so on.

After agreeing that they had a good plan, I had to ask, “When the Hell are you going to patch the vulnerability?” They looked at me like I was in the wrong meeting, so I explained my thinking. If the virus could propagate like this, there almost certainly was a vulnerability it was leveraging. We just needed to find the vulnerability and patch it; if there wasn’t a patch, we needed the vendor to create one. As it turned out, there was a patch. Of course, every patch comes with some risk: application compatibility, unexpected glitches caused by a failed patch, or plain old operating system issues. After analyzing the risk factors, we were able to deploy the patch. By the end of the day, we had squashed the virus to the level of an annoyance, with no more network outages. None of the best-laid plans were needed once we patched the hole. The incident also triggered a more robust patch testing cycle that has allowed them to shorten their patch deployment time.

When I was a kid, an old cowboy asked me if I was good at math. I told him that of course I was, so he asked: “If there are 10 sheep in a field and one gets out, how many are left in the field?” I knew the answer and quickly responded, “Nine.” He chuckled and said, “Not if you haven’t patched the hole in the fence. By the time you did the math, more sheep got out.” Patching is the best tool to stop the spread.


Vegas or Bust: Tackling Resource Constraints


A few years back, I was considering a major home improvement project. While debating the pros and cons of each approach, the discussion centered on how much to do myself and how much it would cost. During these discussions, one of my friends asked, “Which do you have more of, time or money?” Just like the home improvement project, IT projects have the same resource constraints: time and money. It seems we can always find one, but it’s tough to find both.

With 80% of the IT budget being used for KTLO (keeping the lights on), and with reduced headcounts, IT faces the same constraints of time and budget. As an IT organization, we have to determine which we have the least of; once we have identified where we are most constrained, we need a strategy to resolve it.

In personal finance there are a number of theories, and people who pitch them. There is the Smallest Payment First Debt Snowball Method (aka the Dave Ramsey Debt Payoff Method), or we can use leverage to grow wealth. Both strategies work, and both have limitations. The first step is to pick a strategy and then use continuous process improvement to refine how we execute it. In IT, we have the same challenge: pick a strategy, and make sure it aligns and works within the known constraints.

If we go with the snowball method, we should identify which resource is most constrained (time or money) and then identify some low-hanging fruit. Even if the gains are small, they can be applied to help remove the next constraint, snowballing the effect. If we find that time (the number of man-hours) is the biggest constraint, then we need to identify what can be automated, what can be moved to self-service, and what can be outsourced. As we identify these items, the tendency is to want to go after the biggest gain first. But that is like paying off a mortgage: most of the time it is more than we can accomplish in a reasonable time frame, and we end up frustrated. If instead we identify small debts that can be paid off (automated, moved to self-service, outsourced, or simply removed from support), we get a larger pool of resources to tackle the next debt.

Well, it’s time to pick a strategy. Will you leverage up, borrow some time (usually by working overtime), and spend all the budget on a new project that promises a huge return? Or will you look for constraints, pay off small debts, and reinvest in paying down more debts until you have a rich portfolio of time and money?

As we planned, worked on, and then released LDMS 9.6, we continually discussed what we could implement to ensure that customers see a greater return on their investment, in both time and budget. I am excited about this release and the features that save time and money.

Well, I am off to Las Vegas to invest in the latest high-return scheme. I am sure it will pay off this time! Chime in on your strategy: what’s working? What’s not?


Don’t focus on the hype of Shadow IT, focus on the user and ROI

With the current trend of the industry, IT professionals have the opportunity to impact businesses more than can be imagined. As we have seen from the Target security breach, a failure in IT can cost millions, but the more typical issue is that a failure in IT simply impedes the business. Shadow IT is the current buzzword for departments creating their own IT solutions. In one of my past lives, I was part of a shadow IT project. We determined that for our department to be successful, it was imperative that we proceed with a project before our turn in the backlog. IT was willing to support the project and wanted to see us succeed; they just didn’t have the time or resources to support it. This project occurred years ago, long before shadow IT was a trend and a point of discussion.

Just In Time

User productivity is paramount to all of our businesses. While the platforms change, the OS’s are upgraded, applications morph, the constant is that we need to provide the user with the tools they need to be productive and efficient.

A while back, I was invited to visit with an IT department.  During the conversations it came out that they were in a very reactive mode and they also had a large set of projects that were in progress.  All of the projects discussed were given a Priority 1 status.  On top of all that, the C-level attendees discussed how to innovate the IT department.  Sound familiar?  In today’s economic state, IT departments are having to find ways to do more with less while BYOD, consumerization, and other trends can increase some of the challenges for IT.

While having the discussion I kept thinking about some of the experiences I had while working in support.  Everyone’s issue was the highest priority (at least to them).  Time is a precious resource and always in limited supply.  At times support is in a very reactive mode and firefighting is the name of the game. But we all know that this is not a sustainable model and that it ultimately leads to customer dissatisfaction.

During this time I was introduced to a model that described how to transition from reactive to proactive, and then from proactive to trusted advisor. The model is built around Just in Time (JIT), the concept that information and changes must arrive much faster for us to be efficient. JIT knowledge management, for example, means knowledge base articles are created within a few hours of an issue, not a month or six weeks later. The whole point of JIT is productivity and efficiency.

With all of these changes and challenges, what constants are there for the IT department?  The one constant is the user.  User Oriented IT helps us keep an eye on the ball: user productivity is paramount if IT is to become a business enabler and innovator.

As we work on the new features in the next LDMS release, we’ve put even more of these thoughts and principles into our product.

Stay tuned. You’re going to love the results.

Tablets: Are they a Desktop, Mobile Device or Something Else?

Selecting the management tool should be based on the business problems, risks, compliance factors, and how to best enable the end user–not form factor.

Over the last few weeks an increasing number of new form factors have been released, including Apple’s iPad mini, Microsoft’s Surface, and HP’s EliteBook, and many more devices will arrive over the next few months.  This leaves many IT administrators wondering about the best way to manage each of these new devices.

Management and support choices should be based on risk, compliance, and functionality needs.  While we discuss form factor and device type, it’s the architecture of the device that really dictates management needs.  For example, if you have a tablet that runs Windows 7, it comes with all of the vulnerabilities and risks of a Windows 7 desktop or laptop.

So when it comes to managing new or existing devices in your environment ask yourself the following questions:

  • What risks must be addressed in the architecture?
  • What is required to secure the device?
  • What programs are required on the device to make the user productive?
  • How do we resolve issues on these machines remotely as quickly or more quickly than if we were at the device?

Selecting the management tool should be based on the business problems, risks, compliance factors, and how to best enable the end user.  Form factor helps in deciding what hardware to purchase but shouldn’t influence how to manage the device.

Whyyyyy Gen Y?

As Generation Y (Gen Y) continues to infiltrate companies and organizations around the globe, we’re seeing the dramatic changes they’re driving and the impact those changes have on IT and IT operations.

The good news is that Gen Y comes with skills that can really enable their employers to be successful. Their understanding of social media, brand awareness, blogging, and living in an always-connected state are paramount for many businesses to be successful in the current and future business world.

Even with all the good things they bring to an organization, they’re causing massive problems for IT (as well as writing your bio and revealing many personal secrets).

Because this generation lives in an always-connected state, they use the smartphones, tablets, and other devices they want to use instead of devices that meet a “corporate standard”. They’re the main driver behind the BYOD (Bring Your Own Device) movement.  Just look at the number of pre-orders for the iPhone 5. This consumerization of work devices will continue to impact the IT department in many different ways.

One way is cost. For example, according to the Aberdeen Group, “enterprises deploying a BYOD strategy will spend around $170,000 annually for every thousand personal devices that are part of a BYOD program. That number is more than the traditional company issued mobility strategy costs.”  That works out to roughly $170 per device per year. To avoid these higher costs and actually save money with BYOD, you have to have the right mobile device management processes in place. Without them, BYOD can turn into an IT nightmare.

In addition, the connectedness that comes with personal devices is one giant security nightmare. IT administrators will need to put strong systems and security management processes in place to address these concerns; otherwise they’re looking at the potential loss of vital company and customer information. Thankfully, LANDesk has a solution that addresses both management and security concerns.

Generation Y brings both challenges and expertise to IT departments everywhere. Provided IT can keep up with them, their departments will be stronger, more secure, and better managed in the end.

The 5 Biggest IT Headaches

Despite your valiant efforts, you better have some ibuprofen, caffeine and a cold compress ready as you draw the curtains closed and prepare for the five biggest IT headaches.

No matter how much you think you know, working in IT will teach you that you don’t know it all.  In this rewarding but very challenging field, there will always be a case where things just don’t work out like they’re supposed to.

  1. The Users– The first headache to note is the users themselves.  From user error to a lack of technological knowledge, users who don’t know how to run certain programs or systems tend to try to fix things themselves.  This usually leads to bigger problems, since they are not taking the correct steps to fix the issue.  Anyone in IT can recall countless situations in which users click on banners that should never be clicked or open that email attachment that looks suspicious.  Users are becoming more technical overall, but that just means they create more technical problems.  User error is definitely one of the top-ranking headaches for IT; on the bright side, it also means job security!
  2. Viruses/ Malicious Software– It seems as though we’re always hearing about a new virus or malicious software that has deviously attacked and infected thousands of computers.  As technology becomes increasingly integrated into our daily lives, it is imperative for users to know how to defend their devices against attack.  Not too long ago we had a client that acquired a virus that infected their whole network and every PC attached to it.  It took them down for a whole day and a half and we had to put in some serious hours to wipe the slate clean.  Making sure that your system is fully protected is crucial, especially when you’re dealing with time sensitive contracts and deadlines in which you can’t afford to be down for long time frames.
  3. Backups/backup management– Users constantly complain about backups taking too long or running at the wrong times.  A bigger issue arises when we think something is being backed up, only to have the system crash and then discover that certain files were never backed up in the first place.  This reliability problem causes a major headache in the data recovery process, if recovery is even possible.  Measures like application control and file encryption help prevent data issues from surfacing in the first place, but you still have to back up.  Backups come in all shapes and sizes, and there are backup management solutions to suit everyone.  Sifting through all of your options can be overwhelming, and although cloud-based options like Dropbox add convenience for the IT department and the user, they can create additional headaches, such as the wrong files ending up in an insecure container in the cloud.
  4. Patch management– We’ve all dealt with an update to software or the system that causes certain programs to stop working or start doing funny things.  We worked with a client that ran their Windows updates and afterward could no longer use Outlook.  It ended up being something as simple as a licensing issue, which one of the updates had changed for some odd reason, but it took forever to research and fix.  One of the largest challenges with patches is the inability to identify ahead of time which patch will create issues.  There have been many circumstances in which we installed a patch to a graphics card driver just to have the PC crash any time a graphics-intensive program was run.  This will be a constant battle for everyone in the IT field.
  5. Outdated hardware integration/compatibility with new software– We all deal with these problems constantly.  From older software that doesn’t work on a new OS to finding compatible drivers for newly released hardware, sometimes it is a struggle to get everything to work.  In some cases it may be beneficial to continue using your older software or OS, since the newer product may have too many bugs, be completely incompatible, or just not run.  We have clients that still use Windows XP for their critical systems because the software vendor never released updates or patches for a newer OS.  You may even see cases in which the company that produced the software is no longer in business, making it impossible to ever upgrade to a better system.  These issues can be extremely frustrating and time-consuming to research and resolve.

We can only dream that one day these problems won’t exist.  In a perfect world, everyone would work together on cross-compatible system integration and we would have a unified tech world.  But until then, we’ll keep the aspirin close by.

The Importance of Software Discovery

If rights are not restricted, software can be brought into the company by just about anyone, and it can be very difficult for IT departments to see what software was added and when.  If organizations don’t know what software is installed, they risk financial penalties from a failed audit.

In a recently published Gartner study, 65 percent of the 228 participants who attended Gartner’s 2011 “IT Financial, Procurement and Asset Management Summit” had undergone a license audit in the previous 12 months, and many of them had been audited twice.  While this study does not indicate the actual risk of an audit, the evidence shows increased vendor activity on the audit front.  And while risk avoidance is important, it is equally important to find proactive ways to manage software costs through software asset management (SAM).

As the risks and rewards are considered, SAM is a practice that most companies should implement. A comprehensive SAM policy can help organizations:

  1. Reduce costs for software licenses and maintenance fees.
  2. Ensure preparedness for an audit, with the goals of improved vendor relations and avoidance of costly fines.
  3. Keep IT focused on the corporate goals of profitability and growth.

In any SAM implementation, one of the first and recurring tasks is discovery of software assets.  While software discovery seems like it should be a simple task, it isn’t. There are many types of software and software installers, and many different ways software registers (or doesn’t register) with the OS. Even a single product may have different identifiers because of different licensing models, release methods, or other subtle differences.

Discovery of software assets can be accomplished in different ways, but each methodology comes with challenges. Content-based discovery does not account for specialized software; I have seen very unique, specialized software that is only used in a specific vertical market, yet is some of the most costly in the enterprise.  File-based discovery is subject to the files being changed by patches.

At LANDesk, we work on leveraging the best of all available methods to discover the software assets in your environment, including the MSI database, shortcuts, exes that have run, and custom rules.  Customer-specific custom rules can also be created leveraging any data found in the LANDesk database.  With this hybrid approach to software discovery, current beta testing shows that we have been able to successfully and accurately discover most software in an enterprise environment without customization.  The dynamic nature of automatically discovered products lets you always know what is in your environment and gives you peace of mind should an audit arise.
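As a rough illustration of how a hybrid approach works (the product names, source labels, and helper functions below are hypothetical examples, not the LANDesk implementation): normalize each identifier so that edition and architecture variants of one product roll up to a single record, then track which sources observed it. A product reported by several independent sources earns higher confidence than one seen by only a single method.

```python
from collections import defaultdict

def normalize(name: str) -> str:
    """Collapse minor naming differences (case, edition/architecture
    suffixes) so one product with several identifiers rolls up to one
    record. The suffix list here is a simplified example."""
    name = name.lower().strip()
    for suffix in (" (x64)", " 64-bit", " enterprise", " professional"):
        if name.endswith(suffix):
            name = name[: -len(suffix)]
    return name

def merge_discovery(sources: dict) -> dict:
    """sources maps a discovery method ('msi', 'shortcuts', 'executed')
    to the product names it found; the result maps each normalized
    product to the set of methods that saw it."""
    evidence = defaultdict(set)
    for source, products in sources.items():
        for product in products:
            evidence[normalize(product)].add(source)
    return dict(evidence)

found = merge_discovery({
    "msi":       ["Contoso CAD Professional", "AcmeOffice"],
    "shortcuts": ["Contoso CAD Professional"],
    "executed":  ["acmeoffice"],
})
# "contoso cad" is corroborated by the MSI database and shortcuts;
# "acmeoffice" by the MSI database and executed files.
```

The same idea extends to custom rules: each rule simply becomes another evidence source feeding the merge.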