Tuesday, 1 September 2015

The new IT is all about the customer



Enterprises have turned outward, investing more and more in improving the customer experience -- giving those closer to customers power to make key technology decisions

A decade ago, most of the grand IT initiatives I heard about involved optimizing internal business processes and reducing the cost of IT. But the direction of technology has shifted -- to improving the customer experience and deploying new applications that are the public face of new and continually improving products.

Decentralization. Technologists, particularly developers and engineers, have become integrated into lines of business. IT no longer likes to be seen as a separate entity. Instead of remaining a static, isolated cost center, IT has spread itself into every corner of the business and associated itself with driving revenue.

Self-service. A highly centralized IT entity cannot keep up with increased demand for customer applications and improvement. Either lines of business have mechanisms to procure resources internally -- or they turn to outside resources, including professional services and cloud service providers.

Outward-facing analytics. Tracking customers goes beyond conventional trends such as seasonal demand to detailed profiling and analysis of behavior, such as that represented by Web clickstreams and analysis of social media.

Increased risk awareness. Technology has become so central to the enterprise that its failure has disastrous consequences. Outages are no longer acceptable and data breaches get CEOs fired.

So how do these four trends affect the technologies enterprises invest in? To answer that question, you need to begin by acknowledging how much more is being demanded of enterprise technology.

Focusing on customers requires a multiplicity of Web and mobile applications that can change continually and scale at the drop of a hat. Building the infrastructure and recruiting the human capital to execute on that endeavor now consumes more and more of the technology spend.

Moreover, enterprises can no longer turn a blind eye to substandard enterprise applications for their own employees. In particular, sales and field service personnel need highly usable applications that can be modified easily as customer needs change, while business analysts need self-service access to analytics, rather than waiting for reports from business intelligence specialists.

To meet this rising demand, applications must be built using preexisting parts rather than from scratch. Those parts have several sources, and in most cases, developers are choosing which to use:

Shared services. Today many Web and mobile applications are built using microservices architecture. Instead of building monolithic applications with all sorts of internal dependencies, you create an array of shared, API-accessible services that can be used as building blocks for many different applications.

Open source code. GitHub and other cloud repositories enable developers to share and consume code for almost any purpose imaginable. This reflects today's practical, non-ideological open source culture: Why code it yourself if someone else is offering it free under the most liberal license imaginable?

Cloud APIs. Cloud service APIs from Google, Facebook, LinkedIn, and PayPal have become stalwarts for Web and mobile developers -- and are easily integrated into microservices architectures. Hot new APIs like those offered by Stripe for e-payments emerge all the time, along with more specialized plays such as the popular Twilio for telecom services.

Frameworks everywhere. Programming frameworks, available for all the popular languages, free developers from having to worry about the nonessential details of application development. Choosing the right programming frameworks for the job has become so critical, they've even been referred to as the new programming languages.
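The shared-services idea above can be sketched with nothing but the standard library: a small, API-accessible service that any number of applications could consume over plain HTTP. This is an illustrative toy, not production code, and the "customer profile" service and its endpoint are hypothetical.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A tiny, API-accessible "customer profile" service (hypothetical endpoint).
# Many applications could reuse it as a building block over plain HTTP.
PROFILES = {"42": {"id": "42", "name": "Ada", "tier": "gold"}}

class ProfileHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Route: GET /profiles/<id>
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "profiles" and parts[1] in PROFILES:
            body = json.dumps(PROFILES[parts[1]]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), ProfileHandler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# A consuming application calls the service like any external API.
url = f"http://127.0.0.1:{server.server_port}/profiles/42"
with urllib.request.urlopen(url) as resp:
    profile = json.loads(resp.read())
print(profile["name"])
server.shutdown()
```

In a real microservices architecture each such service would be deployed, scaled and versioned independently; the point here is only the shape of the contract: a small HTTP API rather than an in-process dependency.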

Wrapped around these prebuilt elements are modern development approaches such as agile methodology, which stipulates an iterative, piece-by-piece development process that continually solicits feedback from business stakeholders. Devops tools enable developers to provision their own virtual infrastructure -- or, alternatively, have operations reconfigure dev and test environments faster.

Underlying this new, high-speed application assembly line is cloud infrastructure. Wildly unpredictable fluctuations in the number of public users, as well as demands on shared services that may be used by many applications, require an infrastructure that can pour on compute, storage, or network resources as needed.

For customer-facing applications, cloud has become the default. In most cases, enterprises are turning to public IaaS or PaaS providers such as Amazon Web Services or Microsoft Azure rather than trying to build private clouds from scratch.

Perhaps the most profitable area of big data involves gathering clickstream data about user behavior to optimize applications and make it easier to, say, compare and purchase products through an e-commerce application. Big Web companies such as Yahoo are way ahead in this area, with petabytes of data on HDFS to support mobile, search, advertising, personalization, media, and communications efforts.

Enterprises are pouring money into Hadoop, Spark, and Storm deployments -- as well as technologies such as Hive or Impala that enable you to query Hadoop using SQL. The most exciting area today, however, is streaming analytics, where events are processed in near real time rather than in batches -- using clusters of servers packed with huge amounts of memory. The Storm-plus-Kafka combination is emerging as a popular streaming solution, but there are literally dozens of open source projects in the Hadoop ecosystem to experiment with.
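At its core, streaming analytics means computing aggregates over moving windows of events instead of over a finished batch. A toy sketch in plain Python, standing in for what Storm or Spark Streaming would do across a cluster (the clickstream data and keys below are invented for illustration):

```python
from collections import deque

def windowed_counts(events, window=60):
    """Count events per key over a sliding time window (in seconds).

    `events` is an iterable of (timestamp, key) pairs in time order,
    e.g. clickstream records. Yields (timestamp, {key: count}) after
    each event, counting only events inside the trailing window.
    """
    recent = deque()  # (timestamp, key) pairs still inside the window
    counts = {}
    for ts, key in events:
        recent.append((ts, key))
        counts[key] = counts.get(key, 0) + 1
        # Expire events that have slid out of the window.
        while recent and recent[0][0] <= ts - window:
            _, old_key = recent.popleft()
            counts[old_key] -= 1
            if counts[old_key] == 0:
                del counts[old_key]
        yield ts, dict(counts)

# Page views at t=0, 10, 30 and 70 seconds; with a 60-second window,
# the first two have expired by the time the last event arrives.
clicks = [(0, "home"), (10, "product"), (30, "home"), (70, "checkout")]
last_ts, last_counts = list(windowed_counts(clicks, window=60))[-1]
print(last_counts)
```

A production stream processor adds the hard parts this sketch ignores: partitioning the key space across machines, surviving node failures, and handling events that arrive out of order.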

Enterprise adoption of these new analytics solutions tends to be somewhat haphazard. Some enterprises encounter problems managing Hadoop at scale; others experiment without clear objectives, resulting in initiatives that never get off the ground. Still others may roll out unconnected projects using similar technology and duplicate their efforts unnecessarily. To avoid the latter case, InfoWorld's Andrew Oliver notes that deploying "Hadoop as a service" is becoming a common pattern: With sufficient preparation, various business units can obtain Hadoop analytics self-service style from a large, centralized, scalable hub.

While most IT decision-making is no longer top down, getting serious about security needs to come from the top. That's because security almost always has a negative effect on productivity -- adding more steps to go through -- and diverts technology resources toward fixing vulnerabilities and away from meeting business goals.

But as many enterprises have learned the hard way, you can focus on user experience all you like, but if a data breach exposes customers' personal information, your brand may never be trusted again.

Making security a high priority needs to come from the C-suite, because you can't break security bottlenecks without it. For example, unpatched systems are the number one vulnerability in almost all enterprises. It would seem relatively simple to establish a program to roll out patches as they arrive, at the very least for high-risk software such as Java, Flash, or Acrobat. But in most enterprises, systems remain unpatched because certain applications rely on older software versions.

You need to carry a big stick to convince a line-of-business manager to rewrite or replace an application because it's too much of a security risk.

In security, best practices -- such as prompt patching and up-to-date user training -- trump technology every time, but certain security technologies have more impact than others:

Multifactor authentication. Fingerprints, face scans, or sending codes via text message to a user's mobile phone all decrease the likelihood intruders can take over an endpoint and gain access to the network.

Network monitoring. First, get to know the normal network traffic flow. Then set up alerts when that flow deviates from the norm -- and you may catch data being exfiltrated from your network.

Encryption by default.  Processing power has become so abundant that sensitive data can be encrypted at rest. The bad guys may be able to steal it, but they can't do anything with it.
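The one-time codes behind the multifactor authentication item above -- whether delivered by text message or generated by an authenticator app -- are typically produced by the TOTP algorithm (RFC 6238), which is small enough to sketch with the standard library:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: int, digits: int = 6, period: int = 30) -> str:
    """Time-based one-time password (RFC 6238, HMAC-SHA1 variant)."""
    counter = struct.pack(">Q", timestamp // period)   # 8-byte big-endian counter
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890", T = 59 s
print(totp(b"12345678901234567890", 59, digits=8))  # "94287082"
```

Because both sides derive the code from a shared secret and the current time, a stolen password alone is no longer enough to log in: the attacker must also control the user's second factor for the 30-second window in which each code is valid.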

Again, all these security measures spin cycles that could be applied to more competitive endeavors that please customers and drive revenue. That's why senior management needs to enforce their implementation and penalize those who fail to comply. It's a lot better than picking up the pieces after a horrific data breach.
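The network-monitoring approach above -- learn the normal flow, then alert on deviations -- reduces to a rolling statistical baseline. A toy sketch, assuming per-minute outbound byte counts as input; a real deployment would use a monitoring product rather than anything like this:

```python
import statistics

def exfiltration_alerts(byte_counts, baseline=10, threshold=3.0):
    """Flag minutes whose outbound traffic deviates from the recent norm.

    `byte_counts` is a list of per-minute outbound byte totals. Each
    minute after the first `baseline` is compared against the rolling
    mean of the preceding window; anything more than `threshold`
    standard deviations above it is flagged.
    """
    alerts = []
    for i in range(baseline, len(byte_counts)):
        window = byte_counts[i - baseline:i]
        mean = statistics.mean(window)
        stdev = statistics.pstdev(window) or 1.0  # avoid divide-by-zero
        if (byte_counts[i] - mean) / stdev > threshold:
            alerts.append(i)
    return alerts

# Ten quiet minutes, then a burst that could be data leaving the network.
traffic = [1000, 1100, 950, 1050, 990, 1020, 1080, 940, 1010, 1060, 50000]
alerts = exfiltration_alerts(traffic)
print(alerts)
```

The hard part in practice isn't the arithmetic but defining "normal": real traffic has daily and weekly cycles, so production tools model those patterns rather than a flat rolling mean.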

While technology decision-making has become more decentralized, security isn't the only area where central control still plays an important role.

The great risk of decentralized IT is a balkanized organization. It's one thing to empower people at the line-of-business or developer level to choose the technology they need to serve customers best. But if they're creating their own data stores in the cloud without considering how that data needs to be consolidated, or building systems that are redundant with others, or signing disastrous contracts with providers...then the agility of decentralization descends into chaos.

The answer is an architectural framework that empowers people but at the same time prevents them from making bad decisions. You need to enforce best practices and promote pre-approved services and technologies -- while giving stakeholders the latitude to experiment and the processes to evaluate exciting new solutions as they come along. That governance needs the full force of senior management behind it.

Today, IT is more federated than centralized, and it needs to be that way to serve customers best. But the policies established at the heart of IT are more important than ever, because they're what holds that federation together.

Better IT support could make users more ignorant


Researchers and theorists may argue about the strategic value Information Technology departments bring to large organizations – or even whether tech prevents social change rather than encouraging it.

But few question the value of IT support to end users – who are often reputed to be more tech savvy than their predecessors, but who may also insist that the tech they need to understand involves getting video or cloud apps onto their iPhones, not fundamentals like how to get back online after someone kicks a plug out of the wall.

Close support may improve productivity, but it could also create a dependency that makes end users more ignorant and less competent on tech issues in the long run, according to a new study on the dynamics of geek-human relationships and the unfortunate tendency to let experts do things for you.

In family groups, though probably in others as well, having one member who is technically skilled generally does not encourage others to learn more about the tech they all use, according to the study from researcher Erika Poole, assistant professor of Information Sciences and Technology at Pennsylvania State University.

Penn State researcher Erika Poole looks for healthy patterns in the use of technology. (Photo: Patrick Mansell, PSU)
Poole studied the tech-related behavior of 10 families, each asked to keep a log of every tech-related interaction or event for several weeks. Each week, Poole introduced a new technical challenge, such as asking the family to set up and use an iPod.

Usually that task fell to the more tech-savvy of the adults in the household rather than to the post-Millennial children, for example.

When the task fell to the non-technical partner, however, Poole identified a consistent problem: Non-technical members of the household often had surprising difficulty with even relatively simple technical tasks, and reported avoiding or ignoring them for fear of getting stuck and feeling like a burden when they had to ask for help.

The more geekish members of the household tended to pick up a technical task and race through it without asking about the configuration preferences of other household members or explaining what was being done so other members of the household could modify or repeat the process on their own.

"They might make decisions about computer settings, for example, without asking the other’s opinion," Poole said.

Depending on others for even simple technical help can make non-geeks even more reluctant to ask for assistance or to be taught how to do things themselves, leaving them far less able to deal with their own technical problems than they would have been otherwise, and trapping the tech-savvy members of the group in permanent tech-support roles.

It was only when the tech-supporting partner was absent that the non-geeks were pushed to learn anything.

"When they put aside their initial reluctance to do an unfamiliar task, their self-confidence ended up increasing when they finally tried it and figured it out," Poole said in a Penn State announcement of the study.

The study didn't answer any questions about how to train tech-averse end users more effectively. It did raise questions about what constitutes technical competence at a time when it's not unusual for end users to integrate and use half a dozen cloud or mobile services to get a job done, but be stumped by the need to reboot – or even find – the home router that lets them touch the cloud in the first place.

"Tech is becoming more important everywhere, but not everyone needs to be on the level of a systems administrator," Poole said in the announcement. "I wish I could say there’s a set list of skills that everyone needs to know, but it’s a very individual thing. It’s about learning what you need to know to navigate the technology that’s important to you."

Unfortunately, the ease of use, 24/7 availability and support for a range of mobile devices that are de rigueur for cloud apps also seem to have raised end-user expectations for IT.

Internally produced apps and the interfaces to automated support have to be simple; support has to be available 24x7, cover multiple devices and be reachable through phone, email, web, chat and other channels, according to a June 2014 Help Desk Institute study comparing the expectations of IT support groups and end users.

End-user support organizations are trying to meet those demands, but also admit they spend budget and work-hours on things that are important to them – security, reliability and efficiency of support tools and networks, for example – that are not important to end users.

And the focus is still on fixing the problem or answering the question – not on teaching end users to do things themselves.

Part of that is self-defense. You may only feed people for a day if you hand them a fish rather than teach them to fish, but at least your help desk won't be flooded with complaints about hooked fingers, tangled lines and fish that inexplicably burst into flames and exploded despite the user doing "exactly what you told me to."

It's simply easier to fix something for someone than it is to teach them to do it themselves, especially if they don't particularly want to learn.

If following that path doesn't produce users even more ignorant about the technology infrastructure on which everything they do depends, it will only be because it's not possible to fit any more ignorance into that particular group of users.

It's not all ignorance, of course. Many users consider themselves "tech savvy" because they know how to provision, integrate and use cloud apps even knowing nothing at all about the infrastructure underneath.

That's okay(ish). The cloud exists specifically to hide the ugly reality of infrastructure from the delicate sensibilities of users. It's usually not a good idea to ask end users to do too much of their own tech work, anyway.

And it is genuinely a good thing that corporate IT has finally acknowledged external clouds enough to begin talking about support for multiplatform datacenters, hybrid cloud implementations, and federated, flexible, dynamic security – as well as 24/7 multiplatform support for end users.

It's just that the good IT does by handholding and coddling end users in an effort to get their work done also makes them more ignorant about the tools and infrastructure they're using to try to continuously raise their productivity.

At some point the combination of ignorance, co-dependence and not-always-reliable technology is going to combine into something really ugly, broken and helpless for end users, who may not even recognize enough about it to know whom they should call to come fix it.

Windows 10 hits the 75-million mark


Third-party data supports Microsoft's claim that Windows 10 is off to a strong start


Microsoft this week bumped up its claim of Windows 10 devices to 75 million when a company executive tweeted that figure Wednesday.

"More than 75 million devices running Windows 10 -- and growing every day," Yusuf Mehdi, corporate vice president of Microsoft's Windows and Devices division, said on Twitter.

Mehdi had last updated the official Windows 10 tally on July 30, when he said 14 million devices were running the new OS. Microsoft began triggering on-screen Windows 10 upgrade notifications on PCs running Windows 7 and 8.1 on July 29.

While the 75 million cannot be independently verified -- Microsoft is likely citing the number of Windows 10 "activations," the check-in the OS does with Redmond's servers when it's first run to verify that it is linked with a valid product key -- it is in the ballpark of third-party estimates.

Data provided to Computerworld earlier this month by analytics vendor Net Applications showed that by its calculations 3% of all Windows-powered personal computers ran Windows 10 during the week of Aug. 2-8. That 3% translated into approximately 45 million devices, assuming there are 1.5 billion Windows systems worldwide -- a figure Microsoft itself has repeatedly used.
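The back-of-the-envelope arithmetic behind that 45-million estimate is simply the share applied to the installed base:

```python
windows_devices = 1_500_000_000  # Microsoft's oft-cited worldwide installed base
win10_share = 0.03               # Net Applications' Windows 10 share, Aug. 2-8
millions = round(windows_devices * win10_share / 1_000_000)
print(millions)  # 45 (million devices)
```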

It's not unreasonable to think that Microsoft has added another 30 million copies of Windows 10 to the running total in the three weeks since. (Net Applications has declined to provide more recent weekly user share data, saying that its engineers had disabled weekly reporting because they were revamping the back-end infrastructure to prep a new service.)

Another analytics vendor, Dublin-based StatCounter, which tracks a different metric, has posted data that also appears to dovetail with Microsoft's 75-million-device claim. The growth of StatCounter's usage share for Windows 10 -- a measurement of Internet activity -- since July 30 closely matches the increase Microsoft claimed.

The growth rate from 14 million to 75 million -- Mehdi's numbers -- represented a very strong 436%, give or take a decimal point.

Likewise, growth in StatCounter's Windows 10 usage share from the 1.34% on July 30 (when Mehdi touted 14 million near day's end) to the high water mark of 7.26% on Sunday, Aug. 23, was an almost-the-same 441%.
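Those two growth figures check out when computed the same way, as (new − old) / old:

```python
def growth_pct(old, new):
    """Percentage growth from old to new."""
    return (new - old) / old * 100

device_growth = round(growth_pct(14, 75))         # Mehdi's counts, in millions
share_growth = round(growth_pct(1.34, 7.26), 1)   # StatCounter usage share
print(device_growth, share_growth)  # 436 441.8
```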

Microsoft's 75 million was significantly larger than similar boasts in 2012 and earlier -- in other words, previous Windows editions left the gate more slowly. That's not a shock; everyone expected stronger uptake because Windows 10 is a free upgrade. Prior Windows editions were not.

In 2012, for example, Microsoft said it took Windows 8 a month to crack the 40-million-licenses-sold mark. In 2010, a little more than four months after Windows 7 debuted, the company said 90 million licenses of that OS had been sold.

StatCounter's data supports the idea that Windows 10's start has been a record-setter for Microsoft. Windows 10's usage share after 30 days, for instance, was 37% higher than that of Windows 7 after its first 30 days of availability.

Microsoft has set a goal of putting Windows 10 on 1 billion devices within three years: The 75 million represents 7.5% of that target.

It is a good start.