Enterprises have turned outward, investing more and more in improving the customer experience -- and giving those closer to customers the power to make key technology decisions.
A decade ago, most of the grand IT initiatives I heard about involved optimizing internal business processes and reducing the cost of IT. But the direction of technology has shifted -- to improving the customer experience and deploying new applications that are the public face of new and continually improving products.
Decentralization. Technologists, particularly developers and engineers, have become integrated into lines of business. IT is no longer content to be seen as a separate entity. Instead of remaining a static, isolated cost center, IT has spread itself into every corner of the business and associated itself with driving revenue.
Self-service. A highly centralized IT entity cannot keep up with the demand for new customer applications and continual improvements to existing ones. Either lines of business have mechanisms to procure resources internally -- or they turn to outside resources, including professional services and cloud service providers.
Outward-facing analytics. Tracking customers goes beyond conventional trends such as seasonal demand to detailed profiling and behavioral analysis, drawing on sources such as Web clickstreams and social media.
Increased risk awareness. Technology has become so central to the enterprise that its failure has disastrous consequences. Outages are no longer acceptable and data breaches get CEOs fired.
So how do these four trends affect the technologies enterprises invest in? To answer that question, you need to begin by acknowledging how much more is being demanded of enterprise technology.
Focusing on customers requires a multiplicity of Web and mobile applications that can change continually and scale at the drop of a hat. Building the infrastructure and recruiting the human capital to execute on that endeavor now consumes more and more of the technology spend.
Moreover, enterprises can no longer turn a blind eye to substandard enterprise applications for their own employees. In particular, sales and field service personnel need highly usable applications that can be modified easily as customer needs change, while business analysts need self-service access to analytics, rather than waiting for reports from business intelligence specialists.
To meet this rising demand, applications must be built using preexisting parts rather than from scratch. Those parts have several sources, and in most cases, developers are choosing which to use:
Shared services. Today many Web and mobile applications are built on a microservices architecture. Instead of building monolithic applications with all sorts of internal dependencies, you create an array of shared, API-accessible services that can be used as building blocks for many different applications (see the sketch after this list).
Open source code. GitHub and other cloud repositories enable developers to share and consume code for almost any purpose imaginable. This reflects today's practical, non-ideological open source culture: Why code it yourself if someone else is offering it free under the most liberal license imaginable?
Cloud APIs. Cloud service APIs from Google, Facebook, LinkedIn, and PayPal have become stalwarts for Web and mobile developers -- and are easily integrated into microservices architectures. Hot new APIs like those offered by Stripe for e-payments emerge all the time, along with more specialized plays such as the popular Twilio for telecom services.
Frameworks everywhere. Programming frameworks, available for all the popular languages, free developers from having to worry about the nonessential details of application development. Choosing the right framework for the job has become so critical that frameworks have even been referred to as the new programming languages.
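To make the shared-services idea concrete, here is a minimal sketch of one such building block: a tiny HTTP service exposing product data through an API that any application can consume. Flask and the in-memory catalog are illustrative choices, not a prescription; the route and data are hypothetical.

```python
# A minimal shared service: one small, API-accessible building block.
# Flask is used here for illustration; any web framework would do.
from flask import Flask, jsonify

app = Flask(__name__)

# A hypothetical in-memory catalog standing in for a real data store.
PRODUCTS = {
    "sku-100": {"name": "Widget", "price": 9.99},
    "sku-200": {"name": "Gadget", "price": 24.99},
}

@app.route("/products/<sku>")
def get_product(sku):
    """Return one product as JSON, so any app can consume this service."""
    product = PRODUCTS.get(sku)
    if product is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(product)

if __name__ == "__main__":
    app.run(port=5000)
```

A real deployment would put a proper data store behind that route, but the shape is the same: one narrow capability, reachable over HTTP, reusable across many applications.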
Wrapped around these prebuilt elements are modern development approaches such as agile methodology, which stipulates an iterative, piece-by-piece development process that continually solicits feedback from business stakeholders. Devops tools enable developers to provision their own virtual infrastructure -- or, alternatively, have operations reconfigure dev and test environments faster.
Underlying this new, high-speed application assembly line is cloud infrastructure. Wildly unpredictable fluctuations in the number of public users, as well as demands on shared services that may be used by many applications, require an infrastructure that can pour on compute, storage, or network resources as needed.
For customer-facing applications, cloud has become the default. In most cases, enterprises are turning to public IaaS or PaaS providers such as Amazon Web Services or Microsoft Azure rather than trying to build private clouds from scratch.
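Here is what self-service provisioning can look like in practice: a minimal sketch using boto3, the AWS SDK for Python, to launch a dev/test instance on demand. It assumes AWS credentials are already configured; the region, AMI ID, instance type, and tag values are placeholders.

```python
import boto3  # AWS SDK for Python

# Assumes credentials are configured via environment variables
# or ~/.aws/credentials. All identifiers below are placeholders.
ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Environment", "Value": "dev-test"}],
    }],
)

print("Launched:", instances[0].id)
```

The point is less the specific calls than the workflow: a developer gets a sandbox in seconds, without filing a ticket and waiting on central IT.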
Perhaps the most profitable area of big data involves gathering clickstream data about user behavior to optimize applications and make it easier to, say, compare and purchase products through an e-commerce application. Big Web companies such as Yahoo are way ahead in this area, with petabytes of data on HDFS to support mobile, search, advertising, personalization, media, and communications efforts.
Enterprises are pouring money into Hadoop, Spark, and Storm deployments -- as well as technologies such as Hive or Impala that enable you to query Hadoop using SQL. The most exciting area today, however, is streaming analytics, where events are processed in near real time rather than in batches -- using clusters of servers packed with huge amounts of memory. The Storm-plus-Kafka combination is emerging as a popular streaming solution, but there are literally dozens of open source projects in the Hadoop ecosystem to experiment with.
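As a small illustration of the batch side, here is a sketch in PySpark (using the Spark 2.x SparkSession API) that counts unique visitors per page from raw clickstream events. The HDFS path and field names are hypothetical; Hive or Impala would let you express the same query in SQL.

```python
# Counting unique visitors per page from raw clickstream events.
# Assumes newline-delimited JSON events with "page" and "user_id"
# fields at a hypothetical HDFS path; adjust to your cluster layout.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("clickstream-demo").getOrCreate()

events = spark.read.json("hdfs:///data/clickstream/")  # placeholder path

top_pages = (
    events.groupBy("page")
          .agg(F.countDistinct("user_id").alias("unique_visitors"))
          .orderBy(F.desc("unique_visitors"))
)

top_pages.show(10)  # the ten most-visited pages
```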
Enterprise adoption of these new analytics solutions tends to be somewhat haphazard. Some enterprises encounter problems managing Hadoop at scale; others experiment without clear objectives, resulting in initiatives that never get off the ground. Still others may roll out unconnected projects using similar technology and duplicate their efforts unnecessarily. To avoid the latter case, InfoWorld's Andrew Oliver notes that deploying "Hadoop as a service" is becoming a common pattern: With sufficient preparation, various business units can obtain Hadoop analytics self-service style from a large, centralized, scalable hub.
While most IT decision-making is no longer top down, getting serious about security needs to come from the top. That's because security almost always has a negative effect on productivity -- adding more steps to go through -- and diverts technology resources toward fixing vulnerabilities and away from meeting business goals.
But as many enterprises have learned the hard way, you can focus on user experience all you like, but if a data breach exposes customers' personal information, your brand may never be trusted again.
Making security a high priority needs to come from the C-suite, because you can't break security bottlenecks without it. For example, unpatched systems are the number one vulnerability in almost all enterprises. It would seem relatively simple to establish a program to roll out patches as they arrive, at the very least for high-risk software such as Java, Flash, or Acrobat. But in most enterprises, systems remain unpatched because certain applications rely on older software versions.
You need to carry a big stick to convince a line of business manager to rewrite or replace an application because it's too much of a security risk.
In security, best practices -- such as prompt patching and up-to-date user training -- trump technology every time, but certain security technologies have more impact than others:
Multifactor authentication. Fingerprints, face scans, or codes sent via text message to a user's mobile phone all decrease the likelihood that intruders can take over an endpoint and gain access to the network.
Network monitoring. First, get to know the normal network traffic flow. Then set up alerts when that flow deviates from the norm -- and you may catch data being exfiltrated from your network.
Encryption by default. Processing power has become so abundant that sensitive data can be encrypted at rest. The bad guys may be able to steal it, but they can't do anything with it.
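To show how little code encryption at rest can require, here is a minimal sketch using the Fernet recipe from Python's cryptography package. The card-style string is dummy data, and in production the key would live in a key-management service rather than alongside the ciphertext.

```python
# Encrypting a sensitive field before it is written to disk.
# Uses the "cryptography" package's Fernet recipe (symmetric encryption).
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # store securely, e.g., in a KMS -- never in code
cipher = Fernet(key)

token = cipher.encrypt(b"4111-1111-1111-1111")  # ciphertext safe to store
print(token)

plaintext = cipher.decrypt(token)  # recoverable only with the key
print(plaintext.decode())
```

Steal the token without the key and, as the item above puts it, you can't do anything with it.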
Again, all these security measures consume cycles that could be applied to more competitive endeavors that please customers and drive revenue. That's why senior management needs to enforce their implementation and penalize those who fail to comply. It's a lot better than picking up the pieces after a horrific data breach.
While technology decision-making has become more decentralized, security isn't the only area where central control still plays an important role.
The great risk of decentralized IT is a balkanized organization. It's one thing to empower people at the line-of-business or developer level to choose the technology they need to serve customers best. But if they're creating their own data stores in the cloud without considering how that data needs to be consolidated, or building systems that are redundant with others, or signing disastrous contracts with providers ... then the agility of decentralization descends into chaos.
The answer is an architectural framework that empowers people but at the same time prevents them from making bad decisions. You need to enforce best practices and promote pre-approved services and technologies -- while giving stakeholders the latitude to experiment and the processes to evaluate exciting new solutions as they come along. That governance needs the full force of senior management behind it.
Today, IT is more federated than centralized, and it needs to be that way to serve customers best. But the policies established at the heart of IT are more important than ever, because they're what holds that federation together.