
Re-thinking: CIO Role in a 21st Century Corporation

May 22, 2010

There has been a lot of discussion about the role of the CIO in 21st-century enterprises. Some pundits predict a slowly diminishing role for CIOs, while others strongly believe that CIOs should play a strategic role in shaping and preparing enterprises for the 21st century.

Peter Drucker wrote many books and papers about the role of information in building sustainable and competitive organizations. More than a decade after those predictions, the role of information is more important than ever. Drucker wrote,

“Increasingly, a winning strategy will require information about events and conditions outside the institution: non-customers, technologies other than those currently used by the company and its present competitors, markets not currently served, and so on. Only with this information can a business decide how to allocate its knowledge resources in order to produce the highest yield. Only with such information can a business also prepare for new changes and challenges arising from sudden shifts in the world economy and in the nature and content of knowledge itself. The development of rigorous methods for gathering and analyzing outside information will increasingly become a major challenge for businesses and for information experts.”

Technology is becoming an integral part of the business. In the 21st century, CIOs and their teams need to play a much broader business role, sharing leadership of technology with business peers and taking responsibility for many of the firm’s shared services. CIOs also need to find inspired ways to recruit and retain the best and brightest new talent: people eager to solve problems, who speak the language of populist technologies as a first language. CIOs who cannot see themselves playing this role should begin now to change their own profile and that of their teams, or watch their role become ever more marginalized.

One of the major issues I have seen over the last 10 years working in various management roles is that technology teams lack the business knowledge and the ability to map great technology to business opportunities. To build a bridge between technology and the business, CIOs have to play a significant role in boosting the business knowledge of their teams. It is time to transform a tech-oriented staff into one with the requisite business skills, including negotiation, strategy, and financial analysis. These skills are essential to aligning business and IT.

CIOs need to play a bigger role in strategy execution rather than spending most of their time on technology selection and operations. I have seen a great deal of friction between technology and operations, and CIOs spend much of their time and energy trying to align technology and business. In my view, the first step is that the CIO should report to the CEO. The CIO should also play a bigger role in giving the CEO the tools and frameworks to integrate critical information flows between information systems and business systems. Finally, the CIO should focus more on forming networks of supply partners: tapping them for new ideas, engaging them to broker cross-industry lessons learned, and, with them, establishing a responsive ecosystem of providers.

Paul Saffo summarized the state of machines, the complexity of tools, and exploding information in his HBR article “Are You Machine Wise?”:

“As our tools become ever more complex and interconnected and more central to the conduct of business, their benefits also become harder to recognize. Furthermore, executives need to know and understand the logic of the work done by machines—and, above all else, the limits beyond which those tools cannot be pushed. Meanwhile, the volume of information continues to expand exponentially, generated by machines conversing with other machines on our behalf. Every business activity leaves behind a wake of information, from data spinning off production-line process controllers to transaction records generated over retail-credit-card networks. And the growing centrality of the Internet for business purposes will only add to the flood.”

Finally, CIOs need to think about technology from the business perspective to help CEOs position the company for competitiveness: differentiation of products, services, and business models. Technology itself is not a differentiator unless it aligns with the business. CIOs need to think differently; the technology perspective alone is not enough, and they need to start creating value for the business. I think CIOs can play a very influential role in bridging customers, suppliers, partners, and innovation channels by aligning technology with the business.

What do you think?


Cloudplay 2010: Panel Discussion on Cloud Computing: Opportunities and Challenges

April 23, 2010

I have put together an expert panel to discuss opportunities and challenges in cloud computing next week at Cloudplay 2010 at the Plug and Play Tech Center, Sunnyvale. Panelists include Vishal Sikka, Executive Director and CTO, SAP, and Susie Wee, CTO, HP Client Cloud Services.

Business Context

Due to the global economic downturn, enterprises are seeking to rein in capex, drive down internal IT costs, consolidate their activities, and run as lean as possible over the next few years. Cloud is becoming a strategy and an approach for service providers, technology vendors, and consumers. Today’s business environment is more dynamic and volatile than ever, driven by consumerization and globalization. With the introduction of powerful handsets, including smartphones, iPhones, and iPads, subscribers are using data services as never before, and exponential growth in data traffic is forcing network carriers to increase their bandwidth through adoption of WiMAX/LTE.

This opens the door to a world of unlimited choices for subscribers, as well as numerous opportunities for service providers to reach out and capture the attention capital of these consumers, as long as they can bring together the subscriber data and provide a contextually relevant consumer experience. There is no shortage of opportunities and new competitive challenges. Cloud service providers (CSPs) are now foraying into content publishing, application stores, and other complementary areas under the cloud umbrella.

In this panel discussion, I intend to explore how companies are transforming their intellectual property and business processes into digital platforms offered as a service, creating new revenue streams. Companies have started to think outside the building, not just outside the box, for new innovations and growth opportunities. Verizon’s and AT&T’s recent moves into offering computing as a service underscore their desire to ride this new wave, monetizing their infrastructure investments and the convergence of communication, computing, and media. In fact, carriers have been delivering hosting services for a number of years, and those services have proven quite popular. Carriers are familiar with many of the data-center requirements that come with delivering cloud services, and since the network is the core of cloud services, they have the infrastructure and expertise necessary to deliver them to small and medium businesses.

Here are some of the questions we will discuss on this panel:


  • How can cloud providers help enterprises turn capex into more predictable opex and become more agile, able to respond quickly to demand spikes and new business opportunities?

  • How can management tools and processes mitigate business and security risks and reduce operational costs through automation?

  • Are CSPs capable of becoming competitive publishers, application storefronts, and value-added service providers? What opportunities does that open for innovators and entrepreneurs?

  • What kinds of partnerships and ecosystems are needed to exploit these opportunities?

  • Can cloud service providers really make enough money to keep these innovations sustainable?

  • What are the major management requirements IT organizations ask cloud providers for?

  • How are Cloud providers responding to meet management (technology and process) and customer needs?

  • What are the top inhibitors Cloud providers must overcome to drive customer demand?

I am looking forward to seeing you and hearing your voice and questions. If you cannot attend but would like to ask our panelists anything related to cloud strategy, opportunities, and challenges, leave a comment or send a tweet. I will compile a summary of the panel discussion and post it here.

Rethinking: Cloud & Enterprise Computing

November 22, 2009

Companies currently spend about 5-6% of their revenues on IT. Many are now struggling to align IT to support the business strategy, provide a competitive advantage, and serve as a platform for growth. An exploding number of choices and the growing complexity of technology assets are making these companies victims of their rapidly obsolescing computing infrastructure. These assets once offered competitive advantage and served as barriers to entry, but now they are becoming liabilities. The supply chain meets the cloud to boost the visibility of collaboration processes with and between third parties such as suppliers, partners, and customers. If companies fail to deconstruct their IT infrastructure and embrace the cloud, somebody else will, and will make them irrelevant.

CORE AND CONTEXT

The bulk of the economic value of organizations is processed through business and consumer supply chains of products and services across manufacturing and services industries. Whether in retail, healthcare, banking, real estate, manufacturing, insurance, communications, or elsewhere, there are significant gaps in the point-to-point business processes across a business’s operations, resulting from underlying infostructure complexity. These enterprises are trapped in an internal IT focus (context) and are ignoring the importance of the information and interactions across their supply chains (core). For many companies this has resulted in lost profitability and, in some cases, the elimination of products and services altogether.

Professor Hau Lee, a well-known expert in supply chain management, said,

“Companies are great not because they were focused on cost or flexibility or speed but because they have the ability to manage transitions – changing market conditions, evolving technology, different requirements as a product moves through its life cycle. Companies also need to be able to handle one more transition: Crisis Management. Successful companies have been able to grab market share out of crises, which often requires them to work effectively across functional boundaries”.

As the recessionary economic and business climate becomes more challenging, organizations face many competing priorities and fewer resources to maintain and manage existing operations. Still, a once-in-a-generation slowdown is also an opportune time to re-evaluate where automation and collaboration of these processes can make significant improvements, both now and in the upturn. It will not be sufficient to be internally focused on your own segment of the supply chain.

SERVICES IN CLOUD: ARCHITECTURAL TRANSFORMATION

Cloud computing is a new deployment and operational model that makes IT management simpler and more responsive to the needs of a dynamic business. Cloud architecture decouples the IT infrastructure from the business services. Cloud computing not only enables rapid innovation, flexibility, and support of core business functions; it also enables the design, development, and delivery of new applications on highly efficient virtualized compute resources that can be rapidly scaled up and down in a flexible yet secure way to deliver a high quality of service.

In the pre-information era, suppliers and manufacturers had market power because they had information about a product or service that the customer did not and could not have. Now the customer has that information, and whoever has the information has the power. Power is shifting to the customer. This means that the supplier or manufacturer will soon cease to be a seller and instead become a buyer for the customer. This is already happening. Peter Drucker put it succinctly in his HBR article (Sep-Oct 1997), “Looking Ahead: Implications of the Present”:

“Increasingly, a winning strategy will require information about events and conditions outside the institution: non-customers, technologies other than those currently used by the company and its present competitors, markets not currently served, and so on. Only with this information can a business decide how to allocate its knowledge resources in order to produce the highest yield. Only with such information can a business also prepare for new changes and challenges arising from sudden shifts in the world economy and in the nature and content of knowledge itself. The development of rigorous methods for gathering and analyzing outside information will increasingly become a major challenge for businesses and for information experts.”

Technology is a critical part of supply chain management because companies need to bring together disparate strands of information to understand and assess situations. They also need analytical services to decide quickly and consistently on the best course of action. Many of the larger vendors offer some or all of the pieces needed to support more effective supply chain execution: supply chain management and ERP software for collecting data, data warehouses for staging data, and business intelligence software for creating and managing the reporting, scorecard, and dashboard elements. However, they may not bring all of the data together in a way that makes it useful, timely, and actionable. Doing so requires significant integration and customization, which is a time-consuming and expensive undertaking. The long development cycles and large R&D budgets make these projects unattractive to business leaders.

Paul Saffo summarized the state of machines, the complexity of tools, and exploding information in his HBR article “Are You Machine Wise?” (HBR, 1997):

“As our tools become ever more complex and interconnected and more central to the conduct of business, their benefits also become harder to recognize. Furthermore, executives need to know and understand the logic of the work done by machines—and, above all else, the limits beyond which those tools cannot be pushed. Meanwhile, the volume of information continues to expand exponentially, generated by machines conversing with other machines on our behalf. Every business activity leaves behind a wake of information, from data spinning off production-line process controllers to transaction records generated over retail-credit-card networks. And the growing centrality of the Internet for business purposes will only add to the flood.”

CLOUDABILITY OF ERP/SCM SERVICES

It has taken a good 10 years for companies to embrace enterprise resource planning (ERP) and supply chain management (SCM), primarily because of the high implementation and licensing costs of the software. In my view, the adoption of cloud computing services for supply chain and enterprise resource planning may be faster than the earlier uptake of on-premise ERP software. More and more companies are already collaborating with their suppliers, vendors, and partners over the Internet or VANs. It no longer makes economic sense to own and operate internal data centers to run these applications.

Just as ERP/SCM applications were never employed to automate 100% of enterprises’ business processes, organizations are likely to take a hybrid approach, using public and private cloud services where appropriate. Initially they will adopt lower-level cloud services, such as compute power or storage capacity accessed over the Internet (infrastructure as a service), and exploit platform as a service for tactical and emerging applications. Software as a service will be embraced for standardized application areas such as finance, payroll, logistics, and human resources (context) that do not provide competitive advantage.

These companies may also pursue “private” cloud computing and create their own private cloud datacenters. Individual business units (or partners) then pay the IT department for using industrialized or standardized services under agreed charge-back mechanisms. This approach is less threatening than a wholesale move to the public cloud, and it makes it easier to plan a gradual migration to cloud services.

Joe Weinman wrote,

“Cloud services are definitely of use for extranet communities…we are seeing it in a variety of areas in AT&T’s businesses. For example, AT&T’s Sterling Commerce unit is a “cloud provider” for supply chain visibility and optimization, and our AT&T Telepresence Solution provides benefits through extranet connectivity, where there is a network effect. And, with networking costs and transaction costs coming down, and enabling technologies such as RFID, sensor networks, electronic product codes, etc., supply chains will continue to benefit from neutral and authoritative cloud services, e.g., chain of custody for tagged pharmaceuticals. And, when two giants are part of the supply chain, e.g., a large retailer and a large consumer packaged goods manufacturer, where should the data reside? If it’s at the retailer, then the manufacturer can access it, but needs to build separate interfaces for other retailers, etc., so the order(n) vs. order(n squared) economics come into play, driving functionality into the cloud.”
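The “order(n) vs. order(n squared)” economics Weinman mentions can be made concrete: if each of n supply-chain participants builds a direct interface to every other, the community needs n(n-1)/2 interfaces, whereas if each builds a single interface to a shared cloud hub, it needs only n. A minimal sketch (the participant counts below are illustrative, not from the discussion):

```python
def pairwise_interfaces(n):
    # Point-to-point: every participant integrates directly with every other,
    # so the community needs n*(n-1)/2 distinct interfaces.
    return n * (n - 1) // 2

def hub_interfaces(n):
    # Cloud hub: every participant integrates once with a shared service.
    return n

for n in (10, 100, 1000):
    print(f"{n} participants: {pairwise_interfaces(n)} point-to-point "
          f"vs {hub_interfaces(n)} via a shared hub")
```

As n grows, the point-to-point count grows quadratically while the hub count grows linearly, which is exactly the economic pressure driving shared supply-chain functionality into the cloud.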

CLOUD AND BUSINESS IMPLICATIONS

Economies of Scale: The cloud redefines economies of scale, allowing small companies to enjoy low unit costs when scaling out their computing infrastructure. Traditionally, only companies with huge data centers have been able to offer rich information to their customers.

Compressed Transaction Costs: Transaction costs along supply chains are low and continue to decline sharply. Lower transaction costs allow companies to significantly enhance the richness and interactivity of information (soon, perhaps, augmented reality) that would have been too costly to capture and process in the absence of cloud-like models.

Your Success Depends on the Quality of Decisions You Make: A real-time enterprise derives competitive advantage from responding to changing business conditions and opportunities faster than the competition. Often, decision-making depends on computing: business intelligence, risk analysis, portfolio optimization, and so forth. Since an ideal cloud provides effectively unbounded on-demand scalability for the same cost, a business can accelerate its decision-making. So far, few organizations have figured out how to turn the oceans of data available to them into islands of insight about their best opportunities for growth. Therein lies largely untapped potential for companies to accelerate their growth and separate from the competition (Cloudonomics Law #7).

Create and Stage Rich User Experiences: Enterprises can take advantage of the cloud to reduce the latency of critical business applications (Cloudonomics Law #8).

Availability and Reliability at a Fraction of the Cost: The reliability of a system with n redundant components, each with reliability r, is 1-(1-r)^n. So if the reliability of a single data center is 99 percent, two data centers provide four nines (99.99 percent) and three data centers provide six nines (99.9999 percent). For enterprises, achieving this level of availability not only takes huge capital investment but also drives up operational cost. Instead, enterprises can leverage the cloud to achieve an extremely high-reliability architecture with only a few data centers (Cloudonomics Law #9).
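The arithmetic behind Cloudonomics Law #9 is easy to check. A short sketch of the redundancy formula above (the 99 percent figure is the example from the text):

```python
def redundant_reliability(r, n):
    # Reliability of n redundant components, each with reliability r:
    # the system fails only if all n fail, which happens with
    # probability (1 - r) ** n.
    return 1 - (1 - r) ** n

for n in (1, 2, 3):
    print(f"{n} data center(s): {redundant_reliability(0.99, n):.6%}")
```

With two data centers the system is down only when both fail simultaneously (0.01 × 0.01 = 0.0001), which is where the four nines come from; a third center multiplies in another factor of 0.01, giving six nines.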

CLOUD AND BUSINESS CONSEQUENCES

Process Optimization: Although Y2K provided an opportunity to replace or optimize old transaction systems with more efficient models, many enterprises simply swapped them for standard software; the primary goal was Y2K compliance. The cloud provides a fresh opportunity to optimize key enterprise services: business process management, end-to-end visibility of demand-supply patterns, business activity monitoring, business analytics, and data warehousing.

Process Standardization: Globalization, supply chain management, and restructuring demand standardized services with clear interfaces. Standardized services are critical for collaboration, coordination, and co-creation with business partners and alliances.

Shared Services: Many enterprises already use shared services such as UPS and FedEx for transportation and logistics, and ADP for payroll processing. The cloud lets these enterprises explore more opportunities for shared services, enabling them to focus on their core competencies.

Enterprise Messaging Services: Over the last decade, many standards for information exchange across enterprise applications have evolved, such as EDIFACT, cXML, and RosettaNet. The cloud will take these building blocks to the next level by enabling globally scalable and reliable messaging infrastructure, relieving enterprises of the expensive VANs they use today. It makes sense for today’s VAN providers to offer similar services in the cloud at a fraction of the cost.

Integration Services: Even after a decade of huge investment in enterprise application integration, integration is still the major barrier for enterprises launching new services. My hope is that the cloud offers a platform that simplifies integration through standardized service interfaces. Instead of enterprises investing in customizing and supporting these integration services, VAN service providers can offer them where legacy systems still require them.

Communities of Co-ops: The cloud enables a greater number of cooperating services among the members of a business community (suppliers, partners, customers).

Data Warehousing: ERP, SCM, and CRM process measurement generates an unprecedented flood of data, and enterprise value is buried in it. Most enterprises cannot afford their own IT infrastructure to make meaning of this data. The cloud lets them burst into data warehousing services to enrich and contextualize it.

CONCLUSION

The economic downturn and globalization are changing the way enterprises operate. Changes are becoming increasingly radical: enterprises are being broken down into components and reassembled along different lines, and the feeling of uncertainty has never been as great as it is now. Cloud computing is going to play a critical role in simplifying the operations of supply chain networks and communities by taking advantage of the cost structures and cloudonomics the cloud offers. The cloud makes new business solutions possible. This might mean new or improved products and services, additional sales channels, more optimal procurement, new ways of collaborating with customers and suppliers, more effective management, and new information services.

ACKNOWLEDGEMENTS

I had the good fortune to be in a good place at the right time and to learn from others who willingly shared their experiences. I am most grateful to the many people who have offered me a helping hand, encouragement, and inspiration along the way, and I acknowledge the years of wisdom many of you have shared with me on cloud computing, its issues, benefits, and challenges. My sincere appreciation goes to Joe Weinman for his insight and perspectives on Cloudonomics and the supply chain. He generously allowed me to use his ideas, spared his valuable time to review this post, and provided valuable feedback. I have incorporated a number of his Cloudonomics laws and some of our email conversations into this article.

Views and Experiences: Growth by Innovation or M&A?

October 25, 2009

John Furrier kicked off an interesting conversational thread on Twitter: grow through innovation or M&A? An interesting discussion followed regarding Cisco vs. Juniper, two different visions of growth. Jake Kaldenbaugh and Susie Wee joined me in the discussion. I have summarized my thinking and position on M&A- vs. innovation-driven growth strategy. My argument is that success in both innovation- and M&A-driven strategies, combined with optimal use of financial engineering, is the best guarantee of thriving in good times and bad. Companies that choose only innovation, only M&A, or only financial engineering tend to destroy value rather than create it. I welcome your comments and wisdom to shape my thinking and practice.

Growth

Growth is a worthy goal. Not very long ago, earning just more than the cost of capital was the mantra for measuring a company’s growth, and many companies worked hard to make their assets sweat. But changing times and market conditions drove investors’ thirst for ever-higher returns. Then came the next generation of technology companies, which changed market expectations. Now the same investors are no longer happy with earnings at the cost of capital; they demand growth targets that are practically unattainable. Companies cannot meet these rising shareholder expectations unless they innovate to create wealth, and do so in a way competitors cannot quickly follow. You cannot buy innovation “off the shelf” through M&A.

In the continuing quest for business growth, many companies are turning to three sources of growth: financial engineering, innovation, and integration (M&A). Growth through financial engineering (shedding bad units and “toxic assets”, share buybacks, returning cash to shareholders, and downsizing) lasts only in the short term and serves as an instant cure for slow-growth syndrome. Innovation, by contrast, is not only about developing new generations of products, services, channels, and customer experiences but also about conceiving new business processes and models. M&A, or integration, enables companies to increase capacity, improve performance, lower cost structures, and discover new business opportunities.

Susie Wee (@susiewee) wrote,

Organic growth reqs commitment to productization/M&A reqs commitment 2 integration. Each has a place & needs to be “done well”

Growth is no substitute for radical innovation; rather, durable growth is a derivative of radical innovation. Focusing on growth rather than on the challenge of innovation is more likely to destroy wealth. Do not mistake financial engineering for radical innovation: at the end there will be no more wealth to unlock. You do not need leaders to unlock wealth; you need leaders with fire in the belly to create lightning rods of radical innovation. Innovation and integration together allow an enterprise to acquire more customers and deliver more goods and services to market. My argument is that success in all three is the best guarantee of thriving in good times and bad. If a company pursues only one of these strategies, stagnation can doom the business. Successful execution is not easy: it takes discipline and senior management commitment, and it strongly depends on a company’s ability to collaborate across organizational boundaries.

Companies that grow their top line by giving away value at close to zero profit are spinning their wheels without making much progress. How long can they survive?

Innovation Driven Strategy

Unless companies institutionalize innovation activism, they are unlikely to meet the twin challenges of reinventing themselves and reinventing their industries. Apple is a great example of an innovation-driven company. How many acquisitions has Apple made recently? Close to none, I would guess. Is Apple creating wealth or merely unlocking it? Apple created huge wealth through radical innovation. My assessment is that Apple was able to re-imagine its deepest sense of what it is, what it does, and how it competes. That has made it a unique company in the Valley. Customers love its products; some do not even mind tattooing Apple’s logo on their bodies. What is driving Apple’s stellar performance? I bet it is constant, restless, relentless innovation: new products their customers love, without damaging Apple’s price position or brand.

Susie Wee (@susiewee) wrote,

@Furrier @Jakewk @sureddy Driving innovation to product in a company is an art. Requires strategy, opportunity, timing, and luck.

@sureddy Actually I prefer the word “fortune” over “luck”. You can have some impact on your fortune.How important?Very… @Furrier @Jakewk

@sureddy Moving innov to market in BigCo requires many functions-mktg,r&d,GTM,supply chain,ops-to execute in concert. @Furrier @Jakewk

@sureddy Fortune/Luck is when the innovation matches needs/strategic directions in multiple functions at the same time. @Furrier @Jakewk

Many companies fail to create the future not because they fail to predict it but because they fail to imagine it. Companies more often get stuck with their heritage and fail to distinguish between heritage and destiny. As Jake Kaldenbaugh pointed out, in many companies the premium placed on being right is so high that there is virtually no room for imagination or unconventional wisdom. These days the strategies of many companies look alike, with hardly any difference between them. Outsourcing has been another powerful force for strategy convergence: as companies outsourced more and more, their strategic differentiation grew narrower and narrower. They focus on the short term and compromise their long-term vision.

Jake Kaldenbaugh tweeted his interesting view on Innovation Strategy:

@sureddy @susiewee @furrier Organic is still a funky art in cos. Have hard time doing both product & GTM development.

@sureddy @furrier Successful internal R&D growth strats tend to run in cycles for cos which tells me they may be based on leader ie S. Jobs

@sureddy @furrier @susiewee I think corp incentive systems interfere w/ internal innovation. Y so much reliance on VC & M&A in SV.

Without radical innovation, companies devote a mountain of resources to a molehill of differentiation. Unless companies become more adept at innovation, more imaginative minds will capture tomorrow’s wealth. We see this all the time: companies started in garages break all-time records. What drives these small companies to such outstanding results? Their ability to deconstruct and reconstruct their business models at the speed of light. At its most effective, radical innovation makes the competition irrelevant. It is not about competitive strategy or positioning against competitors; it is about making competition irrelevant. If it is not different, it is not strategic.

Cisco and M&A Driven Strategy

Companies like Cisco realized that much of the innovation that will shape the future of their industry occurs outside their organization. Cisco viewed the hundreds of startups created every year as potential sources of innovation to be exploited, and it adopted a totally different strategy: it co-opted the insurgents. Cisco is an innovator in one sense, recognizing that the ultimate value of its acquisitions lies not in the integration of technology or products but in the contribution they make to Cisco’s entrepreneurial energy. In my view, Cisco enabled a different level of innovation through a perfect blend of serendipity, genius, investment commitment, and sheer execution focus. I would argue that Cisco legitimized, fostered, celebrated, and rewarded nonlinear innovation. I may be wrong, but my strong sense is that they created an ecosystem where the virtuous mice (startups with great ideas) and the wealthy elephant (Cisco, with a ton of money and a passion to harvest those startup ideas) lived happily ever after!

@Jakewk Cisco’s competitive strategy is at odd with engineering and product strategy has been for years – they fill holes not advance tec

Cisco has worked to make acquisitions a routine process, as routine as product development, keeping their eyes and ears open for new innovations as part of their strategy. This strategy worked well for Cisco for a reasonable amount of time.

Jake Kaldenbaugh replied on Cisco’s M&A-vs-innovation strategy:

@Furrier Re: Cisco filling holes v. adv tech: Sometimes systemic integration can be innovation IMHO.

Does this strategy work in the long run? Is it sustainable? In my view, they have also reached an inflection point. Every strategy decays unless it is constantly refined and reinforced, and I am sure Cisco has started to experience this too. It is very difficult to scale this strategy forever. The complexity of trying to manage all these different business units will soon overwhelm the advantage of integration. M&A helps eliminate the competition, but that is not the same as making the competition irrelevant. Companies that focus on this strategy soon end up with an unsustainable strategy or strategy decay. What do they do next? They turn to spin-offs or de-mergers, break themselves into smaller companies, or get rid of the bad apples. All of this leads to the circus of financial engineering to unlock shareholder wealth. Never forget that good companies gone bad are simply companies that denied their strategy decay for too long or tried to overreach their growth without any strategic differentiation.

Conclusion

Don’t mistake financial engineering for radical innovation. At the end there will not be any more wealth to unlock. Companies need radical innovation to create more wealth. Successful innovation requires the discipline of innovation, senior management commitment, and the ability to collaborate across organizational boundaries. Innovation and integration, together, allow an enterprise to acquire more customers and deliver more goods and services to market more rapidly, making the competition irrelevant. My argument is that success in both, combined with optimal use of financial engineering, is the best guarantee of thriving in good times and bad. Companies that choose only innovation, or only M&A, or only financial engineering tend to destroy value rather than create it.

Cloud: Interoperability & Portability

October 12, 2009

The discussion about the difference between interoperability and portability isn’t new by any means. In the context of the Cloud, portability is the ability to move an application or service from one cloud to another, usually with minimal or no overhead. Interoperability is the ability of services to seamlessly communicate with each other.

If MIME is a portable format for exchanging mail attachments in a consistent and decipherable form, then SMTP is the interoperable communication mechanism that transports these messages from one place to the other. Similarly, SNMP is the transport and MIBs are the message codification scheme for a portable understanding of those messages. That’s good. We have solved this problem a number of times and learned a lot from these evolutions. So, combining all this rich experience and wisdom, I am sure we can come up with simple but powerful mechanisms for enabling another level of technological disruption and innovation.
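The MIME/SMTP analogy can be made concrete with a small sketch using Python’s standard library: the MIME object is the portable representation, while SMTP is the interoperable transport that any compliant server can speak. The addresses and the SMTP host below are hypothetical placeholders.

```python
# Illustrative sketch, not a spec: MIME as the portable message format,
# SMTP as the interoperable transport. The same split applies to the Cloud:
# a portable workload format plus an interoperable provisioning protocol.
from email.mime.text import MIMEText

# Portability: encode the payload in a format any mail client can decipher.
msg = MIMEText("quarterly numbers attached")
msg["Subject"] = "Report"
msg["From"] = "alice@example.com"   # hypothetical sender
msg["To"] = "bob@example.com"      # hypothetical recipient

wire_format = msg.as_string()  # the portable representation

# Interoperability: any SMTP server can transport this representation:
#   import smtplib
#   with smtplib.SMTP("smtp.example.com") as s:  # hypothetical host
#       s.send_message(msg)
print("Subject: Report" in wire_format)
```

The point is that the format and the protocol are separable concerns, which is exactly the portability/interoperability distinction discussed above.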

Christofer Hoff wrote on his incomplete thought on “Cloud Portability or Interoperability?”:

There is a lot of effort being spent now on attempts to craft standards and definitions in order to provide interfaces which allow discrete cloud elements and providers to interoperate. Should we not first focus our efforts on ensuring portability between clouds of our atomic instances (however you wish to define them) and the metastructure that enables them?

Cloud services are composed by connecting one or more services along with the message patterns that take place between or among those services. This implies that interoperability (how these services communicate) and portability (how to move these services and their associated data sets from one cloud service provider to another) are more critical for the Cloud than ever before.
Lori MacVittie argued that:

I don’t think there is a difference between portability and interoperability. If you have one, you have the other. We can certainly move forward on an attempt to define a standard that allows portability across environments of atomic components as long as we do so in a way that bears in mind we’ll need to extend it to support metastructure in the future.

Allen Baranov has a different point of view:

In other words… they(portability and interoperability) need to come at the same time.

I do agree with @Allen that portability and interoperability are two inseparable, conjugate concepts that need to go hand in hand, though how that can be done is a different issue. Both are necessary conditions for giving customers the confidence that they can move their services and data freely. I am sure @Lori also meant that interoperability and portability are not two separate things and need to go together. The good news is that as network speeds approach computer bus speeds and the network becomes the computer, portability starts embracing interoperability issues and interoperability can start gleaning the benefits of portability. So the distinction between the two has started to blur, and portability meets interoperability.
Dan Philpott has expressed his interesting view and concern on innovation and standardization:

Building in a requirement for portability at the outset would tend to retard development of new technologies. If a technology is portable it becomes a commodity. Commodities mean you have no market incentive to beat the competition on anything but price as all products are otherwise equal. Companies who innovate want to lock in a larger market share by producing something unique and market differentiated. So building in portability means that they would not be rewarded for the innovation.

Yes, agreed. But that is true for specialization, and I am not sure that APIs or protocols are a sustainable competitive advantage. The interoperability that results from using standards makes it easier for consumers to mix and match products, and it increases competition. In the case of the Cloud, standards are clearly needed because we’re talking about a platform on which other applications and services are going to be built. In my view, the biggest economic contribution will in fact come from the platform, or from the applications on top of this standardized platform. If you are building a specialized service on top of this platform, then competition makes sense. Competition is definitely a key component in driving innovation, but it’s important to question where that competition should occur and where it’s mutually beneficial to have a standard. The Internet wouldn’t have been successful without TCP/IP or portable data formats. XML, MIME, and EDI are all standards, yet innovation thrived around them. So I strongly argue that vendors and cloud service providers need to be more innovative than merely locking up access protocols or methods.

Finally, we need to make sure we clearly define what we mean by interoperability and portability and try not to gloss over the differences. Interoperability is extremely important as far as Cloud services are concerned.

Then Rich Miller asked three challenging questions to help complete Hoff’s incomplete thought:

  • Within the context of infrastructure as a service, what does interoperability mean? Does it mean anything other than that I can package up a workload in one IaaS environment and reinstate it on the other side? Doesn’t that sound like portability?
  • Within the context of PaaS, what does interoperability mean? Does it mean that I can do a database “merge” operation between collections residing on two services without an export and import? Have we just reinvented federated database operation? Or does it mean a successful export-import, a.k.a. data portability?
  • In the Cloud environment, what exactly is the difference between interoperability and portability? What’s cloud got to do with it?

@Rich, you answered your own questions. Interoperability and portability are not new topics for any of us. We have dealt with these issues for ORBs, SQL, EDI, etc. POSIX has been around for a while, giving us mechanisms for interoperable and portable access to systems all along. In my view, we all need to seriously start thinking about collaboration and standardization to address these portability and interoperability issues. I don’t think protecting APIs or access methods gives any one vendor a sustainable competitive advantage. Any comments?

What do we need to standardize?

It would be very difficult to fully anticipate the needs of Cloud service consumers. There is a growing need to distribute applications and services globally to meet growing business demands. When services are distributed or deployed across clouds, latencies and performance guarantees between them are critical. The ability to switch to other service providers who can fulfill these goals is equally important. User applications or services should be able to balance requirements like cost, geography, throughput, and other efficiencies. As the Cloud is all about dynamicity, it is essential to provide a common interface to negotiate, allocate, or de-allocate additional cloud services or resources, driven entirely by business needs. All of this points to common interfaces or standards that address these challenges.

Some of the key standards required for futuristic cloud services (metastructure) are:

  • Cloud Resources: Provider-independent mechanisms to access metastructure, including common semantics for cloud resources like nodes, load balancers, switches, routers, firewalls, network ACLs, and data access (both structured and unstructured).
  • Cloud Services Directory: Cloud service directory services for service configuration, identification, location, routes, etc. The same interfaces or services should be accessible from other cloud service providers too.
  • Audit, Assurance, and Compliance Data: Given the growing policy and compliance needs, Cloud service consumers need common mechanisms to extract this information from the underlying cloud resources or services.
  • Accounting and Metering: The cost of resources is a very important factor for any application. Of course, the primary goal of every business is to create value for its shareholders. Traditionally, IT departments have operated with huge capital expenditure budgets (the depreciation curse) or operating leases (off-balance-sheet magic). The Cloud introduces a pay-as-you-go model, making it very difficult to predict the cost of these services during the budgeting process. IT leaders need to figure out how these services are budgeted. What is critical in that direction is a uniform interface and/or semantics for metering and monitoring the resources consumed in the Cloud. These mechanisms also help put governance and financial controls in place.
  • Resource Life Cycle Management: The Cloud moves resource ownership to centralized service providers. As consumers start to use Cloud resources, they need better control over negotiating, acquiring, pricing, and activating those resources. So it is very important to define common mechanisms and interfaces to negotiate, execute, and monitor these contracts or commitments.
  • Cloud Security Services: Security is becoming an increasingly important concern in the cloud. It is not that current applications have addressed this problem very well and the cloud is ignoring it. Enterprise applications are hosted inside bricked walls to protect themselves, which gives users the confidence and assurance they require. The moment these applications move into the cloud, the onus is on the applications themselves to protect against security breaches. Applications need to sense and respond to threats. That starts with defining a common interface for the required security protocols; how these mechanisms are implemented is left to the Cloud infrastructure providers to innovate on. In some cases we should be able to leverage TLS, HTTP/S, IPSEC, and other technologies already in place. At a minimum, it is important to think about how cloud applications and their security aspects are provisioned, monitored, and controlled. This leads to my requirement for a protocol or common mechanism for provisioning security identities in the cloud.
  • Cloud Performance Data: Performance monitoring and tuning is going to be another interesting challenge that the InterCloud needs to address, providing the ability to predict an application’s performance across different clouds. Unless clouds can provide signals of performance (like what we have today for OSs), it will be hard for service brokers to negotiate contracts dynamically. To facilitate service bursting into the Cloud, well-defined interfaces to Cloud resources are essential. In the initial phases of cloud adoption applications can take care of this themselves (which may open new challenges), but we should start thinking at a higher level, i.e., the Cloud infrastructure level.
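To make the accounting-and-metering item above a little more concrete, here is a minimal sketch of what a provider-independent metering interface might look like. All names (`CloudMeter`, `UsageRecord`, `FakeProvider`) are hypothetical illustrations, not taken from any real cloud standard.

```python
# Hypothetical sketch of a uniform metering interface that every provider
# would implement, so consumers can compute pay-as-you-go costs the same
# way regardless of which cloud hosts the resource.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class UsageRecord:
    resource_id: str
    metric: str          # e.g. "cpu-hours", "gb-stored"
    quantity: float
    unit_price: float

    def cost(self) -> float:
        return self.quantity * self.unit_price

class CloudMeter(ABC):
    """Common metering semantics, independent of the provider."""
    @abstractmethod
    def usage(self, resource_id: str) -> list:
        ...

class FakeProvider(CloudMeter):
    """Stand-in provider returning canned usage data for illustration."""
    def usage(self, resource_id):
        return [UsageRecord(resource_id, "cpu-hours", 120.0, 0.05)]

# The same billing code works against any provider behind the interface:
bill = sum(r.cost() for r in FakeProvider().usage("node-42"))
print(bill)  # 6.0
```

With an interface like this, the governance and financial controls mentioned above can be built once, against the standard, rather than per provider.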

Cloud – Enabling Rich User Experiences

October 10, 2009

Most successful brands create breakthrough ideas or innovations inspired by a deep understanding of consumers’ lives. Customers no longer tolerate rushed and mediocre service offerings. Instead, they demand satisfying, rich experiences. Companies that provide them will evoke an emotional bond with the brand and win customer loyalty.

It’s fashionable today to talk of becoming “customer oriented.” Customer centricity is not just a slogan; it’s a pre-requisite for substantial, profitable growth. Customer-driven innovation isn’t just a strategy. It is a rigorous process that helps companies understand who their customers are and what they care about. Customer-centric thinking focuses on developing better ways of communicating value propositions and delivering complete, satisfying experiences to customers. It takes more than good intentions or grand visions to innovate in a customer-centric way. With the emergence of information-rich societies and a wide range of options for interactivity, customers are demanding more than ever before. It is going to be ever more challenging to keep customers engaged with your brand unless you bring them into your innovation process.

Padmasree Warrior, Cisco CTO, wrote on her blog:

Much of the discussion regarding apps today focuses on the debate between pure cloud (SaaS) delivery vs. the traditional on-premise approach to apps. Our view is that we need to move beyond this conversation and focus on the user experience. Here’s what we mean by that: As users we all want an experience that’s consistent and seamless, with the ability to stay connected and have instant access to the services and functionality we need, regardless of our location or what device we happen to be using. To deliver that seamless experience we’re going to need a combination of different types of applications—some that are on-premise and others that are on-demand.

Yes, we have been spending too much time debating SaaS vs. on-premise hosting of our applications. Instead, our focus should have been on creating rich and memorable user experiences. A disciplined process of customer innovation will turn customer wishes into an enduring competitive edge, and a growing market cap. How do we enable this? How flexible and agile are our systems and processes in driving this level of interactivity with our customers? Limitations and rigidity in our systems and services are being pushed onto customers as “best practices,” and the cost of customizing these “rigid” systems far exceeds the value it offers. So we need a radical shift in our thinking. We need to bring the customer into the innovation process. This is only possible by shifting our conversation from a systems view to a customer view.

However, to fulfill this ambitious goal, we need an agile, stable, and scalable service delivery platform. In spite of all the trends and developments in technology, like SOA and Web 2.0 serving, we are still mired in IT infrastructure complexities and deeply fire-walled applications. So the next frontier of innovation will require a customer-focused, lean and optimized, utility-based, and demand-driven (CLOUD) computing infrastructure.

Though some argue that the Cloud is a new business model or an outsourced-IT model, my view is that it is both an architectural paradigm shift and an economic model, enabling optimal pricing and rapid innovation of new services without huge capital outlays. The Cloud is an architectural paradigm shift because we need to think differently about the way we build, deploy, and manage services in it. With the Cloud, we can focus on innovating to fulfill this new user-centric view instead of spending all our time and resources keeping the lights on. Current applications were designed with different assumptions. Designers and developers glued their applications tightly to an operating environment and network. They hard-wired a whole bunch of localized configurations into their applications, fused specialized ACLs into network switches, and built rings of firewalls and VLANs of hell around their applications. Many holes were punched and many controls were enforced around these applications. Moving them into the Cloud is a huge undertaking.

Over the last three years, I have studied a number of applications, from massively complex supply chain management processes to stateless web serving applications. Moving them into the Cloud involves either a complete re-write or re-engineering of data extraction, transformation, and loading, in addition to re-wiring their business processes. Many of these applications assumed local optimizations, caching, and connection pooling. It is shockingly surprising how many application secrets were buried and firewalled on those servers. Moving them off their localized, fire-walled environments to the Cloud requires architectural re-thinking.

Though many enterprises are curious to move to the Cloud, my view is that they are not ready to embrace it unless they look at their architecture and infrastructure more holistically. Virtualization is necessary but not sufficient. Extreme automation is the key. Today 76% of production outages are caused by errors in configuration or change management. So Continuous Integration combined with automated deployment should be built into the services. The Cloud is a promise; service is the fulfillment. End-to-end service is what matters to consumers and customers.

With that said, the majority of (public) Cloud adoption will be driven by emerging companies, services, and consumer-facing web companies. Meanwhile, enterprises will start to adopt the private cloud model for their enterprise applications. That will give them a fairly good opportunity to look at their applications, networking, security, and integration infrastructure more holistically. Once these applications are rewired into services with Infrastructure 2.0 thinking, they can burst their capacity needs into public clouds. I see this as a multi-year journey.

Virtualization: Solution or Problem?

September 27, 2009

Is virtualization a solution to a problem, or part of the problem?

Christofer Hoff ignited the creative spark with Virtual Machines are the Problem, Not the Solution.

In my view and experience, virtualization is part of the problem as well as part of the solution. While automation is the key to fulfilling end-to-end service delivery, virtualization is a necessary technology. However, the current architectural style of service composition, delivery, and management is mired in problems, workarounds, and band-aids, which makes SLA-driven end-to-end service delivery just a promise, not the fulfillment. We should stop dishing out nodes to development. We should stop pushing ACLs into switches. We should stop accessing OS primitives from applications. We should stop writing communication patterns into applications. A well-defined abstraction and framework on top of virtualization is essential to make this happen. We can’t ignore change, configuration, and security management. Simply put: push-button delivery of services into the Cloud, securely, reliably, and rapidly. As Christofer Hoff (@beaker) suggested on his blog Rational Survivability, JEOS is a first step in that direction.

Let me share my view on why virtualization is part of problem first and then explain why it is also important for End-to-End service delivery.

Why Is It Part of the Problem?

“Geometric complexity” of systems is a (if not the) major contributor to the costs and stability issues we face in our production environments today. In this context, complexity is introduced by the heterogeneity and variation of “OS” needs per application and underlying components (databases, network, security, etc.). These unmanageable, incomprehensible numbers of variations of the operating environment make it hard to understand and optimize our compute infrastructure. We continue to invest our scarce resources to keep this junk alive and fresh all the time. More importantly, 70% of service outages today are caused by configuration or patching errors.
Christofer Hoff (@beaker) puts it very well,

“there’s a bloated, parasitic resource-gobbling cancer inside every VM”.

I was hopeful and optimistic that this would change the way applications are designed and delivered. Rich application frameworks like J2EE, Spring, and Ruby evolved, but the operating environment evolved into one big, monolithic, generalized OS, making it impossible to track what is needed and what is not. Adding to this brew, a mind-boggling number of open source libraries and tools crept into the OS. Virtualization provided an opportunity to help us correct these sins, but under the guise of virtualization we started to commit more. Sadly, instead of wiping out the cancerous bits in the operating environment, all the junk was packaged into VMs.

Christofer Hoff (@beaker) raised a very thought-provoking and stimulating question:

“If we didn’t have resource-inefficient operating systems, handicapped applications that were incestuously hooked to them, and tons of legacy networking stuff to deal with that unholy affinity, imagine the fun we could have. Imagine how agile and flexible we could become.”

This is very true. We have too much baggage and junk inside our operating environments. That has to change. It is not a question of VMware, Xen, or Parallels, or of Linux, OpenSolaris, or FreeBSD. We need a paradigm shift in the way we architect and deliver “services”.

Sam Johnston (@samj) pointed out,

“I agree completely that the OS is like a cancer that sucks energy (e.g., resources, cycles), needs constant treatment (e.g., patches, updates, upgrades), and poses significant risk of death (e.g., catastrophic failure) to any application it hosts.”

Yes, Sam is correct in his characterization of the “malignant OS”.

Now let’s turn to why virtualization is important.

@JSchroedl @AndiMann @sureddy Sounds like we’re all in virtual agreement: Not just virtual servers, or even virtual systems, but “Services” end-to-end.

End-to-End Service Delivery: My sense of virtualization is that it provides an abstraction that absorbs all low-level variation, exposing a much simpler, homogeneous environment. While this is not sufficient to deliver the automation needed for end-to-end service delivery, it is a necessary technology. Applications and services won’t be exposed to variations in our operating environment; instead, they will be exposed to a service runtime platform (call it a “container” for lack of a better word) with uniform behavioral characteristics and interfaces. (Note that the “container” is not a VM; it is a much higher-level abstraction that orchestrates hypervisors and operating environments, isolating all the intricacies of virtualization and operations management.) We won’t need to qualify innumerable combinations of hardware, OSs, and software stacks. Instead, the container layer becomes the point of qualification on both sides: each new variation of hardware is qualified against a single container layer, and all software is qualified against that same container layer, providing a fast-lane change mechanism across development, test, staging, and production (Continuous Integration & Continuous Deployment). This is a really big deal. It helps us innovate and roll out new services much faster than before. Virtualization plays an important role in fulfilling end-to-end service delivery.
Christofer Hoff (@beaker) pointed out,

“VMs have allowed us to take the first steps towards defining, compartmentalizing, and isolating some pretty nasty problems anchored on the sins of our fathers, but they don’t do a damned thing to fix them. VMs have certainly allowed us to (literally) think outside the box about how we characterize workloads and have enabled us to begin talking about how we make them somewhat mobile, portable, interoperable, easy to describe, inventory, and in some cases more secure. Cool.”
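The “container” abstraction described above can be sketched as a thin layer that applications target instead of the hypervisor or OS beneath it. Every class and method name here is illustrative, not a real product API; the point is only the single point of qualification.

```python
# Hedged sketch of a "container" runtime layer with uniform interfaces
# that hides hypervisor variation. Names are hypothetical illustrations.
from abc import ABC, abstractmethod

class Hypervisor(ABC):
    @abstractmethod
    def boot_image(self, image: str) -> str:
        ...

class Xen(Hypervisor):
    def boot_image(self, image):
        return f"xen-vm:{image}"

class VMware(Hypervisor):
    def boot_image(self, image):
        return f"vmw-vm:{image}"

class Container:
    """Single point of qualification: services target this layer,
    never the hypervisor or OS directly."""
    def __init__(self, hv: Hypervisor):
        self._hv = hv

    def deploy(self, service_image: str) -> str:
        # The same call works regardless of the underlying substrate.
        return self._hv.boot_image(service_image)

# A service qualified once against Container runs on any substrate:
for hv in (Xen(), VMware()):
    print(Container(hv).deploy("billing-svc-1.2"))
```

Each new hypervisor is qualified against the one container layer, and each service is qualified once against the same layer, which is what shortens the dev-to-production lane.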

Configurations vs. Customizations: Virtualization also absorbs variations in the configurations of physical machines. With virtualization, applications can be written around their own long-lasting “sweet spots” of service configurations that are synthesized and maintained by the container.

Homogeneity: The homogeneity afforded by virtualization extends to the entire software development lifecycle. By using a uniform, virtualized serving infrastructure throughout the entire process, from development through QA all the way to deployment, we can significantly accelerate innovation, eliminate complexity, and reduce or eliminate the incidents that inevitably arise when dev and QA environments differ from production.

Mobility: Software mobility, the ability to easily move software from one machine to another, will greatly relax our SLAs for break-fix (because the software from a broken node can automatically be brought up on a working node), and that in turn reduces the need to physically move machines (because we can move the software instead).

Security Forensics: When an app host is to be decommissioned, virtualization presents the opportunity to archive the state of the host for security forensics, and to securely wipe the data from the decommissioned host using a simple, secure file-wipe rather than a specialized, hard-to-verify bootstrap process. In sum, VMMs provide a uniform, reliable, and performant API from which we can drive automation of the entire host life cycle.

Horizontal Scalability: Virtualization drives another very interesting and compelling architectural paradigm shift. In the world of SOA and global serving with unpredictable workloads, we are better off running a service tier (my view of a tier is a load-balanced cluster of elastic nodes) across a larger number of smaller nodes rather than a smaller number of larger nodes. A large number of smaller nodes provides cost advantages as well as horizontal scalability. In addition, with a larger number of smaller nodes, when a node goes out, the remaining nodes can more easily absorb the resulting spike in workload, and new nodes can be added or removed in response to workloads.
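The many-small-nodes argument can be checked with back-of-the-envelope arithmetic: when one node out of n fails in a load-balanced tier, each survivor's share of the load grows by a factor of n/(n-1).

```python
# Illustrative arithmetic for the failure-absorption argument above:
# survivors of a single-node failure each absorb 1/(n-1) of the lost share.
def load_increase_after_failure(n_nodes: int) -> float:
    """Fractional per-node load increase when 1 of n_nodes fails."""
    return n_nodes / (n_nodes - 1) - 1.0

# Few large nodes: each survivor's load jumps ~33%.
print(round(load_increase_after_failure(4) * 100))   # 33
# Many small nodes: each survivor's load rises only ~5%.
print(round(load_increase_after_failure(20) * 100))  # 5
```

The smaller the nodes, the gentler the spike each survivor absorbs, which is why elastic tiers of many small nodes ride out failures more gracefully.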

Eliminate Complex Parallelism: My experience with symmetric multi-processing (SMP) systems has shown that effectively scaling software beyond a few cores requires specialized design and programming skills to avoid contention and other bottlenecks to parallelism. Throwing more cores at our software does not improve performance. It is hard to build the specialized skills needed for well-tuned SMP software, and this is becoming a great inhibitor to innovation in building scalable services. By slicing large physical servers into smaller virtual machines, we can deliver more value from our investment.

Cloud and Virtualization

@JSchroedl: PRT @AndiMann: HV = no more than hammers PRT @sureddy: Virt servers don’t matter.Cloud is a promise “Service” is what counts

The Cloud is a promise, and service is the fulfillment. The goal of the cloud is to introduce an orders-of-magnitude increase in the amount of automation in the IT environment, and to leverage that automation to introduce an orders-of-magnitude reduction in our time to respond. If a machine goes down (I should stop referring to machines; instead, I should start emphasizing SLAs), automatically move its workload to a replacement, within seconds. If load on a service spikes or SLAs deviate from the expected mean, auto-magically increase the capacity of that service, again within seconds.
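That SLA-driven automation can be pictured as a simple control loop: measure the SLA signal, compare it to the target, and adjust capacity. The thresholds, scaling step, and function names below are hypothetical choices for illustration, not a real orchestrator's policy.

```python
# Toy control loop for SLA-driven capacity management: one reconcile pass
# decides the desired node count from an observed latency. All thresholds
# (20% SLA band, 25% scale-out step) are illustrative assumptions.
def reconcile(target_ms: float, observed_ms: float, nodes: int) -> int:
    """Return the desired node count for one control iteration."""
    if observed_ms > target_ms * 1.2:                 # SLA deviating: scale out
        return nodes + max(1, nodes // 4)
    if observed_ms < target_ms * 0.5 and nodes > 1:   # over-provisioned: scale in
        return nodes - 1
    return nodes                                      # within band: no change

print(reconcile(100.0, 150.0, 8))  # 10 -> scale out
print(reconcile(100.0, 40.0, 8))   # 7  -> scale in
print(reconcile(100.0, 100.0, 8))  # 8  -> steady state
```

A real system would run such a loop every few seconds against live metrics, which is exactly the "within seconds" responsiveness described above.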

Hypervisors (virtualization) are as necessary as hammers, but not sufficient. What is needed is end-to-end service delivery. There is no doubt in my mind that IT is strategic to the business, and if properly aligned with business goals, IT can indeed create huge value. Automation and end-to-end service delivery are the key drivers for transforming current IT into a more agile and responsive IT.

Physical machines do not provide this level of automation. Neither do bloated VMs containing cancerous OS images. What we need is a clean separation of the base operating system (uniform across the cloud), platform-specific components and bundles, and then application components and configurations. While it is impossible to rip and replace existing IT infrastructure, this layered approach would help us gradually move toward a more agile service delivery environment.