Data Virtualization – The Quest for Agility

In the quest for agility, let’s not sweep productivity under the carpet

Yes, data virtualization is definitely about agility. I covered this from different angles in my previous article. Agility in this context is about accelerating the time-to-delivery of new critical data and reports that the business needs and trusts. It is also about an architecture that absorbs changes in underlying data sources without impacting the consuming applications. And it's about doing data virtualization the right way, so you can deliver new critical data and reports in days rather than months.

However, what about productivity? A discussion of agility cannot and must not exclude productivity; the two go hand in hand. Whether you measure it as the rate at which goods are produced or work is completed, or as the efficiency of production, productivity is about doing something well and effectively. That calls for a deeper conversation about productivity, both as a capability and as a benefit. Yet it looks like somebody forgot to talk about it - or did they?

According to the 2011 TDWI BI Benchmark Report - "By a sizable margin, development and testing is the predominant activity for BI/DW teams, showing that initiatives to build and expand BI systems are under way across the industry. Nearly 53 percent of a team's time is spent on development and testing, followed by maintenance and change management at 26 percent." If a majority of time is spent in development and testing, and if BI agility is the end goal, isn't increased productivity absolutely critical?

I think Heraclitus of Ephesus was talking about productivity in the context of data virtualization when he said, "a hidden connection is stronger than an obvious one." There is definitely a hidden connection here. Yet productivity often gets swept under the carpet - not because it isn't critical, but because data virtualization built on data federation has its roots in SQL or XQuery. As we know, with manual coding, limited reuse, and unnecessary work, productivity simply goes out the window.

Let's see how and where productivity fits in a data virtualization project. Such a project involves a discrete set of steps, starting from defining the data model to deploying the optimized solution. Like a piece of art, each step can be approached differently - either painfully, or by applying best practices that support productivity. Here's the typical life cycle of a data virtualization project, as prescribed by industry architects. By questioning how each step is undertaken, we can understand the impact on productivity:

  1. Model - define and represent back-end data sources as common business entities
  2. Access and Merge - federate data in real-time across several heterogeneous data sources
  3. Profile - analyze and identify issues in the federated data
  4. Transform - apply advanced transformations including data quality to federated data
  5. Reuse - rapidly and seamlessly repurpose the same virtual view for any application
  6. Move or Federate - reuse the virtual view for batch use cases
  7. Scale and Perform - leverage optimizations, caching and patterns (e.g., replication, ETL, etc.)

Let's start with Model. The questions to ask here are: is there an existing data model? If yes, can you easily import it? That means the solution must be able to connect to and pull in a model from any of the numerous modeling tools in existence. If not - if you need to start from scratch - you must be able to jump-start the creation of a logical data model with a metadata-driven, graphical approach. Ideally, this should be a business-user-friendly experience, as the business user knows the data best. Correct?
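
To make the idea concrete, here is a minimal, hypothetical sketch - not any vendor's API - of what "representing back-end sources as common business entities" amounts to: a business-facing entity whose attributes are mapped onto physical source columns. The systems and column names are invented for illustration.

```python
# Illustrative sketch only: a hand-rolled way to describe a common business
# entity and map it onto physical source fields. Real data virtualization
# tools do this graphically and metadata-driven; all names here are hypothetical.
from dataclasses import dataclass, field


@dataclass
class AttributeMapping:
    business_name: str   # name the business user sees, e.g. "customer_name"
    source_system: str   # which back-end system holds the data
    source_column: str   # physical column in that system


@dataclass
class BusinessEntity:
    name: str
    mappings: list[AttributeMapping] = field(default_factory=list)


# A "Customer" entity stitched together from two hypothetical sources.
customer = BusinessEntity(
    name="Customer",
    mappings=[
        AttributeMapping("customer_id", "crm_db", "cust_id"),
        AttributeMapping("customer_name", "crm_db", "full_name"),
        AttributeMapping("lifetime_value", "warehouse", "ltv_usd"),
    ],
)

for m in customer.mappings:
    print(f"{customer.name}.{m.business_name} <- {m.source_system}.{m.source_column}")
```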

Next, let's consider Access and Merge. Yes, this is the domain of data federation: making many data sources look like one. However, in my discussion about why a one-trick pony won't cut it, I mentioned a recent blog by Forrester Research, Inc. It states that traditional BI approaches often fall short because BI hasn't fully empowered information workers, who still largely depend on IT. So ask the question: can business users also access and merge diverse data directly, without help from IT?
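
As a rough illustration of "making many sources look like one," the sketch below joins a relational table and a flat file into a single result at query time, using plain Python and pandas. A real federation engine would push queries down to the sources rather than pull rows into memory, and the sources and columns here are made up.

```python
# Minimal sketch of federation: join a relational table and a flat file
# into one result set on demand. Illustrative stand-ins only.
import io
import sqlite3

import pandas as pd

# Source 1: a relational system (stand-in: in-memory SQLite).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (cust_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 120.0), (2, 75.5), (1, 30.0)])
orders = pd.read_sql_query("SELECT cust_id, amount FROM orders", conn)

# Source 2: a flat file (stand-in: CSV text).
csv_text = "cust_id,full_name\n1,Alice Adams\n2,Bob Brown\n"
customers = pd.read_csv(io.StringIO(csv_text))

# The "virtual view": one merged, business-friendly result set.
virtual_view = customers.merge(orders, on="cust_id", how="left")
print(virtual_view)
```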

After you federate data from several heterogeneous data sources, it's all over, right? Wrong! That's where data virtualization built on data federation ends. To do data virtualization the right way, the show must go on, logically speaking. The same business user (not IT) must now be able to analyze not just the data sources but also the integration logic - be it the source, the inline transformations, or the virtual target. That is profiling of federated data - which must mean no staging and no additional processing.
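
For illustration only, here is what profiling a federated result set in place might look like in plain pandas - counting nulls and distinct values and flagging inconsistent duplicates - assuming the federated rows are already available as a DataFrame. A real tool would surface this analysis graphically, without staging; the columns and values are hypothetical.

```python
# Rough profiling sketch over a federated result set (no staging):
# nulls, distinct counts, data types, and obvious inconsistencies.
import pandas as pd

virtual_view = pd.DataFrame(
    {
        "cust_id": [1, 2, 2, None],
        "full_name": ["Alice Adams", "bob brown", "Bob Brown", "Carol"],
        "amount": [120.0, 75.5, None, 30.0],
    }
)

profile = pd.DataFrame(
    {
        "nulls": virtual_view.isna().sum(),
        "distinct": virtual_view.nunique(),
        "dtype": virtual_view.dtypes.astype(str),
    }
)
print(profile)

# Flag likely duplicates caused by inconsistent casing in the source data.
dupes = virtual_view.assign(name_norm=virtual_view["full_name"].str.lower())
print(dupes[dupes.duplicated("name_norm", keep=False)])
```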

What follows is the logical next step once the business user uncovers inconsistencies and inaccuracies: applying advanced transformation logic - including data quality and data masking - to the federated data while it is in flight. This is where you must ask whether there are pre-built libraries you can easily drop onto your canvas. Having them means not having to hand-code such logic, which would obviously limit any reuse.
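
Purely as a sketch, and not a vendor library, the snippet below models in-flight data quality and masking as small reusable functions applied to a federated result: standardizing names and hashing identifiers. The column names and rules are hypothetical.

```python
# Illustrative in-flight quality and masking rules as reusable functions.
import hashlib

import pandas as pd


def standardize_name(s: pd.Series) -> pd.Series:
    """Trim whitespace and title-case person names."""
    return s.str.strip().str.title()


def mask_id(s: pd.Series) -> pd.Series:
    """Replace identifiers with a short, irreversible hash for downstream use."""
    return s.astype(str).map(lambda v: hashlib.sha256(v.encode()).hexdigest()[:10])


federated = pd.DataFrame(
    {"cust_id": [101, 102], "full_name": ["  alice adams ", "BOB BROWN"]}
)

cleansed = federated.assign(
    full_name=standardize_name(federated["full_name"]),
    cust_id=mask_id(federated["cust_id"]),
)
print(cleansed)
```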

You have defined a data model, federated data, profiled it in real time, and applied advanced transformations on the fly - all without IT. Yes, get some technical help to develop new transformations if required. But look to do this graphically, in a metadata-driven way, where business and IT users collaborate instantly with role-based tools. Ask if you can take a virtual view and reuse it with just a few clicks. Check whether you could work with a few reusable objects instead of thousands of lines of SQL code.
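
The reuse point can be sketched under one assumption: that a virtual view is a named, callable object. Define it once and let several consumers - a report and an application, in this hypothetical example - draw from it, instead of each one carrying its own SQL.

```python
# Sketch of one reusable "virtual view" object serving several consumers.
from typing import Callable

import pandas as pd


def customer_360_view() -> pd.DataFrame:
    # In a real deployment this would federate live sources at call time.
    return pd.DataFrame(
        {"cust_id": [1, 2], "full_name": ["Alice Adams", "Bob Brown"], "ltv": [150.0, 75.5]}
    )


# Consumer 1: a BI report wants an aggregate.
def report(view: Callable[[], pd.DataFrame]) -> float:
    return view()["ltv"].sum()


# Consumer 2: an application wants row-level records.
def api_payload(view: Callable[[], pd.DataFrame]) -> list[dict]:
    return view().to_dict(orient="records")


print(report(customer_360_view))
print(api_payload(customer_360_view))
```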

We have discussed the steps involved in virtual data integration. Yes, there is no physical data movement, and yes, it's magical. However, ask what you can do if you need to persist data for compliance reasons. Or ask for the flexibility to also process in batch when data volumes go beyond what even an optimized and scalable data virtualization solution can handle. Don't just add to the problem with yet another tool; ask if you can toggle between virtual and physical modes with the same solution.
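
Here is one way to picture the virtual-versus-physical toggle, as a hedged sketch rather than any product's behavior: the same logical view can either be computed on demand or persisted to a table for compliance or batch use. SQLite stands in for the persistence target, and the view is made up.

```python
# Sketch of serving one logical view in either virtual or physical mode.
import sqlite3

import pandas as pd


def customer_view() -> pd.DataFrame:
    # Stand-in for a federated query across live sources.
    return pd.DataFrame({"cust_id": [1, 2], "ltv": [150.0, 75.5]})


def serve(mode: str = "virtual") -> pd.DataFrame:
    if mode == "virtual":
        return customer_view()                      # compute at request time
    if mode == "physical":
        conn = sqlite3.connect(":memory:")          # would be a real warehouse
        customer_view().to_sql("customer_snapshot", conn, index=False)
        return pd.read_sql_query("SELECT * FROM customer_snapshot", conn)
    raise ValueError(f"unknown mode: {mode}")


print(serve("virtual"))
print(serve("physical"))
```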

Finally, as you prepare to deploy the virtual view, a fair question is about performance. After all, we are talking about operating in a virtual mode. Ask about optimizations and get the skinny on caching. Try to go deep and find out how the engine has been built. Is it based on a proven, mature, and scalable data integration platform, or on one that only does data federation, with all its limitations? Also, don't forget to ask whether the same solution leverages change data capture and replication patterns.
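
Caching, at its simplest, can be sketched like this: serve repeated requests for a virtual view from a time-bounded cache instead of re-federating every time. The TTL and view are assumptions for illustration; real engines layer query optimization, pushdown, and change-data-capture-driven refresh on top of this basic idea.

```python
# Minimal result-caching sketch for a virtual view (illustrative only).
import time

import pandas as pd

_CACHE: dict[str, tuple[float, pd.DataFrame]] = {}
TTL_SECONDS = 300  # assumed cache lifetime


def federated_query() -> pd.DataFrame:
    # Stand-in for an expensive cross-source federation.
    return pd.DataFrame({"cust_id": [1, 2], "ltv": [150.0, 75.5]})


def cached_view(key: str = "customer_360") -> pd.DataFrame:
    now = time.time()
    hit = _CACHE.get(key)
    if hit and now - hit[0] < TTL_SECONDS:
        return hit[1]                 # cache hit: skip the sources entirely
    result = federated_query()
    _CACHE[key] = (now, result)
    return result


print(cached_view())   # first call federates
print(cached_view())   # second call is served from the cache
```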

If the solution can support the entire data virtualization life cycle discussed above, you are probably within reach of utopia - productivity and agility. The trick, however, is to ask the tough questions, as each step not only shaves many weeks off the process but also helps users become more efficient. Ah, this is beginning to sound like cutting the wait and waste in a process, as discussed in detail in the book Lean Integration. We've come full circle. But I think we just might have successfully rescued productivity from being swept under the carpet.

•   •   •

Don't forget to join me at Informatica World 2012, May 15-18 in Las Vegas, to learn the tips, tricks and best practices for using the Informatica Platform to maximize your return on big data, and get the scoop on the R&D innovations in our next release, Informatica 9.5. For more information and to register, visit www.informaticaworld.com.

More Stories By Ash Parikh

Ash Parikh is responsible for driving Informatica’s product strategy around real-time data integration and SOA. He has over 17 years of industry experience in driving product innovation and strategy at technology leaders such as Raining Data, Iopsis Software, BEA, Sun and PeopleSoft. Ash is a well-published industry expert in the field of SOA and distributed computing and is a regular presenter at leading industry technology events like XMLConference, OASIS Symposium, Delphi, AJAXWorld, and JavaOne. He has authored several technical articles in leading journals including DMReview, AlignJournal, XML Journal, JavaWorld, JavaPro, Web Services Journal, and ADT Magazine. He is the co-chair of the SDForum Web services SIG.
