Tuesday, December 19, 2017

The Mysteries of Data Transformation

In the beginning, long ago and far away, I thought all the data integration products had embedded Transformation Engines. After all, the biggest challenge, really, is making disparate data make sense together and align appropriately, so that the data from the sources is meaningful to the target consumer.

Today, data transformation and manipulation are even more critical for Data Virtualization than for ETL, since you have to get it right in one pass. ETL has the dubious “luxury” of adding however many steps and copies along the way are needed to ease the pain; with Data Virtualization, it’s imperative to make sure that everything is in good order for cleaning, aligning, filtering, and federating before you present the virtual model for querying.

Chances are you are spending a lot of time and energy on the dirty details of managing messy data with your current integration or data virtualization product. That's because I was wrong: not every integration platform has a transformation engine.


What is Data Virtualization without a Transformation Engine?
Think about it a bit. Without a legitimate transformation engine, Data Virtualization can only work in a perfect world, where data has already been cleaned and where data naturally aligns without manipulation... Maybe you can get away with format differences.
OK, so if the data has already been cleaned, you are not actually getting the data from the source, right? And, isn’t it then carrying the latency of all that housekeeping? Isn’t that counter to what DV is all about?

Of course, there are times when the best overall solution is, in fact, to prepare a clean copy of the data set and query against it. Often an ODS (Operational Data Store) is the best source to use exactly because proven cleansing algorithms already are in place. Enterprise Enabler is the only integration platform with Agile ETL™, and it can actually do the cleaning as well as the Data Virtualization, thanks to its robust embedded Transformation Engine!

Enterprise Enabler® (EE) Transformation Engine is the Great Orchestrator
Recently, I’ve been thinking a lot about our Transformation Engine, and I’ve come to believe that it may be the single most important asset of Enterprise Enabler. When we introduce the architecture and components of the EE platform, we tend to take it for granted, unwittingly doing a disservice to the Transformation Engine with a simple one-liner. In fact, the Transformation Engine (TE) is the heart and brain of all the logic and run-time processing of data throughout Data Virtualization, Agile ETL™, and all modalities of integration. We describe it as the conductor, orchestrating and issuing instructions as configured in the metadata.





T.E.: “Hey, SAP AppComm, bring me the data from TemplateA. Merci! Now, Salesforce AppComm, get the data from TemplateB. Next, let’s apply the federation and validation rules to each data set and package it as a federated, queryable data model. Oh, and while you’re at it, send that data directly, physically, to the Data Warehouse. Voila!”

Obviously, this is a simplification, and I may not have gotten the accent quite right, but that EE Transformation Engine is one smart cookie that outperforms the alternative solutions.
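To make the conductor metaphor a bit more concrete, here is a minimal, purely illustrative Python sketch of the kind of hand-off the Transformation Engine orchestrates. The class and function names (AppComm, validate, federate, TemplateA/TemplateB) are hypothetical stand-ins for the idea, not Enterprise Enabler's actual API.

```python
# Illustrative sketch only: hypothetical stand-ins for the kind of steps the
# Transformation Engine orchestrates (not Enterprise Enabler's actual API).

class AppComm:
    """A connector that knows how to fetch data for a given template."""
    def __init__(self, name, rows):
        self.name = name
        self._rows = rows

    def fetch(self, template):
        print(f"{self.name}: fetching data for {template}")
        return list(self._rows)


def validate(rows, required_fields):
    """Drop records that are missing required fields."""
    return [r for r in rows if all(r.get(f) is not None for f in required_fields)]


def federate(left, right, key):
    """Join two data sets on a shared key into one queryable model."""
    right_by_key = {r[key]: r for r in right}
    return [{**l, **right_by_key[l[key]]} for l in left if l[key] in right_by_key]


if __name__ == "__main__":
    sap = AppComm("SAP AppComm", [{"customer_id": 1, "credit_limit": 50000}])
    sfdc = AppComm("Salesforce AppComm", [{"customer_id": 1, "owner": "Alice"}])

    # The "conductor": fetch from each source, validate, then federate.
    sap_rows = validate(sap.fetch("TemplateA"), ["customer_id"])
    sfdc_rows = validate(sfdc.fetch("TemplateB"), ["customer_id"])
    model = federate(sap_rows, sfdc_rows, key="customer_id")
    print(model)  # federated, queryable data set ready to deliver or persist
```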




Just forget the Legacy Transformation Engines
The old-fashioned “Rube Goldberg” process found in traditional ETL products:
  • Extract a data set from one source and put it in a data store.
  • Write custom code to clean and align the data and post it to a database
  • Repeat with each source…
  • Invoke many separate specialized utilities, mostly limited to format conversion

You can see that this legacy approach certainly cannot adapt to Data Virtualization, which must reach live directly into the sources and federate them en route.

What’s different about Enterprise Enabler’s Transformation Engine?
First, a couple of relevant aspects of the Enterprise Enabler platform. EE is 100% metadata driven; you never need to leave the Integrated Development Environment. It is fully extensible to incorporate business rules, cleansing rules, formulas, and processes. It also means that every object is reusable and you can make modifications in a matter of minutes, or even seconds. EE’s single platform handles Data Virtualization, Agile ETL, EAI, ESB, and any hybrid or complex integration pattern. Data workflow orchestration and a composite application designer round out the platform. This framework means that there is a global awareness during execution that enables very complex logic and processing based on the state of any aspect of the system.
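To give a feel for what “100% metadata driven” means in practice, here is a small, hypothetical sketch of how an integration might be described as declarative metadata rather than code. The field names and structure are invented for illustration; they are not EE's actual metadata schema.

```python
# Hypothetical illustration of a metadata-driven integration definition.
# The structure and field names are invented; they are not EE's metadata schema.

integration_metadata = {
    "sources": [
        {"name": "ERP_Orders", "connector": "SAP", "template": "OrderTemplate"},
        {"name": "CRM_Accounts", "connector": "Salesforce", "template": "AccountTemplate"},
    ],
    "federation": {"join_on": "customer_id"},
    "rules": [
        {"field": "order_total", "validate": "value >= 0"},
        {"field": "currency", "transform": "upper"},
    ],
    "destination": {"type": "virtual_model", "name": "CustomerOrders"},
}


def describe(meta):
    """Print a human-readable summary of the configured integration."""
    srcs = ", ".join(s["name"] for s in meta["sources"])
    print(f"Federate {srcs} on '{meta['federation']['join_on']}' "
          f"into {meta['destination']['name']} ({meta['destination']['type']})")


describe(integration_metadata)
```

Because the whole definition is data rather than code, changing a rule or swapping a source is an edit to the metadata, which is what makes minutes-level modifications plausible.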


Some of the capabilities of the Enterprise Enabler Transformation Engine:



The Bottom Line
EE’s Transformation Engine streamlines and ensures end-to-end continuity in configuring and processing all data integration patterns, including Agile ETL and virtual data models, providing:
  • Shorter time to value
  • Improved data quality
  • Rapid configuration
  • Re-usability that eliminates hand coding

It truly is the heart and brain of Enterprise Enabler. To learn more, make sure to check out our Transformation Engine whitepaper (here).

Wednesday, August 9, 2017

Time to Replace and Rip? Yes!

Until recently, the concept of Rip and Replace always carried a terrific fear component. My mother would rather have heard a litany of curses than hear “Rip and Replace” anywhere in sight of her data center. But, alas! Times change.

Informatica, IBM, Tibco, and others have gone the way of punched cards, Cobol, and Fortran. Data Warehouses have served well for a couple of decades now, but the overhead and slowness continue to build up tech debt as tech teams fail to keep up with the requisite pace of business. Some businesses will keep trying to cajole their ancient software to mimic today’s technologies as they plod forward trying to remain competitive. They won’t succeed though. You just can’t squeeze agility out of a pipe wrench.


I’m convinced that if my mother had met Data Virtualization, for instance, before she walked out the door, she would have been the first to jump in. She always embraced new ideas, but she also exercised a pragmatic skepticism.

Well Ma, it’s time. All the smart companies are doing it. Not Rip and replace, really. It’s more like Replace and Rip.

What I’m talking about is an orderly modernization path that surprisingly quickly replaces those ancient approaches to data integration that businesses put so much effort into, not to mention money. Huge teams still are spending years integrating across multiple systems, and the cost of every small modification could feed an army. It’s time to get serious about this relatively new Data Virtualization (DV) paradigm. If you don’t know about DV, better wake up and check it out. And while you’re at it, take a look at Agile ETL™. The two together will take you quickly from what Gartner calls your “Mode 1” clunky IT infrastructure to a “Mode 2,” embracing mobile, IoT, Cloud/hybrid, and all manner of digital.


Here’s a quick overview of Data Virtualization, sometimes referred to as a "Logical Data Warehouse": Instead of gathering data physically into a staging database or warehouse, a virtual data model is defined, and all of the participating data sources are logically aligned with transformations, validations, and business rules. The virtual models are packaged as ODBC, JDBC, OData, and other services. When the virtual data model is queried, the DV layer reaches out live to the sources, applies all the configured rules, resolves the query, and delivers the data to the calling program. Many companies are getting familiar with DV by leveraging it for their latest wave of Business Intelligence and Analytics.
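To the consumer, querying a virtual data model looks just like querying a database. The sketch below is a rough illustration only: the DSN, model, and column names are hypothetical placeholders, and pyodbc is simply one common way to reach an ODBC service from Python.

```python
# Minimal sketch: a BI tool or script querying a virtual data model over ODBC.
# The DSN, model, and column names below are hypothetical placeholders.
import pyodbc

conn = pyodbc.connect("DSN=VirtualDataModels")  # ODBC service exposed by the DV layer
cursor = conn.cursor()

# The query is resolved live against the underlying sources at execution time.
cursor.execute(
    "SELECT customer_name, region, SUM(order_total) AS total "
    "FROM CustomerOrders "
    "WHERE order_date >= ? "
    "GROUP BY customer_name, region",
    "2017-01-01",
)
for row in cursor.fetchall():
    print(row.customer_name, row.region, row.total)
conn.close()
```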

Here is a quick overview of Agile ETL: there is finally a technology that significantly streamlines ETL, by leveraging the same type of federation used in DV for moving data physically to another application or database. Stone Bond’s Enterprise Enabler® (EE) supports rapid configuration of the federation, validations, and business rules, which are executed live across all the sources, and delivers the data in the exact form required by the destination, without any staging. Just think about the amount of infrastructure you can eliminate. Ma would be all over it!
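The sketch below is a hand-rolled, conceptual stand-in for that one-pass pattern: read from two live sources, federate and transform in memory, and write straight to the destination with no staging tables. The file names, columns, and SQLite destination are invented for the example; it is not EE code.

```python
# Conceptual sketch of "federate live, deliver without staging".
# File names, columns, and the SQLite destination are invented for illustration.
import csv
import sqlite3

def read_csv(path):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

# 1. Pull from the live sources (two CSV extracts stand in for ERP/CRM systems).
orders = read_csv("erp_orders.csv")        # columns: customer_id, order_total
accounts = read_csv("crm_accounts.csv")    # columns: customer_id, account_owner

# 2. Federate and transform in memory; no intermediate staging database.
owners = {a["customer_id"]: a["account_owner"] for a in accounts}
federated = [
    {"customer_id": o["customer_id"],
     "order_total": float(o["order_total"]),
     "account_owner": owners.get(o["customer_id"], "unassigned")}
    for o in orders
]

# 3. Deliver directly in the shape the destination expects.
dest = sqlite3.connect("warehouse.db")
dest.execute("CREATE TABLE IF NOT EXISTS customer_orders "
             "(customer_id TEXT, order_total REAL, account_owner TEXT)")
dest.executemany("INSERT INTO customer_orders VALUES (?, ?, ?)",
                 [(r["customer_id"], r["order_total"], r["account_owner"]) for r in federated])
dest.commit()
dest.close()
```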

So, there you are:  Rip out the old and Slip in the new. Or rather, Slip in the new and Rip out the old.


Friday, June 9, 2017

Agile MDM - 1 Myth & 2 Truths

MDM using Data Virtualization

If there’s anything that can benefit from agility, it is most certainly Master Data Management. Untold numbers of MDM projects have flat-out failed over the last 15 years. Why? Largely because any dynamic corporation is constantly changing, and with change comes the demand for Master Data to reflect the current reality immediately. That’s not possible with legacy methodologies and tools.

With the advent of Data Virtualization, “Enterprise Master Services” or “KPIs” are always fresh and accurate (with the most recent information). This approach significantly reduces the number of copies of data, thereby reducing the chance of discrepancies across instances of data. Data remains in the original sources of record and is accessed as needed on demand for Portals, BI, Reporting, and Integration.

Furthermore, it is not really necessary to define an "everything to everybody" Master definition. Think about it more like an organic approach, growing and changing the models, creating new versions for specific use cases or constituents. The key there is that Enterprise Enabler® (EE) tags every object with notes and keywords as well as the exact lineage, so that a search will find the best fit for the use.

Doesn’t Data Virtualization mean you’re getting a Streaming Data Point?

No, it does not; this is the myth. I often hear the following concern: “If I want to get the KPI, I don’t want just the current live value; I want last month’s value or even some specific range of days.” The answer is that a Data Master is actually a virtual data model defined as a set of metadata that indicates all of the sources and all of the security, validation, federation, and transformation logic. When the virtual model is queried, Enterprise Enabler® reaches out to the endpoints, federates them, applies all the other logic, resolves the query, and returns the latest results. So the data set returned depends on the query. In other words, a Master Data Service/Model resolves the query, retrieves data live from the sources of record, and delivers the latest data available along with whatever historical data was requested.
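In other words, the same Master model answers both “give me the current value” and “give me last month”; the time window simply comes from the query. Here is a minimal sketch, assuming the Master is exposed as an OData endpoint; the service URL, entity, and field names are hypothetical placeholders.

```python
# Sketch: asking a Master Data model for a specific date range via OData.
# The service URL, entity name, and fields are hypothetical placeholders.
import requests

base_url = "https://dv.example.com/odata/CustomerKPI"   # hypothetical DV endpoint
params = {
    # The OData $filter pushes the date range down to the virtual model,
    # which resolves it live against the sources of record.
    "$filter": "snapshot_date ge 2017-05-01 and snapshot_date le 2017-05-31",
    "$select": "customer_id,kpi_value,snapshot_date",
    "$orderby": "snapshot_date",
}

response = requests.get(base_url, params=params, timeout=30)
response.raise_for_status()
for record in response.json().get("value", []):
    print(record["customer_id"], record["snapshot_date"], record["kpi_value"])
```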

In the case where the model consists of real-time streaming data, of course, you are interested in the live values as they are generated. These models still apply business logic, Federation, and such, and you have some way to consume the streaming data, perhaps continuous updates to a dynamic dashboard. However,  that’s not what makes MDM Agile.

The Challenge of Change

The more dynamic your business, the more important agility becomes in Master Data Management. Applications change, new data sources come along, processes and applications move to cloud versions. Companies are acquired, and critical business decisions are made that impact operations and the shape of business processes. All of these changes could mean updates need to be applied to your Master Data Definitions. The truth is, with legacy MDM methodologies, the definition, programming, and approvals are measured in months, all while impeding the progress and alignment of new business processes.

What’s the “Agile” part of Enterprise Enabler's MDM?

Agile MDM is a combination of rapidly configuring metadata-based data Masters, efficiently documenting them, “sanctioning” them and making them available to authorized users. Ongoing from there, it is a matter of being able to modify data masters in minutes with versioning, and moving to the corrected or updated service/model. It’s also about storing physical Master data sets only when there is a true need for them.

Ready for the second truth? When you use an Agile Data Virtualization technology such as Stone Bond’s Enterprise Enabler®, along with proper use of its data validation and MDM processes for identifying, configuring, testing, and sanctioning Data Masters, you are applying agile technology, and managed agile best practices, to ensure a stable, but flexible, MDM operation. Enterprise Enabler offers the full range of MDM activities in a single platform.

The diagram below shows the basic process for Agile MDM that is built into Enterprise Enabler; a simplified sketch of the flow follows the steps below.


Step 1.  A programmer or DBA configures a Data Master as defined by the designated business person.

Step 2. The Data Steward views lineage and authorization, tests, augments notes, and sanctions the model as an official Master Data Definition.

Step 3. The approved Data Master is published to the company MDM portal for general usage.
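As a rough illustration of the three steps above, the sketch below models the sanctioning flow as simple state transitions. The states and role checks are a simplification invented for the example, not EE's actual workflow implementation.

```python
# Simplified illustration of the Agile MDM sanctioning flow described above.
# States and role checks are invented for the example, not EE's actual workflow.

ALLOWED = {
    ("draft", "sanction"): ("sanctioned", "data_steward"),
    ("sanctioned", "publish"): ("published", "data_steward"),
}

class DataMaster:
    def __init__(self, name, configured_by):
        self.name = name
        self.state = "draft"          # Step 1: configured by a programmer or DBA
        self.history = [f"configured by {configured_by}"]

    def transition(self, action, role):
        new_state, required_role = ALLOWED.get((self.state, action), (None, None))
        if new_state is None or role != required_role:
            raise PermissionError(f"{role} cannot {action} a {self.state} master")
        self.state = new_state
        self.history.append(f"{action} by {role}")

master = DataMaster("CustomerMaster", configured_by="dba")
master.transition("sanction", "data_steward")   # Step 2: steward reviews and sanctions
master.transition("publish", "data_steward")    # Step 3: published to the MDM portal
print(master.state, master.history)
```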

Thursday, April 6, 2017

5 Reasons you should leverage EE BigDataNOW™ for your Big Data


Big Data has been sweeping the BI and Analytics world for a while now. It’s touted as the better alternative to Data Warehousing for Business Intelligence (BI) and Analytics (BA) projects. It has removed hardware limitations on storage and data processing, not to mention broken the barriers of rigid schema and query definitions. All of these advancements have propelled the industry forward.

Literally, you can dump any data in any format into Hadoop and start building analytics on the records. We mean any data, whether it’s a file, table, or object, in any schema.



1. EE BigDataNOW™ will organize your Big Data repositories no matter the source

OK, so everything is good until you realize all your data is sitting in your Hadoop clusters or Data Lakes with no way out. How are you supposed to understand or access your data? Can you even trust the data that is in there? How can you ensure everyone who needs access has a secure way of retrieving the data? How do you know if the data is easy to explore and understand for the average user?
Most importantly, how do you start exposing your Big Data store with APIs that are easy to use and create? These are some of the questions you face when you want to make sense of your Big Data repositories.

Stone Bond’s EE BigDataNOW™ helps you complete the “last mile” of your Big Data journey. Whether your repositories sit in a Data Lake, in the cloud, or on-premise, EE helps you organize them and make sense of all the data for your end users. Users can browse the data with ease and expose it through APIs. EE BigDataNOW™ lets you organize the chaos and madness left behind by whoever loaded the data.

2. Everyone is viewing and referencing the same data

For easy access to the data, Stone Bond provides a Data Virtualization layer for your Big Data repository that organizes the data into logical models and APIs. It gives administrators a mechanism to build logical views with secure access to sensitive data, so everyone sees the same data rather than different versions of it. This reduces confusion by providing a clear set of Master Data Models and trusted data sets sanctioned as accurate for their needs. It auto-generates APIs for the models on the fly, so users can access the data through SOAP/REST or OData and build dashboards and run analytics on it. It also provides a clean, queryable SQL interface, so users are not learning new languages or writing many lines of code. It finally brings the sense of calm and certainty needed for true Agile BI development.

3. It’s swift … did we mention you access & federate your data in real-time?

EE BigDataNOW™ can be a valuable component on the ingestion side of the Big Data store too; it will federate, apply transformations, and organize the data to be loaded into the Data Lake using its unique Agile ETL capabilities, making your overall Big Data experience responsive from end to end. EE BigDataNOW™ has a fully UI-driven data workflow engine that loads data into Hadoop, whether its source is streaming data or stored data. It can federate real-time data with historical data on demand for better analysis.

4. Take the load off your developers

One of the major complexities that Big Data developers run into is building and executing Map-Reduce jobs as part of the data workflow. EE BigDataNOW™ can create and execute Map-Reduce jobs through its Agile ETL Data Workflow Nodes, running them and storing the results in a meaningful way that is easy for end users to access.
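For readers less familiar with what a Map-Reduce job actually involves, here is a bare-bones, Hadoop Streaming style word-count pair in Python, the kind of boilerplate a developer would otherwise write and wire up by hand. It is a generic illustration, not the code EE BigDataNOW™ generates.

```python
# Generic Hadoop Streaming style word count (mapper + reducer) in Python,
# the kind of boilerplate a Big Data developer would otherwise hand-write.
# This is an illustration only, not the code EE BigDataNOW generates.
import sys
from itertools import groupby

def mapper(lines):
    """Emit (word, 1) pairs, one per line, tab-separated."""
    for line in lines:
        for word in line.strip().lower().split():
            yield f"{word}\t1"

def reducer(lines):
    """Sum the counts for each word (input must be sorted by key)."""
    pairs = (line.rstrip("\n").split("\t") for line in lines)
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        yield f"{word}\t{sum(int(count) for _, count in group)}"

if __name__ == "__main__":
    # Run as `python wordcount.py map` or `python wordcount.py reduce`,
    # reading from stdin and writing to stdout, Hadoop Streaming style.
    stage = sys.argv[1] if len(sys.argv) > 1 else "map"
    stream = mapper(sys.stdin) if stage == "map" else reducer(sys.stdin)
    for out_line in stream:
        print(out_line)
```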



5. EE BigDataNOW™ talks to your other non-Hadoop Big Data sources

EE BigDataNOW™ also supports non-Hadoop sources such as Google BigQuery, Amazon Redshift, SAP HANA, and others; it can connect to these nontraditional Big Data sources and populate or federate data from them for all your Big Data needs.

To read more about Big Data, don’t forget to check out Stone Bond’s Big Data page. What are you waiting for? Break through your Big Data barriers today!


This is a guest blog post written by,

Monday, February 13, 2017

Did You See the Gartner Market Guide for Data Virtualization?

Gartner’s Market Guide for Data Virtualization (DV), published a few months ago, was really a “coming of age” milestone for that relatively unknown data integration pattern. With the data explosion on all fronts, the traditional tools and patterns such as ETL, EAI, ESB, and the Data Warehouse are mostly obsolete. To download and read the full Gartner Market Guide for Data Virtualization, click here.

Unfortunately, it looks like we’re entering another déjà vu scene, where the next "best way" to handle integration problems is hyped as one more stand-alone category of integration. Remember how we had to decide, before initiating a new project, whether the problem required ETL, ESB, or SOA? Bear in mind that it was never that cut-and-dried; every project needed a little of each, so you just picked one. Then you realized you had to have three different tools and vendors, not to mention plenty of custom coding and timelines counted in years, to get to the desired end, if at all. In my experience, no architecture can rely solely on a single integration pattern. Most DV tools focus exclusively on Data Virtualization. There may be a vendor that offers tools in each category, but those are typically separate tools that don’t share objects and functionality.

Stone Bond Technologies has always considered integration as a continuum. There is a huge body of capabilities that are necessary for every single pattern.  You always have to access all manner of disparate data sources; you always have to align them to make sense of them together; you always need to apply business rules and validations. You need to make sure the formats and units of measure are aligned … and on and on. Then you need data workflow, notifications, and events. You need security at every turn. That’s where Enterprise Enabler started – as a technological foundation that handles these requirements without staging the data anywhere, and that virtually eliminates programming. With that, delivering as DV, ETL, EAI, ESB, or SOAP is not so difficult. Most integration software, on the other hand, starts with a particular pattern and ends up adding tools or custom coding to figure out "The Hard Part."

It turns out that Data Virtualization demands that multiple disparate data sources be logically aligned in such a way that together they comprise a virtual data model that can be queried directly back to the sources.

I like the diagram that Gartner included in the Guide (to view Gartner's diagram and read the full Market Guide, click here). Below is a similar image depicting Stone Bond’s Enterprise Enabler® (EE) integration platform in particular. Note that the single agile Integrated Development Environment (IDE) covers all integration patterns and is 100% metadata driven. The only time data is stored is when it is cached temporarily for performance or for time-slice persistence.


Enterprise Enabler®


Refer to the above diagram for a few additional things you should know about Enterprise Enabler:
  • As you can see, all arrows depicting data flow are bi-directional in this diagram. EE federates across any disparate sources, and can also write back to those sources with end-user awareness and security.
  • IoT is also included in the source list. Anything that emits a discernible signal can be a source or destination.
  • AppComms™ are Stone Bond’s proprietary connectivity layer. An AppComm knows how to communicate intimately with a particular class of sources (e.g., SAP, Salesforce, DB2, XML, and hundreds of others), including leveraging application-specific features. It also knows how to take instructions from the Transformation Engine as it orchestrates the federation of data live from the sources.
  • The Transformation Engine manages the resolution of relationships across sources as well as the validation and business rules.
  • EE auto-generates and hosts the DV services.
  • Data Virtualizations and their associated logic can be re-used as Agile ETL with a couple of clicks. Agile ETL leverages the federation capabilities of DV without staging any data.
  • EE includes a full data workflow engine for use with Agile ETL or seamlessly inserted as part of the overall DV requirements.
  • EE has a Self-Serve Portal that allows BI users to find and query appropriate virtual data models.
  • EE monitors endpoints for schema changes at touch-points where data is used in any of the DV services or Agile ETL, and you’ll be immediately notified with detailed impact analysis (patented Integration Integrity Manager); the sketch below illustrates the general idea.
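To give a concrete sense of what endpoint schema monitoring involves, here is a small, generic sketch of detecting a schema change by comparing snapshots of a source's column list. It illustrates the idea only; it is not how the patented Integration Integrity Manager is implemented, and the table snapshots are hypothetical.

```python
# Generic illustration of endpoint schema-change detection: compare a saved
# snapshot of a source's columns with what the source reports now.
# This shows the idea only, not how the Integration Integrity Manager works.

def diff_schema(previous, current):
    """Return added, removed, and retyped columns between two snapshots."""
    added = sorted(set(current) - set(previous))
    removed = sorted(set(previous) - set(current))
    retyped = sorted(col for col in set(previous) & set(current)
                     if previous[col] != current[col])
    return added, removed, retyped

# Hypothetical snapshots of a source table's columns and types.
yesterday = {"customer_id": "INT", "name": "VARCHAR(50)", "region": "VARCHAR(20)"}
today     = {"customer_id": "INT", "name": "VARCHAR(100)", "country": "VARCHAR(20)"}

added, removed, retyped = diff_schema(yesterday, today)
if added or removed or retyped:
    print(f"Schema change detected: added {added}, removed {removed}, retyped {retyped}")
    # A real monitor would trigger a notification and an analysis of every
    # integration that touches the affected columns.
```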

Thursday, January 5, 2017

Even Beyond the Logical Data Warehouse


What is a Logical Data Warehouse? There is still much uncertainty and ambiguity around this subject. Or, perhaps I should say, there should be.

Instead of trying to lock down a definition, let’s take advantage of the opportunity to think about what it CAN be. It is the role, if not the obligation, of Experts to describe the essence of any new discipline. However, in the case of LDW, a premature assessment is likely to sell short the potential reach and extensibility of the contribution of Data Virtualization (DV) and Federation to the entire universe of data and application integration and management.

Certainly, the players with the biggest marketing budgets are likely to spread a limited, but compelling, definition and set of case studies, which could become the de facto discipline of Logical Data Warehouse. While these definitions may represent a significant step forward for data management, they would be limiting the full potential of what these new models could bring to the marketplace.

I fear, however, a repeat of the biggest historical impediment to realizing a universal data management framework. Each new “wave” of innovation has been blindly adopted and touted as the single best approach ever. ETL only went so far, then EAI came along as a separate technology (to save the world), then the Data Warehouse (to store the world), then SOA (to serve the world), and now Data Virtualization and Logical Data Warehouses (to access data faster and with more agility). In the case of Data Virtualization and the Logical Data Warehouse, we owe it to our fellow technology implementers to leverage every aspect possible, to advance the cause of the ultimate data integration and management platform.


If we look at all of the data integration patterns, don’t we see that there is a tremendous amount of functionality that overlaps all of these patterns? Why do we even have these distinctions?

What if we seize this DV/LDW revolution as the opportunity to reinvent how we think about data integration and management altogether? Consider the possibility of a platform where:

LDW is a collection of managed virtual models:
  • These can be queried as needed by authorized users.
  • The same logic of each virtual model is reusable for physical data movement
  • Virtual data models incorporate data validation and business logic
  • Staging of data is eliminated except caching for performance
  • Virtual data models federate data live for ETL
  • Virtual data models and accompanying logic can be designated, or “sanctioned” as Master Data definitions
  • Master Data Management eliminates the need for maintaining copies of the data
  • Golden Records are auto-updated, and in many cases, become unnecessary
  • With the “write-back” capabilities, data can be updated or corrected in either end user applications/dashboards or by executing embedded logic
  • Write-back capabilities mean that anytime a source is updated, all of the relevant sources can be synchronized immediately also. (Imagine that eventually, the sync  process as we know it today simply disappears.)
  • Complex data workflows allow the use of virtual models and in-process logic to be incorporated into the LDW definitions.
  • These logic workflows handle preventive and predictive analytics as well as application and process logic
  • Data Lineage is easily traced based on traversing the metadata that describes each virtual model. 
  • Every possible source: applications, databases, instruments, IoT, Big Data, live streaming data, all play seamlessly together.
  • Oh, and LDW is pretty cool for preparing data for BI/BA also!


We at Stone Bond Technologies have been leaders in Data Federation and Virtualization for more than ten years. We believe it is our responsibility to remove all obstacles and allow data to flow freely, but securely, wherever and whenever it is needed. Our vision has always been a single, intimately connected, organic platform with pervasive knowledge of all of the data flowing throughout the organization, whether cloud, on-premise, or cross-business; applications, databases, data lakes... any information anywhere.

Being too quick, individually or collectively, to take a stand on the definition of Logical Data Warehouse is likely to abort the thought process that is still ripe with the opportunity to take it way beyond the benefits that are commonly extolled today.