In Part 1 of this blog series, I looked at the fundamental principles behind all database technologies and the evolution of DBMSs as system requirements changed. In this concluding article, I’ll address the enormous changes in requirements that Objectivity is seeing and suggest some ways of attacking the problems that they are introducing.
The Rise of Big Data
The dramatic and still-growing use of the WWW has made it necessary for companies to gather and analyze a much wider variety of data types than ever before. They also need to store and process more data in order to garner business intelligence and improve operations. This introduces an additional data generator: the data center and communications infrastructure itself, which can produce voluminous logs from multiple sources.
Many of these “Big Data” systems operate on huge volumes of relatively simple data, much of which requires conversion, filtering or consolidation before it can be used for analytical purposes. In the early days, much of this new data was stored in structured files. Hadoop, with its MapReduce parallel processing component and the scalable, robust Hadoop Distributed File System (HDFS), rapidly gained momentum, making it the most widely used framework for Big Data systems.
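To make that concrete, here is a minimal sketch of the kind of filtering-and-consolidation job that MapReduce handles well: it reads raw log lines from HDFS, discards malformed records (the filtering step), and consolidates the rest into per-source counts. The log layout (tab-separated lines whose first field names the source) and the class names are assumptions made purely for illustration.

    // Minimal Hadoop MapReduce job: filter malformed log lines and
    // consolidate the rest into per-source record counts.
    // The tab-separated log format is an assumed example layout.
    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class LogConsolidator {
      // Map phase: parse each log line, skip malformed records (filtering),
      // and emit (source, 1) pairs.
      public static class SourceMapper
          extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text source = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
          String[] fields = value.toString().split("\t");
          if (fields.length < 2) {
            return; // filter out malformed lines
          }
          source.set(fields[0]); // assumed: first field names the log source
          context.write(source, ONE);
        }
      }

      // Reduce phase: consolidate the counts emitted for each source.
      public static class SumReducer
          extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values,
            Context context) throws IOException, InterruptedException {
          int total = 0;
          for (IntWritable v : values) {
            total += v.get();
          }
          context.write(key, new IntWritable(total));
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "log consolidation");
        job.setJarByClass(LogConsolidator.class);
        job.setMapperClass(SourceMapper.class);
        job.setCombinerClass(SumReducer.class); // partial sums on mapper nodes
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input dir
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output dir
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

Reusing the reducer as a combiner is a common design choice here: partial sums are computed on each mapper's node first, which cuts down the volume of intermediate data shuffled across the cluster.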
Back in 2000, it seemed that database technology had matured to the point where changes would be incremental, at least in the enterprise. Today, there is such a wide choice of database management systems (DBMSs) and other storage technologies that it is hard to determine the best fit for a particular problem; some wonder whether we have a surfeit of databases.
Until the mid-1970s, most systems were built using functional decomposition techniques. Object-oriented systems were introduced with a flurry of promises in the early ‘80s, many of which, for once, actually proved to be true.
More recently, people have been talking about object-based systems, object stores and object-based file systems. In this article, I’d like to clarify the characteristics of each type of technology. Truth in advertising—there’s a lot of overlap, so I’ll try to smooth out the bumps in the ride.
These days, most large organizations have a plan for big data integration (see Figure 1); that is, they aim to collect and analyze their big data assets from many sources. For instance, e-commerce businesses have the tools to sort through CRM databases for order logs, customer correspondence, and delivery information. They can pair that data with historical weather records to assess how temperature affects when customers are most likely to order certain products, or how changes in weather have historically affected delivery schedules.
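To illustrate that order/weather pairing, here is a small standalone sketch; the record types, field names and five-degree temperature banding are hypothetical, not taken from any real CRM or weather schema. It joins order logs to same-day temperature readings and tallies orders per temperature band.

    import java.time.LocalDate;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.TreeMap;

    public class WeatherOrderJoin {
      // Hypothetical records; real CRM and weather schemas would be richer.
      record Order(LocalDate date, String product) {}
      record WeatherReading(LocalDate date, double tempCelsius) {}

      // Join orders to same-day temperatures, then count orders per
      // five-degree temperature band.
      static Map<Integer, Long> ordersByTempBand(List<Order> orders,
                                                 List<WeatherReading> readings) {
        Map<LocalDate, Double> tempByDate = new HashMap<>();
        for (WeatherReading r : readings) {
          tempByDate.put(r.date(), r.tempCelsius());
        }
        Map<Integer, Long> counts = new TreeMap<>();
        for (Order o : orders) {
          Double temp = tempByDate.get(o.date());
          if (temp == null) {
            continue; // no weather record for that day; skip rather than guess
          }
          int band = (int) Math.floor(temp / 5) * 5; // e.g. 25 covers [25, 30)
          counts.merge(band, 1L, Long::sum);
        }
        return counts;
      }

      public static void main(String[] args) {
        List<Order> orders = List.of(
            new Order(LocalDate.of(2014, 6, 1), "iced-tea"),
            new Order(LocalDate.of(2014, 6, 1), "iced-tea"),
            new Order(LocalDate.of(2014, 1, 10), "soup"));
        List<WeatherReading> readings = List.of(
            new WeatherReading(LocalDate.of(2014, 6, 1), 28.5),
            new WeatherReading(LocalDate.of(2014, 1, 10), 2.0));
        System.out.println(ordersByTempBand(orders, readings));
      }
    }

For the sample data, main prints {0=1, 25=2}: one order on a near-freezing day and two on a hot one, which is exactly the kind of temperature signal the analysis described above looks for.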
In my last blog, I covered the last ten years of working with a particular Objectivity customer. In this blog, I want to go back even further, more than 25 years, to the start of Objectivity.
Let me start by presenting a brief timeline:
The 1980s: Object technologies became popular, although objects had already been used in some significant projects before then. These technologies included languages, modeling methods, tools and databases.
1989: The Object Management Group (OMG) was formed to coordinate and standardize efforts among multiple organizations in different verticals, all trying to leverage the power of objects.
1997: The Unified Modeling Language (UML) was adopted by the OMG, unifying modeling methods from luminaries such as Grady Booch, Ivar Jacobson and James Rumbaugh.
2005: A task force was set up within the OMG to bring together multiple tools in the Business Process space.
2011: The Cloud Standards Customer Council was created.
So suffice it to say that objects have been around for a long time and are still here.