Using Apache Spark and ThingSpan To Relieve Network Congestion

Telecommunications voice and data networks are natural examples of graph structures: equipment of many types, often from hundreds of manufacturers, must work in harmony to reliably and efficiently transport information for millions of users at a time. Objectivity products have been used at the heart of fiber optic switches, cellular wireless and low-Earth-orbit satellite systems, long-term alarm correlation systems, and network planning applications.

Dealing with problems (alarms) or overloads has traditionally involved taking individual pieces of equipment offline and re-routing the traffic through other nodes. In this example, we’ll look at an apparently simple situation and show how the combination of Spark SQL and ThingSpan’s advanced graph navigation can be used to quickly diagnose and resolve an equipment overload. We start by loading Location, Equipment, and Link objects (with loading percentages) and their connections into ThingSpan, producing the graph shown in Figure 1.
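To make the re-routing idea concrete, here is a minimal sketch in plain Python (not the ThingSpan API) of the Location/Equipment/Link model described above. The node names, link loadings, and the 80% overload threshold are all invented for illustration; the traversal simply searches for a path that avoids any overloaded link, which is the kind of navigational query a graph engine performs at scale.

```python
from collections import defaultdict, deque

# Hypothetical equipment nodes connected by links with loading percentages.
# All names and values are invented for this sketch.
links = [
    ("SFO-SW1", "DEN-SW1", 95),  # overloaded link
    ("SFO-SW1", "LAX-SW1", 40),
    ("LAX-SW1", "DEN-SW1", 35),
    ("DEN-SW1", "NYC-SW1", 50),
]

OVERLOAD_THRESHOLD = 80  # percent; an assumed operational limit

# Build an undirected adjacency list from the link records.
graph = defaultdict(list)
for a, b, load in links:
    graph[a].append((b, load))
    graph[b].append((a, load))

def find_route(src, dst, max_load):
    """Breadth-first search for a path that avoids links loaded above max_load."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt, load in graph[path[-1]]:
            if load <= max_load and nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no acceptable route exists

# Re-route traffic around the overloaded SFO-DEN link.
print(find_route("SFO-SW1", "DEN-SW1", OVERLOAD_THRESHOLD))
```

In a real deployment the link records would come from Spark SQL queries over the ingested network data, and the traversal would run inside ThingSpan rather than in application code.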

Making Offers They Can’t Refuse: Using Spark and Objectivity’s ThingSpan to Increase Retail Product Sales

Retailers have deployed advanced business intelligence tools for decades in order to determine what to sell and to whom, when, where and at what price. Much of the transactional data was too voluminous for smaller retailers to keep for long, putting them at a disadvantage against the industry giants and more agile web-based retailers. The falling prices of commodity storage and processors are making it possible to keep data longer. This data can also be combined with external sources, such as information gathered from social networks, then analyzed by more powerful machine learning technologies and other tools.

In this blog, we will look at how any retailer—traditional or online—might identify slow-moving products and use their own sales transaction data in conjunction with social media information about bloggers who have mentioned or bought a product in order to identify and target potential buyers.
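The two-step analysis described above can be sketched in a few lines of plain Python. The sales records, social-media mentions, and the 50-units-per-quarter cutoff are invented sample data; the point is the shape of the join between a retailer's own transaction data and externally gathered mentions.

```python
# Hypothetical sales records: (product, units sold last quarter).
sales = [
    ("ultra-hd-tv", 1200),
    ("smart-kettle", 35),
    ("bluetooth-speaker", 640),
    ("fitness-band", 28),
]

# Hypothetical (blogger, product) pairs mined from social feeds.
mentions = [
    ("alice", "smart-kettle"),
    ("bob", "fitness-band"),
    ("carol", "smart-kettle"),
    ("bob", "ultra-hd-tv"),
]

SLOW_THRESHOLD = 50  # units per quarter; an assumed cutoff

# Step 1: identify slow-moving products from the retailer's own data.
slow_movers = {product for product, units in sales if units < SLOW_THRESHOLD}

# Step 2: join against social mentions to find bloggers worth targeting
# with an offer for each slow-moving product.
targets = {
    product: sorted(blogger for blogger, p in mentions if p == product)
    for product in slow_movers
}
```

At production scale both steps would be distributed joins over Spark DataFrames, with the blogger-product-purchase relationships held as a graph, but the logic is the same.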

Online Dating: Relationship Analytics in the Real World

At the start of the new year, my colleague Nick Quinn, our principal engineer here at Objectivity, examined signaling in recommendation applications as a use case for graph databases in his blog post, “Peacocking.” He used the peacock’s plumage as an analogy for how we use online dating sites to express mutual romantic interest.

I felt that since it’s the week before Valentine’s Day, it would be timely to revisit the topic of online dating as a treasure trove for big data, and specifically, relationship and graph analytics. This time, however, I’m approaching it from a woman’s perspective—because let’s face it, no matter how flashy male peacocks look, it’s their female counterparts who possess the most decision-making power in selecting a mate.

It’s no surprise that the online dating ecosystem is generating massive volumes of data. According to datascience@berkeley, five of the largest sites (eHarmony, Match, Zoosk, Chemistry, and OkCupid) have between 15 and 30 million members each. Online dating apps are even more impressive, with Tinder leading the pack at an estimated 50 million users making one billion swipes and 12 million matches per day!

Is Your Database Schema Too Complex?

Introduction

In most of today’s commercial software applications that require data persistence, a significant portion of development time is spent designing the database and integrating it with the application. This task typically involves:

Designing a large, normalized database schema in a Relational Database Management System (RDBMS) using a tool such as an Entity Relationship Diagram (ERD).
Implementing the database with associated Views, Stored Procedures, Triggers, Constraints, etc.
Implementing a mapping layer between the database tables and the application class model (either manual code or an ORM such as Hibernate or Entity Framework).
Performing iterations over the previous items as changes are made to the schema.
Maintaining the database by monitoring and modifying disk space, index usage, transaction log size, etc.
These tasks are complex and demand a significant investment of development resources; they typically require senior-level application and database developers.
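The mapping layer in step 3 is a good example of where that effort goes. The sketch below shows the kind of hand-written translation code it implies, in plain Python; the `Customer` class and its columns are invented for illustration, and a real ORM generates (and must keep in sync) hundreds of such mappings.

```python
# Hypothetical application class that must be kept in sync with a
# customers(id, name, email) table in the RDBMS.
class Customer:
    def __init__(self, customer_id, name, email):
        self.customer_id = customer_id
        self.name = name
        self.email = email

def row_to_customer(row):
    """Map an (id, name, email) tuple fetched from the database to an object."""
    return Customer(customer_id=row[0], name=row[1], email=row[2])

def customer_to_params(customer):
    """Map an object back to the parameter tuple for an INSERT/UPDATE."""
    return (customer.customer_id, customer.name, customer.email)

# Round-trip a row through the application object model.
c = row_to_customer((42, "Ada", "ada@example.com"))
```

Every schema change ripples through both mapping functions (or their ORM-generated equivalents), which is exactly the iteration cost described in step 4.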

How Graph Analytics Can Help the Finance Sector Innovate

Over the last decade, financial institutions have increasingly relied on data and data analytics to gain a competitive edge, as well as to minimize their exposure to risk and compliance issues. For instance, institutional investors apply sophisticated data analytics algorithms to historical and streaming data, looking for patterns and associations that determine which stocks to buy and sell. According to the Wall Street Journal, algorithmic trading now accounts for roughly one-third of foreign currency exchanges. When it comes to their own business operations, financial institutions use data tools for tracking compliance, assessing risk, and detecting potential fraud or security breaches.

For these reasons, many institutions are now utilizing data analytics platforms to help them make informed decisions based on contextual analysis of historical and real-time data. An enterprise-class graph analytics platform is a vital addition to this toolset, because current methods generally rely on statistical pattern matching. A graph approach can reveal cause-and-effect chains that only become apparent when transactions and stock prices are analyzed together.
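Following such a cause-and-effect chain amounts to a directed traversal over a transaction graph. The sketch below, in plain Python rather than any particular graph platform, enumerates every downstream chain from a starting account; the accounts and transfers are invented sample data.

```python
from collections import defaultdict

# Hypothetical directed edges: each points from an account/event to the
# one it influenced. All identifiers are invented for this sketch.
edges = [
    ("acct-A", "acct-B"),    # A wires funds to B
    ("acct-B", "acct-C"),    # B immediately forwards the funds to C
    ("acct-C", "broker-X"),  # C buys shares through broker X
]

graph = defaultdict(list)
for src, dst in edges:
    graph[src].append(dst)

def chains_from(node, path=None):
    """Depth-first enumeration of every downstream chain starting at node."""
    path = (path or []) + [node]
    if not graph[node]:  # no outgoing edges: the chain ends here
        yield path
        return
    for nxt in graph[node]:
        yield from chains_from(nxt, path)

print(list(chains_from("acct-A")))
```

A pattern-matching approach would score each transfer in isolation; the traversal surfaces the full A-to-broker chain as a single object an analyst can inspect.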