Graph databases are becoming more popular as a way of storing and analysing large connected datasets.
Neo4j is a popular graph DBMS because of its powerful query language, Cypher, its growing community and its excellent supporting tools.
A new paradigm comes with a new set of challenges. In this case we focus on the challenge of creating a data pipeline to load data into Neo4j, thinking about how we might design our schema and how we might query it.
Today, we’ll take you through an ETL of The Northwind Traders Sample Database and some of the things we can do with Cypher that make it special and worth a look.
We’ll be using Kettle to orchestrate our ETL, making use of the Neo4j Output Plugin to interact with our Neo4j server. We’ll also load the same dataset into PostgreSQL, to see how the two technologies compare.
Environment Setup
The installation of Kettle, the Neo4j Output Plugin and the Neo4j server is outside the scope of this post, but it is important to note that the Neo4j connection is not stored in the transformations, as database connections usually are; it is stored in the metastore, which is found in your home folder. For that reason, you’ll have to create a new connection before you can load data into Neo4j.
Once you can test your connection successfully you are good to go; just make sure the connection is named local-hardcoded so that you don’t need to change any of the transformations provided at the end of the post.
To set up your PostgreSQL connection you’ll need to edit the conf/kettle.properties file with the correct details for your connection. This will change the connection for each of the transformations and the main job, as we have parameterised the connection for your convenience. If you do not have a password you can remove everything after DATABASE_PASS from the file and it will work fine.
With that out of the way, let’s get started with Northwind.
Northwind Traders Database
The Northwind Traders database is a sample database that comes with Microsoft Access. The database contains sales data for a fictitious company called Northwind Traders.
As you can see, it represents an OLTP schema centred on orders and their line items (part orders).
We will be using a version of Northwind compiled by Neo4j as csv files, available on GitHub, which is more or less the same as the original and the image above.
Loading Data
The ETL we have created is relatively simple; there is a single job, RUN.kjb, that runs, in sequence, the transformations that load nodes and relationships into Neo4j from the csv files.
Similarly, there is a single job for loading into Postgres; one important difference is that for Postgres we must create our dimension and fact schemas beforehand using SQL create statements. In Neo4j this is not necessary because graph databases are schema-less.
We’ll be using the Neo4j Output step to create nodes and relationships and the Neo4j Cypher step for lookups.
The sequence of steps below shows a common pattern to ensure that poorly delimited and enclosed csv files produce the expected columns and rows:
Address Nodes
Addresses occur often within Northwind: suppliers, customers, shipment receivers and employees all have addresses. For that reason, we have tried to create a generic way to link an address to an actor that also allows for performant geographic queries.
Rather than having addresses and their components (country, region, city, postcode, street and building name) stored as properties on each customer, supplier, etc., each address is stored as its own node. Each address node is linked to a city node, each city to a region, and so on; this creates a hierarchy, using composite keys to make sure the same region isn’t linked to two countries. The result is easier indexing, which speeds up queries because searches on nodes are faster than searches on properties.
After we have collected all the addresses from all the files, we remove duplicates and replace any null regions (some countries have only one region, which is null) with the country name. We create the region nodes and link them to their appropriate country nodes, then do the same for cities -> regions and addresses -> cities.
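As a rough sketch of what the output steps build, the hierarchy for one UK address might be merged in Cypher like this (the relationship names, composite-key format and property names are assumptions, not necessarily the exact ones used in the transformations; the Neo4j Output step produces the equivalent for us):

// Merge one address into the Country -> Region -> City -> Address hierarchy
// (a null UK region has been replaced with the country name, as described above)
MERGE (co:Country {country: 'UK'})
MERGE (re:Region  {regionKey: 'UK|UK', region: 'UK'})
MERGE (ci:City    {cityKey: 'UK|UK|London', city: 'London'})
MERGE (ad:Address {addressKey: 'UK|UK|London|35 King George', address: '35 King George'})
MERGE (re)-[:REGION_OF]->(co)
MERGE (ci)-[:CITY_OF]->(re)
MERGE (ad)-[:ADDRESS_OF]->(ci)
// an index on each key property (e.g. CREATE INDEX ON :City(cityKey)) keeps these merges fast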
We now have a more efficient way to query locations and slice our data.
Relational Addresses
In Postgres we need to do little to get addresses into our dimensions. If it weren't for other complications, such as poorly delimited and enclosed csv files and replacing null regions, it would be as simple as table input -> table output. This is because each address is stored in the same dimension as the other information for that actor, i.e. customer addresses are stored in the customer dimension.
Date Nodes and Date Dimension
Creating date nodes is identical to creating any date dimension you have built in the past, except that you are creating nodes instead of rows. The result is a node with many properties that allow you to query in a variety of ways without having to do on-the-fly date operations:
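For example, a single date node might be created with a statement like the following (the :Date label and the property names other than calendarYear are assumptions):

// Merge one date node with pre-computed parts so queries never need on-the-fly date maths
MERGE (d:Date {date: '1997-07-04'})
SET d.calendarYear  = 1997,
    d.monthOfYear   = 7,
    d.dayOfMonth    = 4,
    d.dayOfWeekName = 'Friday',
    d.quarterOfYear = 3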
In PostgreSQL, we populate our date dimension with the same fields as in Neo4j but each date is a row not a node.
Order Nodes and Relationships
In our graph, order nodes are the most connected nodes; you could compare this to a fact table that contains no additive fields (also known as a factless fact table).
We start by joining order-details to orders so that we can create all the nodes and relationships we want in one go.
Next, we do some lookups on previously created nodes so that we can link the order nodes to date nodes and shipment receivers.
We calculate totals from unitPrice, discount and quantity; this reduces query time because the values are pre-calculated. We then create our order nodes and their relationships to products (remembering that the CONTAINS_PRODUCT relationship uses part orders coming from order-details.csv).
All the additive fields apart from freight are stored on our CONTAINS_PRODUCT relationship between Order and Product nodes. This is the most logical place to store these properties, unless we wanted to create a Part Order node, which would only increase traversals and reduce query performance.
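For illustration, here is a minimal Cypher sketch of that relationship, assuming netAmount is pre-calculated as unitPrice * quantity * (1 - discount); the values and property names are illustrative and may differ from the exact ones used in the transformations:

// Link an order to a product, storing the additive part-order fields on the relationship
// (illustrative values in the style of order-details.csv; property names are assumptions)
MATCH (o:Order {orderID: 10248})
MATCH (p:Product {productID: 11})
MERGE (o)-[r:CONTAINS_PRODUCT]->(p)
SET r.quantity  = 12,
    r.unitPrice = 14.0,
    r.discount  = 0.0,
    r.netAmount = 14.0 * 12 * (1 - 0.0)   // pre-calculated in the ETL rather than at query time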
Finally, we create all the relationships between our order nodes and the other nodes we created before using a sequence of Neo4j Output steps.
It’s worth noting that we store our freight costs, another additive field, on the SHIPPED_TO relationship between the Order and the ShipmentReceiver. This allows us to maintain additivity without introducing complications surrounding the freight field, as you will see later on.
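A minimal sketch of that relationship (the shipName property and the values shown are illustrative assumptions):

// Freight is additive per order, so it is stored once on the SHIPPED_TO relationship
MATCH (o:Order {orderID: 10248})
MATCH (s:ShipmentReceiver {shipName: 'Vins et alcools Chevalier'})
MERGE (o)-[r:SHIPPED_TO]->(s)
SET r.freight = 32.38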
The schema we have created looks like this:
Part Fact Orders
In PostgreSQL, we still join orders to order details to get part orders; however, we must isolate a single freight value for each order so that freight is additive. Some of the options here are:
- Split the freight evenly between the products contained in an order.
- This is misleading, as packaging and shipping costs are usually dependent on the size and/or weight of the package, so we want to avoid splitting evenly.
- Split the freight proportionately between the products contained in an order.
- This would be the ideal scenario, but unfortunately it is not possible because we do not have weight or size information for the products.
- Store freight with only a single part order.
- This is the compromise we chose, as it maintains the additivity of the freight field and is less misleading.
As you can see, all of these options mean we cannot analyse freight costs per product.
We accomplish this using a changing sequence and a JavaScript calculation.
Relational data warehouses depend heavily on surrogate keys to join facts to dimensions. For each dimension we have created a sequence for this purpose.
When we create the fact table we lookup these sequences so we can add them to the fact table. This is a distinct difference between Neo4j and relational databases as Neo4j manages its own keys to identify which relationships are connected to a given node.
Finally, before loading to the table we create a sequence to be the primary key for the fact table.
Querying
Let’s look at how we can query our newly created databases.
Value of sales for each year from customers in the USA
In Cypher:
MATCH (p:Product)<-[r]-(o:Order)--(:Customer)--()--()--()--(c:Country)
WHERE toLower(c.country) = "usa"
WITH o AS order, c.country AS country, r AS rel
MATCH (order)-[:ORDERED_ON_DATE]->(d)
RETURN
  country
, d.calendarYear AS year
, count(DISTINCT order) AS number_of_orders
, apoc.number.format(sum(rel.netAmount), '$#,##0.00', 'en') AS value_in_dollars
ORDER BY year ASC
In PostgreSQL:
SELECT
customers.country AS country
, dates.calendar_year AS year
, count(DISTINCT orders.order_nk) AS number_of_orders
, cast(sum(orders.net_amount) AS money) AS value_in_dollars
FROM
public.fact_part_orders AS orders
, public.dim_customers AS customers
, public.dim_date AS dates
WHERE orders.customer_id = customers.customer_id
AND orders.order_date_id = dates.date_id
AND lower(customers.country) = 'usa'
GROUP BY country, year
ORDER BY year ASC;
We can refactor our schema to include direct relationships between orders and cities, orders and regions, and orders and countries, giving us a quicker way to retrieve the same results. After doing this, the matching pattern changes from (p:Product)<-[r]-(o:Order)--(:Customer)--()--()--()--(c:Country) to (p:Product)<-[r]-(o:Order)--(c:Country), and the performance boost would be significant as there are fewer hops to traverse and fewer searches to complete.
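One way to materialise such a shortcut, reusing the existing path (the ORDERED_FROM_COUNTRY relationship name is an assumption), would be:

// Create a direct Order -> Country shortcut from the existing customer/address path
MATCH (o:Order)--(:Customer)--()--()--()--(c:Country)
MERGE (o)-[:ORDERED_FROM_COUNTRY]->(c)

The undirected, untyped pattern (o:Order)--(c:Country) in the refactored query would then match this new relationship directly.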
Products most likely to be bought together
In Cypher:
MATCH
p=(original:Product)--(:Order)--(related:Product)
WHERE
toLower(original.productName) = "teatime chocolate biscuits"
RETURN
DISTINCT original.productName AS product
, related.productName AS most_likely_to_be_bought_with
, count(p) AS popularity
ORDER BY
popularity DESC
, most_likely_to_be_bought_with DESC
LIMIT 5
In PostgreSQL:
SELECT
original.product_name AS product
, related.product_name AS most_likely_to_be_bought_with
, count(r_orders.order_nk) AS popularity
FROM
public.dim_products AS original
, public.dim_products AS related
, public.fact_orders AS o_orders
, public.fact_orders AS r_orders
WHERE
original.product_id = o_orders.product_id
AND o_orders.order_nk = r_orders.order_nk
AND r_orders.product_id = related.product_id
AND lower(original.product_name) = 'teatime chocolate biscuits'
AND lower(related.product_name) <> 'teatime chocolate biscuits'
GROUP BY
original.product_name
, related.product_name
ORDER BY
popularity DESC
, most_likely_to_be_bought_with DESC
LIMIT 5;
Isn’t that a mouthful?
This type of query has become common in online shopping; the shop will recommend products based on what you are looking at or what you have in your cart.
As you can see, the products-most-likely-to-be-bought-together query is much more compact in Cypher. Importantly, this makes querying far less error-prone; accidentally running a cross join because you forgot a join condition can go unnoticed and be very costly.
In SQL, fewer joins generally lead to better performance, especially when your fact table has several billion rows (or you’re joining the fact table to itself, as we are here). Neo4j does not have the concept of joins because there are no tables. Graph queries are easier to write, read and modify, which is why recommendation queries work so well in graph databases.
Improvement Strategies
This schema is a good start and allows us to think about how to use Neo4j to analyse our data. While loading the data into Neo4j, we came up with new ideas that have not yet been implemented.
Firstly, aggregation nodes could be a useful way to query old data quickly by storing pre-calculated values for later; these nodes play the same role as aggregation tables do in a relational database. In a schema-less model, we can add new nodes easily without building new tables, making aggregation a valuable strategy.
The simplest version of this is to calculate the total value of an order and store it on that order node; this should improve query time.
We can create these nodes on several aggregation levels, e.g. Yearly Sales, Monthly Sales, Daily Sales, etc.
There is also a possibility of creating geographic aggregation nodes, e.g. USA Sales, London Sales etc.
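As a sketch, the order totals and a yearly aggregation node could be pre-calculated like this (the :Date and :YearlySales labels and the totalAmount/totalSales property names are assumptions):

// 1. Store each order's total directly on the Order node
MATCH (o:Order)-[r:CONTAINS_PRODUCT]->(:Product)
WITH o, sum(r.netAmount) AS total
SET o.totalAmount = total;

// 2. Roll those totals up into one aggregation node per calendar year
MATCH (o:Order)-[:ORDERED_ON_DATE]->(d:Date)
WITH d.calendarYear AS year, sum(o.totalAmount) AS yearlySales
MERGE (ys:YearlySales {calendarYear: year})
SET ys.totalSales = yearlySales;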
Separating date nodes into year, month and day nodes is another strategy; this should allow more performant querying for specific years and months, as a search through all the properties of every date node is no longer necessary.
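A hedged sketch of that split, deriving Year and Month nodes from the existing date nodes (the labels, keys and relationship names are assumptions):

// Promote year and month to their own nodes so queries can anchor on them directly
MATCH (d:Date)
MERGE (y:Year  {calendarYear: d.calendarYear})
MERGE (m:Month {monthKey: toString(d.calendarYear) + '-' + toString(d.monthOfYear)})
MERGE (d)-[:IN_MONTH]->(m)
MERGE (m)-[:IN_YEAR]->(y)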
Finally, creating a linked list between nodes of the same type may prove valuable. For example, (:Year)-[:NEXT_YEAR]->(:Year) allows you to compare one year’s sales to the previous year’s; the same can be done for previous and next month, or previous and next day. We can then make use of the reduced hop traversal to improve query performance when we are interested in sequences of dates. This is quite difficult to implement in a relational model, as each comparison to a previous or future period requires an additional column on the date dimension.
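Assuming the Year nodes from the sketch above, and that yearly totals have been stored on them, the linked list and a year-over-year comparison might look like:

// 1. Link each year to the one that follows it
//    (the cartesian match is fine here: there are only a handful of Year nodes)
MATCH (y1:Year), (y2:Year)
WHERE y2.calendarYear = y1.calendarYear + 1
MERGE (y1)-[:NEXT_YEAR]->(y2);

// 2. Compare each year's sales with the previous year's in a single hop
//    (totalSales on :Year is an assumption, e.g. written by a variant of the aggregation sketch above)
MATCH (prev:Year)-[:NEXT_YEAR]->(curr:Year)
RETURN curr.calendarYear AS year
, curr.totalSales AS sales
, curr.totalSales - prev.totalSales AS change_vs_previous_year
ORDER BY year ASC;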
Conclusion
- Cypher queries are less error-prone because it’s more difficult to miss join conditions when SQL-style FROM and JOIN are expressed through a single pattern.
- Kettle has a nice plugin to visualise and perform your output to Neo4j.
- We can optimise our graph for a number of different queries without impacting overall performance.
- The flexible, schema-less nature means changes can be made without refactoring the whole ETL.
- Graphs produce efficient recommendation queries.
- There are many improvements yet to explore.
Next Steps
- Compare performance on large, connected datasets between relational and graph databases.
- Load and query databases built from the ground up for connected use cases: social media, map navigation, city planning.
- Explore hybrid schemas (relational when needed, graph when appropriate) with a virtualisation layer.
- Optimise Date nodes for different use cases.
Where to get the code
The Neo4j ETL can be downloaded here: loading_northwind_neo4j.zip
The PostgreSQL ETL can be downloaded here: loading_northwind_postgres.zip