Generate reliable and meaningful insights from a solid, secure and scalable infrastructure. Our team of 25+ Data Engineers is ready to implement, maintain and optimise your data products and infrastructure end-to-end.
Data warehouse migration or implementation
Set up a data warehouse or data lake in the cloud. Our technical specialists build the right solution for your organisation. We create a plan to integrate all the sources you have and connect your existing data products.
Data governance & data architecture
Rely on the experience of our architects and developers and apply best practices in data governance and architecture. We implement data quality and other governance processes in new platforms, and we also offer advice and hands-on implementation for existing platforms.
Machine learning operations
Our data science and engineering specialists help you bring machine learning into production. Turn one-off insights into structural added value from your data and guarantee the quality of your model output.
Data Engineering challenges we can solve for you
Converting billions of streams into actionable insights with a new data & analytics platform
Merlin is the largest digital music licensing partner for independent labels, distributors, and other rightsholders. Merlin’s members represent 15% of the global recorded music market. The company has deals in place with Apple, Facebook, Spotify, YouTube, and 40 other innovative digital platforms around the world for its members’ recordings. The Merlin team tracks payments and usage reports from digital partners while ensuring that their members are paid, and reported to, accurately, efficiently, and consistently.
20% fewer complaints thanks to data-driven maintenance reports
An essential part of Otis's business operations is the maintenance of their elevators. To time this effectively and proactively inform customers about the status of their elevator, Otis wanted to implement continuous monitoring. They saw great potential in predictive maintenance and remote maintenance.
Insight into the complete sales funnel thanks to a data warehouse with dbt
Our consultants log the assignments they take on for our clients in our ERP system, AFAS. Our CRM system, HubSpot, holds all the relevant information from before a collaboration agreement is signed. When we close a deal, all the information from HubSpot automatically transfers to AFAS. So HubSpot is mainly used for the process before entering a collaboration, while AFAS covers the subsequent phase. To improve our staff planning and sharpen our financial forecasts, we decided to set up a data warehouse that integrates data from both sources.
Valuable insights from Microsoft Dynamics 365
Agrico is a cooperative of potato growers. They cultivate potatoes for various purposes, such as consumption and seed for future crops. These potatoes are exported worldwide through various subsidiaries. All logistical and operational data is stored in their ERP system, Microsoft Dynamics 365. Because this feature-rich system is complex, the data is not suitable for direct use in reporting. Agrico asked us to help make their ERP data understandable and to develop clear reports.
A standardised way of processing data using dbt
One of the largest online shops in the Netherlands wanted to develop a standardised way of data processing within one of its data teams. All data was stored in the scalable cloud data warehouse Google BigQuery. Large amounts of data were available within this platform regarding orders, products, marketing, returns, customer cases and partners.
Reliable reporting using robust Python code
The National Road Traffic Data Portal (NDW) is a valuable resource for municipalities, provinces, and the national government to gain insight into traffic flows and improve infrastructure efficiency.
Setting up a future-proof data infrastructure
Valk Exclusief is a chain of 4-star+ hotels with 43 hotels in the Netherlands. The hotel chain wants to offer guests a personal experience, both in the hotel and online.
A scalable data platform in Azure
TM Forum, an alliance of over 850 global companies, engaged our company as a data partner to identify and solve data-related challenges.
A fully automated data import pipeline
Stichting Donateursbelangen aims to strengthen trust between donors and charities. They believe that trust is built on raising money honestly, openly, transparently and respectfully, while using the donated funds effectively to make an impact. To further this goal, Stichting Donateursbelangen wants to share information about charities with donors through their own search engine.
Central data storage with a new data infrastructure
Dedimo is a collaboration of five mental healthcare initiatives. To continuously enhance the quality of their care, they want to organise internal processes more efficiently, using insights from the data that is available internally. Previously, they extracted this data themselves from different source systems with ad hoc scripts. They asked us to make this process more robust and efficient, and to professionalise it further, by centralising their data storage in a cloud data warehouse. Since they were already used to working with Google Cloud Platform (GCP), the goal was to set up the data infrastructure within this environment.
Improved data quality thanks to a new data pipeline
At Royal HaskoningDHV, the number of requests from customers with Data Engineering issues continues to climb, and the new department they have set up for this work is growing. They asked us to temporarily give their Data Engineering team extra capacity. One of the issues we helped with involved the Aa en Maas Water Authority.
A well-organised data infrastructure
FysioHolland is an umbrella organisation for physiotherapists in the Netherlands. A central service team relieves therapists of additional work, so that they can mainly focus on providing the best care. In addition to organic growth, FysioHolland is connecting new practices to the organisation. Each of these has its own systems, work processes and treatment codes. This has made FysioHolland's data management large and complex.
A scalable machine-learning platform for predicting billboard impressions
The Neuron provides a programmatic bidding platform to plan, buy and manage digital Out-Of-Home ads in real-time. They asked us to predict the number of expected impressions for digital advertising on billboards in a scalable and efficient way.
Measurable impact on social change using a data lake
RNW Media is an NGO that focuses on countries where there is limited freedom of expression. The organisation tries to make an impact through online channels such as social media and websites. To measure that impact, RNW Media drew up a Theory of Change (a kind of KPI framework for NGOs).
The proven added value for Fietsvoordeelshop
Fietsvoordeelshop has seven physical stores and a webshop, and is currently one of the most successful bike shops. When it came to data, the shop worked mainly in Excel, which meant a lot of manual work every week. As the organisation and the number of processes grew, the Excel file became increasingly large and unclear. Fietsvoordeelshop asked us to demonstrate, in a five-day data pressure cooker, that data-driven working could add value.
Q&A about Data Engineering
Read how we approach projects and what our vision of the modern Data Engineer is.
Trust your data
Do you want to start collecting data but don't know where to start? Have you already started but need assistance with monitoring your processes and supervising your quality? Or do you feel that the data you collect does not contribute optimally to achieving your goals? Joachim will gladly discuss the solutions with you.
Be inspired about Data Engineering
Bring structure to your data
There are many different forms of data storage. In practice, a (relational) database, a data warehouse, and a data lake are the most commonly used and often confused with each other. In this article, you will read about what they entail and how to use them.
Data quality: the foundation for effective data-driven work
Data projects often need to deliver results quickly. The field is relatively new, and to gain support, it must first prove its value. As a result, many organisations build data solutions without giving much thought to their robustness, often overlooking data quality. What are the risks if your data quality is not in order, and how can you improve it? Find the answers to the key questions about data quality in this article.
The all-round profile of the modern data engineer
Since the field of big data emerged, many elements of the modern data stack have become the data engineer's responsibility. What are these elements, and how should you build your data team?
Unlocking the power of Analytics Engineering
The world of data is continuously shifting, and so are the jobs and responsibilities within data teams. With this shift, an up-and-coming role has appeared on the horizon: the Analytics Engineer.
What is machine learning operations (MLOps)?
Bringing machine learning models to production has proven to be a complex task in practice. MLOps helps organisations that want to develop and maintain models themselves to ensure their quality and continuity. Read this article for answers to the most frequently asked questions on this topic.
What is a data architecture?
Working in a data-driven way helps you make better decisions. The better your data quality, the more you can rely on it. A good data architecture is a basic ingredient for data-driven working. In this article, we explain what a data architecture is and what a Data Architect does.
The foundation for Data Engineering: solid data pipelines
Put simply, Data Engineers work on data pipelines: data processes that retrieve data from one place, process it, and write it somewhere else. In this article you can read more about how data pipelines work and discover why they are so important for a solid data infrastructure.
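The extract-transform-load pattern behind most data pipelines can be sketched in a few lines of Python. This is an illustrative sketch only: the column names, the in-memory CSV source, and the `warehouse` list standing in for a real destination are all assumptions for the example, not part of any specific project.

```python
import csv
import io

def extract(source) -> list[dict]:
    """Read raw rows from a source (here: a CSV file-like object)."""
    return list(csv.DictReader(source))

def transform(rows: list[dict]) -> list[dict]:
    """Clean the raw rows: normalise names, cast amounts, drop incomplete rows."""
    return [
        {"customer": r["customer"].strip().title(),
         "amount": float(r["amount"])}
        for r in rows
        if r["amount"]  # skip rows with a missing amount
    ]

def load(rows: list[dict], destination: list) -> None:
    """Write the transformed rows to a destination (here: an in-memory list)."""
    destination.extend(rows)

# Wire the three steps together into one pipeline run.
source = io.StringIO("customer,amount\n alice ,10.5\nbob,\n carol ,7\n")
warehouse: list[dict] = []
load(transform(extract(source)), warehouse)
print(warehouse)
```

In practice each step would talk to real systems (an API or database on the extract side, a cloud data warehouse on the load side), and an orchestrator would schedule and monitor the runs, but the shape of the pipeline stays the same.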
What is a Data Engineer?
A Data Engineer is someone who feels completely at home in seas of data and the technical engineering required to convert that data into meaningful signals and insights.
Why do I need Data Engineers when I have Data Scientists?
It is now clear to most companies: data-driven decisions powered by Data Science add concrete value to business operations. Whether your goal is to build better marketing campaigns, perform preventive maintenance on your machines, or fight fraud more effectively, there are applications for Data Science in every industry.
Implementing a data platform
The purpose of this blog is to share our knowledge and experience with the community by describing guidelines for implementing a data platform in an organization. We understand that every organization's specific needs are different, that these needs affect the technologies used, and that no single architecture can satisfy them all. So, in this blog we keep things as general as we can.
5 reasons to use Infrastructure as Code (IaC)
Infrastructure as Code has proven itself as a reliable technique for setting up platforms in the cloud. However, it does require an additional investment of time from the developers involved. In which cases does the extra effort pay off? Find out in this article.
Setting up Azure App functions
In the article, we start by discussing serverless functions. Then we demonstrate how to use Terraform files to simplify deploying the target infrastructure, how to create a Function App in Azure, how to use GitHub workflows to manage continuous integration and deployment, and how to use branching strategies to selectively deploy code changes to specific Function App instances.
AWS (Amazon Web Services) vs GCP (Google Cloud Platform) for Apache Airflow
This article compares the two managed Apache Airflow services, Google Cloud Composer and Amazon MWAA, to help you understand their similarities and differences and the factors to consider when choosing between them. Note that there are other good options for hosting a managed Airflow implementation, such as the one offered by Microsoft Azure; these two are compared because of my hands-on experience with both services and their respective ecosystems.
Kubernetes-based event-driven autoscaling with KEDA: a practical guide
This article explains the essence of Kubernetes Event-Driven Autoscaling (KEDA). We then configure a local development environment to demonstrate KEDA using Docker and Minikube, describe the scenario we will implement to showcase it, and walk through each step of that scenario. By the end of the article, you will have a clear understanding of what KEDA entails and how you can implement an architecture with it.
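At its core, KEDA feeds the Horizontal Pod Autoscaler a target metric value per replica, and the resulting scaling rule amounts to a ceiling division clamped to configured bounds. The sketch below models that rule in Python; the function name and the example numbers (a queue of 47 messages, a target of 5 per replica) are illustrative assumptions, not KEDA's actual code.

```python
import math

def desired_replicas(metric_value: float, target_per_replica: float,
                     min_replicas: int = 0, max_replicas: int = 10) -> int:
    """Sketch of the scaling rule KEDA hands to the Horizontal Pod Autoscaler:
    scale to ceil(metric / target), clamped to the configured bounds."""
    if metric_value <= 0:
        return min_replicas  # KEDA can scale event-driven workloads to zero
    wanted = math.ceil(metric_value / target_per_replica)
    return max(min_replicas, min(wanted, max_replicas))

# e.g. 47 messages on a queue, each replica expected to handle 5:
print(desired_replicas(47, 5))  # scales out
print(desired_replicas(0, 5))   # scales in to the minimum (zero by default)
```

The ability to scale all the way to zero when the event source is idle is what distinguishes KEDA's event-driven approach from a plain CPU-based HPA.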