Your Data Engineering partner
Generate reliable and meaningful insights from a solid, secure and scalable infrastructure. Our team of 25+ Data Engineers is ready to implement, maintain and optimise your data products and infrastructure end-to-end.
Find the solution that fits your needs
Data warehouse migration or implementation
Set up a data warehouse or data lake in the cloud. Our technical specialists build the right solution for your organisation. We create a plan to integrate all the sources you have and connect your existing data products.
Data governance & data architecture
Rely on the experience of our architects and developers and apply best practices in data governance and architecture. We implement data quality and other governance processes in new platforms, and we also advise on and improve existing platforms.
Machine learning operations
Our data science and engineering specialists help you bring machine learning into production. Turn one-off insights into structural added value from your data and guarantee the quality of your model output.
AI Document Explorer
Improve your work efficiency with our AI Document Explorer. Streamline your work by quickly finding answers and accessing your documents, all in one secure place. Take the step towards working more efficiently and easily!
Data Engineering challenges we can solve for you
Fast and reliable internal information using AI Document Explorer
Financial institutions need to process large amounts of documentation. At this particular institution, an internal team facilitates this by, for example, creating summaries using text analysis and natural language processing (NLP) and making them available to the various business units. To conduct audits more efficiently, the team wanted to develop a question-and-answer model that surfaces the right information faster. When ChatGPT was launched, they asked us to create a proof of concept.
Working more efficiently thanks to migration to Databricks
The Kadaster manages complex (geo)data, including all real estate in the Netherlands. All data is stored and processed using an on-premise data warehouse in Postgres. They rely on an IT partner for maintaining this warehouse. The Kadaster aims to save costs and work more efficiently by migrating to a Databricks environment. They asked us to assist in implementing this data lakehouse in the Microsoft Azure Cloud.
Converting billions of streams into actionable insights with a new data & analytics platform
Merlin is the largest digital music licensing partner for independent labels, distributors, and other rightsholders. Merlin’s members represent 15% of the global recorded music market. The company has deals in place with Apple, Facebook, Spotify, YouTube, and 40 other innovative digital platforms around the world for its members’ recordings. The Merlin team tracks payments and usage reports from digital partners while ensuring that their members are paid and reported to accurately, efficiently, and consistently.
20% fewer complaints thanks to data-driven maintenance reports
An essential part of Otis's business operations is the maintenance of their elevators. To time this effectively and proactively inform customers about the status of their elevator, Otis wanted to implement continuous monitoring. They saw great potential in predictive maintenance and remote maintenance.
Insight into the complete sales funnel thanks to a data warehouse with dbt
Our consultants log the assignments they take on for our clients in our ERP system AFAS. In our CRM system HubSpot, we can see all the information relevant before signing a collaboration agreement. When we close a deal, all the information from HubSpot automatically transfers to AFAS. So, HubSpot is mainly used for the process before entering a collaboration, while AFAS is used for the subsequent phase. To tighten our people's planning and improve our financial forecasts, we decided to set up a data warehouse to integrate data from both data sources.
Valuable insights from Microsoft Dynamics 365
Agrico is a cooperative of potato growers. They cultivate potatoes for various purposes such as consumption and planting future crops. These potatoes are exported worldwide through various subsidiaries. All logistical and operational data is stored in their ERP system, Microsoft Dynamics 365. Due to the complexity of this system with its many features, the data is not suitable for direct use in reporting. Agrico asked us to help make their ERP data understandable and develop clear reports.
A standardised way of processing data using dbt
One of the largest online shops in the Netherlands wanted to develop a standardised way of data processing within one of its data teams. All data was stored in the scalable cloud data warehouse Google BigQuery. Large amounts of data were available within this platform regarding orders, products, marketing, returns, customer cases and partners.
Reliable reporting using robust Python code
The National Road Traffic Data Portal (NDW) is a valuable resource for municipalities, provinces, and the national government to gain insight into traffic flows and improve infrastructure efficiency.
Setting up a future-proof data infrastructure
Valk Exclusief is a chain of 4-star+ hotels with 43 hotels in the Netherlands. The hotel chain wants to offer guests a personal experience, both in the hotel and online.
A scalable data platform in Azure
TM Forum, an alliance of over 850 global companies, engaged our company as a data partner to identify and solve data-related challenges.
A fully automated data import pipeline
Stichting Donateursbelangen aims to strengthen trust between donors and charities. They believe that this trust is based on raising money honestly, openly, transparently and respectfully, while using the donations raised effectively to make an impact. To further this goal, Stichting Donateursbelangen wants to share information about charities with donors through their own search engine.
Central data storage with a new data infrastructure
Dedimo is a collaboration of five mental healthcare initiatives. To continuously enhance the quality of their care, they organise their internal processes more efficiently using insights from the data that is available internally. Previously, they acquired this data themselves from different source systems with ad hoc scripts. They asked for our help to make this process more robust and efficient, and to professionalise it further by storing their data centrally in a cloud data warehouse. Since they were already used to working with Google Cloud Platform (GCP), the goal was to set up the data infrastructure within this environment.
Improved data quality thanks to a new data pipeline
At Royal HaskoningDHV, the number of requests from customers with Data Engineering issues continues to climb. The new department they have set up for this is growing, so they asked us to temporarily give their Data Engineering team extra capacity. One of the issues we helped with involved the Aa en Maas Water Authority.
A well-organised data infrastructure
FysioHolland is an umbrella organisation for physiotherapists in the Netherlands. A central service team relieves therapists of additional work, so that they can mainly focus on providing the best care. In addition to organic growth, FysioHolland is connecting new practices to the organisation. Each of these has its own systems, work processes and treatment codes. This has made FysioHolland's data management large and complex.
A scalable machine-learning platform for predicting billboard impressions
The Neuron provides a programmatic bidding platform to plan, buy and manage digital Out-Of-Home ads in real-time. They asked us to predict the number of expected impressions for digital advertising on billboards in a scalable and efficient way.
Measurable impact on social change using a data lake
RNW Media is an NGO that focuses on countries where there is limited freedom of expression. The organisation tries to make an impact through online channels such as social media and websites. To measure that impact, RNW Media drew up a Theory of Change (a kind of KPI framework for NGOs).
Q&A about Data Engineering
Read how we approach projects and what our vision of the modern Data Engineer is.
Our team of over 15 Data Engineers* is capable of implementing complex solutions. At the same time, we have the flexibility to meet your specific needs. We build a bespoke solution that seamlessly integrates with your current infrastructure in Microsoft Azure, Google Cloud Platform (GCP), or Amazon Web Services (AWS).
Our Data Engineering consultants share their knowledge with you and with each other. This way, you can rely on the expertise of a whole team of specialists with knowledge in areas such as MLOps, DevOps, data warehousing, and infrastructure.
It is essential that you can use your new platform, tools, or insights correctly. Therefore, we ensure the right knowledge transfer in every project and can also provide training when needed.
*In the market, the overarching term 'Data Engineer' is still widely used. We see the trend towards more specific profiles, and they are also part of our team of technical specialists. Discover our perspective on this.
Resolving your data challenge in project form, with a deadline and a clear budget, is possible. Would you prefer more flexibility? You can also hire our consultants on a flexible basis.
Do you want to elevate the knowledge of your own staff? Our Data Academy offers subject-specific training and inspirational sessions to help you increase support within your organisation.
We build customised solutions that integrate seamlessly with your current infrastructure in Microsoft Azure, Google Cloud Platform (GCP) or Amazon Web Services (AWS).
To align as closely as possible with your organisation and requirements, we build a custom solution for you. We adapt our approach accordingly. Here's an overview of what you can expect:
- Our technical specialists engage in discussions about your current infrastructure. We discuss your needs and technical requirements. If you're not yet certain about your exact requirements, our research team can assist you in defining your problem. We can also help shape your data strategy initially.
- Once your challenge is defined, you will receive a technical proposal for a custom solution that seamlessly integrates with your existing processes.
- In close collaboration with your team, we handle the implementation from A to Z. If you prefer, one of our project managers can handle coordination and scheduling. Comprehensive documentation and knowledge transfer are naturally included.
We'd be happy to discuss the approach that suits you best.
Considering contacting us? We can imagine that you'd like to know what is coming next.
One of our Business Managers will guide you through 6 steps. Read the article.
Trust your data
Do you want to start collecting data but don't know where to start? Have you already started but need assistance with monitoring your processes and supervising your quality? Or do you feel that the data you collect does not contribute optimally to achieving your goals? Joachim will gladly discuss the solutions with you.
Business Manager
+31(0)20 308 43 90
+31(0)6 23 59 83 71
joachim.vanbiemen@digital-power.com
Be inspired about Data Engineering
Low-code/no-code or custom coding?
Years ago, you couldn't develop an application or process without knowledge of complex programming languages like JavaScript, PHP, and Python. You needed a programmer or Data Engineer. Today, there is a shortage of technical experts, while more and more low-code solutions are appearing on the market. These tools allow you to get started without in-depth technical knowledge. Whether this is the right solution for you depends on various factors. Make the right decision with the help of this article.
The organisational benefits of implementing your own AI-chatbot
With the increasing availability of cloud services that enable companies to leverage Large Language Models, it becomes relatively easy to set up your own GPT model. However, one important question needs to be answered before you start building: what are the benefits for my organisation?
How does the AI Document Explorer work in practice?
The AI Document Explorer (AIDE) is a cloud solution developed by Digital Power that utilises OpenAI's GPT model. It can be deployed to quickly gain insights into company documents. AIDE securely indexes your files, enabling you to ask questions about your own documents. Not only does it provide you with the answers you are looking for, but it also references the locations where these answers are found.
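AIDE's internals are not detailed here; as a rough illustration of the general idea only (index your documents, then answer questions with a reference to where the answer was found), here is a minimal keyword-based sketch in Python. Real systems like AIDE use embeddings and an LLM rather than word matching, and the document names below are hypothetical:

```python
def build_index(docs):
    # docs: {document_name: text}; map each word to the documents containing it
    index = {}
    for name, text in docs.items():
        for word in text.lower().split():
            index.setdefault(word, set()).add(name)
    return index

def query(index, docs, question):
    # Score documents by how many question words they contain, then return
    # the best match together with a reference to its source document
    scores = {}
    for word in question.lower().split():
        for name in index.get(word, ()):
            scores[name] = scores.get(name, 0) + 1
    if not scores:
        return None
    best = max(scores, key=scores.get)
    return {"source": best, "excerpt": docs[best][:80]}
```

The key property this sketch shares with the product description above is that every answer carries a `source` field, so users can verify where the information came from.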
Bring structure to your data
There are many different forms of data storage. In practice, a (relational) database, a data warehouse, and a data lake are the most commonly used and often confused with each other. In this article, you will read about what they entail and how to use them.
Data quality: the foundation for effective data-driven work
Data projects often need to deliver results quickly. The field is relatively new, and to gain support, it must first prove its value. As a result, many organisations build data solutions without giving much thought to their robustness, often overlooking data quality. What are the risks if your data quality is not in order, and how can you improve it? Find the answers to the key questions about data quality in this article.
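One practical way to start improving data quality is to run automated checks on incoming records. As a minimal sketch (not a specific tool, and the field names are hypothetical), such checks might count rule violations per batch:

```python
def check_quality(records, required_fields):
    """Run basic data-quality checks and count violations per rule."""
    issues = {"missing_field": 0, "duplicate_id": 0}
    seen_ids = set()
    for record in records:
        # A record fails if any required field is absent or empty
        if any(not record.get(field) for field in required_fields):
            issues["missing_field"] += 1
        # A record fails if its id was already seen in this batch
        if record.get("id") in seen_ids:
            issues["duplicate_id"] += 1
        seen_ids.add(record.get("id"))
    return issues
```

Monitoring such counts over time makes quality degradation visible before it reaches reports and models.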
The all-round profile of the modern data engineer
Since the field of big data emerged, many elements of the modern data stack became the data engineers' responsibility. What are these elements, and how should you build your data team?
Unlocking the power of Analytics Engineering
The world of data is continuously shifting and so are its corresponding jobs and responsibilities within data teams. With this, an up-and-coming role appeared on the horizon: the Analytics Engineer.
What is machine learning operations (MLOps)?
Bringing machine learning models to production has proven to be a complex task in practice. MLOps assists organisations that want to develop and maintain models themselves in ensuring the quality and continuity. Read this article and get answers to the most frequently asked questions on this topic.
What is a data architecture?
Working in a data-driven way helps you make better decisions. The better your data quality, the more you can rely on it. A good data architecture is a basic ingredient for data-driven working. In this article, we explain what a data architecture is and what a Data Architect does.
The foundation for Data Engineering: solid data pipelines
Essentially, Data Engineers work on data pipelines: data processes that retrieve data from one place, process it, and write it to another. In this article you can read more about how data pipelines work and discover why they are so important for a solid data infrastructure.
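As a minimal illustration of the extract-transform-load pattern behind most pipelines (a sketch only, with hypothetical CSV files and fields, not production code):

```python
import csv

def extract(path):
    # Read raw records from a source (here: a CSV file)
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(records):
    # Clean the data: drop rows without an id, normalise names
    return [
        {**r, "name": r["name"].strip().title()}
        for r in records
        if r.get("id")
    ]

def load(records, path):
    # Write the processed records to a target (here: another CSV file)
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=records[0].keys())
        writer.writeheader()
        writer.writerows(records)

def run_pipeline(source, target):
    load(transform(extract(source)), target)
```

Production pipelines add scheduling, monitoring and error handling on top of this basic shape, but the extract-transform-load structure stays the same.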
What does a (Cloud) Data Engineer do versus a Machine Learning Engineer?
In the world of data and technology, Data Engineers and Machine Learning Engineers are crucial players. Both roles are essential for designing, building, and maintaining modern data infrastructures and advanced machine learning (ML) applications. In this blog, we focus specifically on the roles and responsibilities of a Data Engineer and Machine Learning Engineer.
Why do I need Data Engineers when I have Data Scientists?
It is now clear to most companies: data-driven decisions by Data Science add concrete value to business operations. Whether your goal is to build better marketing campaigns, perform preventive maintenance on your machines or fight fraud more effectively, there are applications for Data Science in every industry.
Implementing a data platform
In this blog, we share our knowledge and experience with the community by describing guidelines for implementing a data platform in an organisation. We understand that every organisation has different needs, that these needs influence the technologies used, and that no single architecture can satisfy them all. We will therefore keep this blog as general as we can.
5 reasons to use Infrastructure as Code (IaC)
Infrastructure as Code has proven itself as a reliable technique for setting up platforms in the cloud. However, it does require an additional investment of time from the developers involved. In which cases does the extra effort pay off? Find out in this article.
Setting up Azure Function Apps
In the article, we start by discussing serverless functions. We then demonstrate how to use Terraform files to simplify deploying a target infrastructure, how to create a Function App in Azure, how to use GitHub workflows to manage continuous integration and deployment, and how to use branching strategies to selectively deploy code changes to specific Function App instances.
AWS (Amazon Web Services) vs GCP (Google Cloud Platform) for Apache Airflow
This article compares the two managed services Cloud Composer and MWAA, helping you understand the similarities, differences, and factors to consider when choosing between them. Note that there are other good options for hosting a managed Airflow implementation, such as the one offered by Microsoft Azure. These two are compared because of my hands-on experience with both managed services and their respective ecosystems.
Kubernetes-based event-driven autoscaling with KEDA: a practical guide
This article explains the essence of Kubernetes Event-Driven Autoscaling (KEDA). We then configure a local development environment to demonstrate KEDA using Docker and Minikube, describe the scenario we will implement to showcase it, and guide you through each step. By the end of the article, you will have a clear understanding of what KEDA entails and how you can implement an architecture with it yourself.