In the first part of our mini-series on digital twins in the energy sector, we took a look at the current situation in power distribution grids, while the second part focused on the potential of digital twin technology to ensure smooth grid operations even in times of expanding renewable energy sources and skills shortages. Now let's get a bit more specific.
We talked to our own experts, who oversee grid digitalization projects for distribution system operators all over Europe, to collect practical tips and recommendations. Specifically, we wanted to know: As a grid operator, where should I start if I want to create a digital twin of my own distribution network? What types of data play an important role, and how high does the quality of this data need to be? What are the steps towards a dynamic virtual copy of the grid?
Now we'd like to share with you their answers in this third and last part of our series.
In principle, digital twin technology can be implemented for various purposes. In the energy sector specifically, we have so far identified three main directions:
The first area of application deals mainly with condition monitoring and predictive maintenance for individual assets, for example transformers. The virtual copy collects sensor and other data to replicate every aspect of operation. This allows operators, for instance, to monitor remotely whether the transformer oil is at risk of overheating, which can damage the asset.
In the second case, a digital twin is used to plan construction work. The technology helps create exact virtual copies of the streets or woodland in areas where a grid operator plans, for example, to build new power lines and substations. This not only makes construction planning easier but also allows costs and risks to be evaluated much more accurately and at a very early stage.
Last but not least, the third direction is the area of application that we are going to dive into in more detail below. This type of digital twin is used to optimize grid operations and to make both short-term and long-term grid planning much easier.
Nowadays, it is safe to say that everything starts with data. The same is true for the creation of a dynamic virtual grid model (= grid digital twin) - any preparation work begins with data collection.
Specifically with distribution grids in mind, we need information that allows us to understand what is happening in the grid and where there is a need for action - for example, grid reinforcement measures.
We divide such data into three main categories:
We can generally say that we need to include as many data sources as possible in order to build an exact replica of the distribution grid. In the majority of cases, these are the software systems that the respective distribution system operator already has in place: geographic information system (GIS), asset information system, ERP, SCADA, etc.
Due to the increasing use of smart meters, meter data management systems (MDM) are also becoming more relevant for assessing the current network status. For the digital twin, the MDM can therefore provide further valuable information about grid participants.
Ultimately, we need an enormous variety of data: switching states, asset data, technical parameters, information about feed-ins and much more. Whether all that data sits in just a handful of systems or is scattered around in local databases and Excel spreadsheets is less important.
Now that we have identified the relevant data sources, the next step is…
A major challenge for distribution system operators today is that although they already have an enormous pool of valuable information about their power grid, this data often sits in silos across multiple IT systems. To be able to create a true digital twin of the distribution grid, you need to aggregate this data; ideally, through API-based connectors. Alternatively, you can use file-based data transfer.
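To make this aggregation step a bit more tangible, here is a minimal sketch in Python of what API-based connectors could look like. The endpoint URLs, source names and the result structure are purely illustrative assumptions; real connectors depend entirely on the specific GIS, SCADA and MDM products in use.

```python
import requests

# Hypothetical REST endpoints of the source systems (illustrative only).
SOURCES = {
    "gis":   "https://gis.example-dso.local/api/v1/assets",
    "scada": "https://scada.example-dso.local/api/v1/switching-states",
    "mdm":   "https://mdm.example-dso.local/api/v1/meter-readings",
}

def fetch(url: str) -> list[dict]:
    """Pull one dataset from a source system via its (assumed) REST API."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return response.json()

def aggregate_sources() -> dict[str, list[dict]]:
    """Collect all datasets into one structure from which a grid model can be built."""
    return {name: fetch(url) for name, url in SOURCES.items()}

if __name__ == "__main__":
    raw_data = aggregate_sources()
    for name, records in raw_data.items():
        print(f"{name}: {len(records)} records loaded")
```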
From our experience, there are three main aspects that one needs to pay close attention to:
From our point of view, a final evaluation of data quality should only take place once the data has been aggregated. Each individual dataset can seem error-free in itself; however, it is only when you put all datasets into the context of a power grid model that inconsistencies and errors come bubbling to the surface.
That doesn’t mean, though, that you can’t check beforehand if datasets are actually complete and information is plausible. With this in mind, we approach quality evaluation from two angles.
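As an illustration of these two angles, the sketch below runs a simple completeness check (are mandatory fields filled?) and a plausibility check (are values within a physically sensible range?) on a small line dataset. The field names, value ranges and sample records are assumptions made for the example, not a fixed rule set.

```python
# Minimal sketch of per-dataset quality checks before aggregation.
# Field names and limits are illustrative assumptions.
REQUIRED_FIELDS = ["line_id", "from_node", "to_node", "length_km", "cross_section_mm2"]

def completeness_issues(record: dict) -> list[str]:
    """Angle 1: are all mandatory fields present and filled?"""
    return [f for f in REQUIRED_FIELDS if record.get(f) in (None, "")]

def plausibility_issues(record: dict) -> list[str]:
    """Angle 2: are the values physically plausible?"""
    issues = []
    if not 0 < record.get("length_km", 0) < 100:  # a single distribution line segment should be well below 100 km
        issues.append("length_km out of plausible range")
    if record.get("cross_section_mm2", 0) not in (50, 95, 120, 150, 185, 240):
        issues.append("unusual conductor cross-section")
    return issues

lines = [
    {"line_id": "L-001", "from_node": "N1", "to_node": "N2", "length_km": 0.4, "cross_section_mm2": 150},
    {"line_id": "L-002", "from_node": "N2", "to_node": "",   "length_km": 250, "cross_section_mm2": 70},
]

for rec in lines:
    problems = completeness_issues(rec) + plausibility_issues(rec)
    if problems:
        print(rec["line_id"], "->", problems)
```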
"The quality of our data is so poor; we probably shouldn't even think about creating and deploying a dynamic grid model until it's improved". One hears this argument quite often.
Indeed, it is very often the case that the DSOs we speak to wish their data were of much higher quality than it is now. However, this is not a good enough reason not to explore the potential of digital twin technology.
Quite the contrary: Only when all data "lives" in a consistent grid model that is available to everyone and is always up-to-date can you steadily and systematically improve its quality.
Because knowing that your data is not at an optimal level of accuracy is not enough. You also need to know exactly where improvement is needed, and considering the massive amount of data sitting across multiple systems, that's not an easy task.
However, when data is aggregated and linked together in a shared context, you can assess its quality in a more structured way and actually improve it in a focused manner.
It might sound extreme, but when it comes to a "live" grid model, 100 percent data quality is impossible to achieve. The distribution grid changes very quickly nowadays. This in turn means that the grid model continuously receives new data from the source systems, which may contain errors or information gaps.
With this in mind, it's important to establish certain processes and methods that help ensure a high level of data quality long-term; and not only establish such processes, but actually follow them rigorously.
Still, while the latest data batch is being evaluated and cleaned up, the next batch is already waiting, and it is potentially not error-free either. And so on and so forth. For this reason, striving for 100 percent data quality is unrealistic.
Based on the experience of our customers, however, we can say that 90 percent data quality can be achieved quite quickly; as a medium-term goal, one can aim for a value of slightly over 95 percent.
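The article does not define how such a percentage is measured; one common proxy is the share of records that pass all validation rules. A minimal sketch of that idea, reusing the kind of checks shown above (the rule functions themselves are placeholders):

```python
def quality_score(records, checks):
    """Share of records (in percent) that pass every validation rule - one possible quality proxy."""
    if not records:
        return 100.0
    passed = sum(1 for rec in records if not any(check(rec) for check in checks))
    return 100.0 * passed / len(records)

# Illustrative usage with the completeness/plausibility checks sketched earlier:
# quality_score(lines, [completeness_issues, plausibility_issues])
```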
Now that we've gone through the steps required for creating a digital twin of the distribution grid, let's briefly review the areas of grid operation where we have seen digital twin technology have the fastest positive impact.
Distribution system operators have generally reported that a digital twin of their grid gives them a much better understanding of its current performance. Such a comprehensive, "live" grid model allows them, for example, to see right away whether there is enough hosting capacity in a specific area to integrate further PV systems.
This in turn allows well-founded decisions about which areas of the power grid must be expanded next. Nowadays, one can no longer rely on traditional reference values and fixed planning principles. The increasing use of decentralized power generators and consumers makes it significantly more challenging to identify short-term measures for increasing grid hosting capacity as well as to plan strategic grid expansion and reinforcement measures.
However, if you have a good data basis at your disposal - one where as many pieces of information as possible come together - you can carry out much more granular and comprehensive data analysis for accurate power supply forecasting. As a result, distribution system operators can not only ensure reliable grid operations but also save financial resources by making sound investment decisions - for instance, reinforcing grid areas that will only become critical in two years' time.
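To make the hosting-capacity check mentioned above more concrete, here is a minimal sketch using the open-source pandapower library on one of its bundled example networks. The PV size, the chosen bus and the loading/voltage limits are illustrative assumptions; a real check would run on the DSO's own digital twin data.

```python
import pandapower as pp
import pandapower.networks as pn

# Example low-voltage network shipped with pandapower (stand-in for a real grid model).
net = pn.create_cigre_network_lv()

# Assume a new 30 kW PV system at an arbitrary bus (illustrative values).
candidate_bus = net.load.bus.iloc[0]
pp.create_sgen(net, bus=candidate_bus, p_mw=0.03, q_mvar=0.0, name="new PV")

pp.runpp(net)  # run a power flow on the updated model

# Simple hosting-capacity criteria (assumed limits): thermal loading and voltage band.
line_ok = net.res_line.loading_percent.max() <= 100
trafo_ok = net.res_trafo.loading_percent.max() <= 100
voltage_ok = net.res_bus.vm_pu.between(0.9, 1.1).all()

print("PV system can be hosted:", line_ok and trafo_ok and voltage_ok)
```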
Moreover, a grid digital twin enables process digitalization and process automation at a level that cannot be achieved without a complete and, more importantly, consistent electrical grid model.
The grid connection check is one example. In order to automate individual steps such as calculating the best possible connection point or performing a grid integration study, one has to ensure that certain grid simulations can be carried out automatically, too.
This in turn means that the basis for such simulations - i.e. a dynamic digital grid model that is always up-to-date - already exists and does not require any manual aggregation of data. Because as soon as there is manual input, by definition it can no longer be considered automation. In order to build such a (dynamic + digital + always up-to-date) power grid model, one will inevitably have to use digital twin technology.
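Building on the same idea, an automated grid connection check could simply repeat such a simulation for every candidate connection point and rank the results. The sketch below is a simplified illustration, again using pandapower and an example network; a real check would also consider voltage-change limits, protection settings and further criteria.

```python
import copy
import pandapower as pp
import pandapower.networks as pn

def best_connection_point(net, candidate_buses, p_mw):
    """Return the candidate bus where a new feed-in of p_mw causes the lowest peak line loading."""
    results = {}
    for bus in candidate_buses:
        scenario = copy.deepcopy(net)                 # never modify the shared grid model
        pp.create_sgen(scenario, bus=bus, p_mw=p_mw)  # add the requested generator
        pp.runpp(scenario)                            # automated power flow simulation
        results[bus] = scenario.res_line.loading_percent.max()
    return min(results, key=results.get), results

# Illustrative usage on an example network; candidate buses and plant size are assumptions.
net = pn.create_cigre_network_lv()
best_bus, loadings = best_connection_point(net, candidate_buses=list(net.load.bus.unique()[:3]), p_mw=0.05)
print("Best connection point:", best_bus)
```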
Perhaps it sounds familiar to you, too: One department creates its own grid model on the basis of the master data system in order to carry out grid connection checks. Then, maybe six months later, another department does the same, only this time to plan grid reinforcement measures. Now there are two different versions of the grid model, which, on top of that, "live" locally on the computers of individual colleagues.
Or there are two master data systems that are used for different purposes. Ideally, these systems are maintained in parallel so that all colleagues can work on the same data basis. However, the reality is often very different.
With a digital twin, on the other hand, you have a consistent data basis for decision-making. The grid model "lives" online and is, therefore, available to every colleague at any time as a single source of truth. This way, it can also be continuously updated whenever there are any changes.
This means that all decisions the distribution system operator has to make in order to ensure reliable grid operations are based on a universal database that all relevant stakeholders have access to.
This project is supported by the German Federal Ministry for Economic Affairs and Climate Action as part of the Renewable Energy Solutions Programme of the German Energy Solutions Initiative.
The German Energy Agency (dena) is a centre of excellence for the applied energy transition and climate protection. dena studies the challenges of building a climate-neutral society and supports the German government in achieving its energy and climate policy objectives. Since its foundation in 2000, dena has worked to develop and implement solutions and bring together national and international partners from politics, industry, the scientific community and all parts of society. dena is a project enterprise and a public company owned by the German federal government. dena's shareholders are the Federal Republic of Germany and the KfW Group.
www.dena.de/en
With the aim of positioning German technologies and know-how worldwide, the German Energy Solutions Initiative of the Federal Ministry for Economic Affairs and Climate Action (BMWK) supports suppliers of climate-friendly energy solutions in opening up foreign markets. The focus lies on renewable energies, energy efficiency, smart grids and storage, as well as technologies such as power-to-gas and fuel cells. Aimed in particular at small and medium-sized enterprises, the German Energy Solutions Initiative supports participants through measures to prepare market entry as well as to prospect, develop and secure new markets.
www.german-energy-solutions.de/en
With the RES programme, the Energy Export Initiative of the Federal Ministry for Economic Affairs and Climate Action (BMWK) helps German companies in the renewable energy and energy efficiency sectors enter new markets. Within the framework of the programme, reference plants are installed and marketed with the support of the German Energy Agency (dena). Information and training activities help ensure a sustainable market entry and demonstrate the quality of climate-friendly technologies made in Germany.
https://www.german-energy-solutions.de/GES/Redaktion/EN/Basepages/Services/dena-res.html