11/12/2020

Article 8: The Top 4 Solar Asset Monitoring Challenges—and What You Can Do About Them

Overcoming the Model Selection Challenge

By Steve Hanawalt

The Model Selection Challenge

As we discussed in the fourth article in this series, the purpose of a solar monitoring system is to characterize the operational performance of the plant’s assets so we can verify the equipment is performing as expected. For the solar asset class, this is particularly difficult because the electric generators of large-scale solar systems (the modules) are not metered, which makes modeling the health of the DC array equipment uniquely challenging.

One Size Does Not Fit All

The information technology world is currently abuzz about the potential of big data and advanced analytics — and for good reason. With a wealth of plant operating data to be mined and the promise of identifying (or even predicting) anomalous plant behavior, it would be a shame to let this data go to waste. In addition, simple performance models are limited when it comes to identifying problems in the solar power asset class.

At Power Factors, we have found that general-purpose advanced analytic performance models do not work well for the solar asset class. Why? The reason has to do with two characteristics of the solar power asset class that distinguish it from other power asset classes:

  1. Solar power assets are not metered at the source of electric generation
  2. Solar power facilities are unmanned and geographically distributed

General-purpose machine learning, artificial intelligence (AI) and neural-network-based performance models don’t need to know anything about the asset they’re monitoring other than what can be learned by observation. These algorithms aren’t dependent on a physical model of the equipment being monitored.

In other words, it doesn’t really matter whether the machine learning algorithm is monitoring a commercial refrigerator or a solar DC array. The algorithm is simply trained to recognize the performance signature of a well-behaving refrigerator or DC array. When that performance signature deviates by a statistically significant amount, an alert is triggered.
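
To make that idea concrete, here is a minimal, hypothetical sketch of a signature-based detector of the kind described above: learn a baseline from a known-healthy operating period, then alert when a new sample deviates by more than a set number of standard deviations. The features, data and threshold are illustrative assumptions, not any particular vendor’s method.

```python
import numpy as np

def fit_signature(history: np.ndarray):
    """Learn a baseline signature: per-feature mean and standard deviation."""
    return history.mean(axis=0), history.std(axis=0)

def is_anomalous(sample: np.ndarray, mean: np.ndarray, std: np.ndarray,
                 z_threshold: float = 3.0) -> bool:
    """Alert when any feature deviates beyond z_threshold sigmas from baseline."""
    z = np.abs((sample - mean) / std)
    return bool((z > z_threshold).any())

# Rows of [dc_power_kw, module_temp_c] from a healthy period (simulated here)
rng = np.random.default_rng(0)
history = rng.normal(loc=[500.0, 25.0], scale=[20.0, 2.0], size=(1000, 2))
mean, std = fit_signature(history)

print(is_anomalous(np.array([400.0, 25.0]), mean, std))  # True -> trigger alert
```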

But most solar plants have no metering in the DC array. Without measurements at the solar generator (module) level, general-purpose algorithms can’t provide detailed insight into the specific location of a problem.

The generic performance model can communicate things like, “Blocks 2 and 5 are not running as well as they used to.” It can’t communicate things like, “There are 7 open-circuit strings in Block 2 and module soiling in Block 5.” Adding that diagnostic, asset-level detail to an alert requires a monitoring platform with deep knowledge of PV failure modes designed into it.
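
For illustration only, the difference between the two kinds of alerts might look something like this; the field names and numbers are hypothetical, not an actual Drive Pro schema.

```python
# A generic model can only say *that* something is off, at a summary level.
generic_alert = {
    "asset": "Block 2",
    "message": "Performance below learned baseline",
    "deviation_pct": -4.2,
}

# A PV-aware model can say *what* is wrong, *where*, and *how much it costs*.
diagnostic_alert = {
    "asset": "Block 2",
    "failure_mode": "open_circuit_string",  # drawn from a PV failure-mode library
    "affected_strings": 7,
    "estimated_loss_kwh_per_day": 310.0,    # feeds the truck-roll decision
}
```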

But for the solar power industry, summary-level alerts of plant performance losses are not good enough. Why? Because of solar asset class distinction number two: solar power facilities are unstaffed and geographically distributed. Investigating a summary-level alert in the field requires a costly truck roll.

All other power generation asset classes have personnel on site or nearby to perform this task. If a performance shortfall needs to be investigated, that task is simply added to the daily operating rounds of the local technicians at no incremental cost to the owner. But the solar asset class can’t cost-justify rolling a truck every time the performance monitoring system thinks there is a problem with the equipment.
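
As a back-of-the-envelope sketch of that economic trade-off, a dispatch rule might compare the revenue lost while waiting for the next scheduled site visit against the cost of rolling a truck now. All of the prices and durations below are illustrative assumptions.

```python
def truck_roll_justified(daily_loss_kwh: float,
                         energy_price_per_kwh: float = 0.04,
                         truck_roll_cost: float = 600.0,
                         days_until_next_visit: int = 90) -> bool:
    """Dispatch only when deferring the repair costs more than the truck roll."""
    loss_if_deferred = daily_loss_kwh * energy_price_per_kwh * days_until_next_visit
    return loss_if_deferred > truck_roll_cost

# Using the 310 kWh/day loss estimate from the example alert above:
print(truck_roll_justified(310.0))  # 310 * 0.04 * 90 = 1116 > 600 -> True
```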

In our experience, successful solar power advanced analytics need to be developed by industry professionals and built on asset-specific performance models. This yields trustworthy performance insights with enough detail about the problem to make cost-effective truck-roll decisions and to correct the problem efficiently in the field.
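
As one example of an asset-specific physical model, expected DC power can be computed from plane-of-array irradiance and cell temperature using the standard temperature-corrected power equation, then compared against the measured value. The nameplate rating and temperature coefficient below are illustrative.

```python
def expected_dc_power_kw(g_poa_w_m2: float, t_cell_c: float,
                         p_stc_kw: float = 1000.0,      # array rating at STC
                         gamma_pct_per_c: float = -0.4  # module power temp. coefficient
                         ) -> float:
    """Temperature-corrected expected power: P = P_stc * (G/1000) * (1 + gamma*(Tc - 25))."""
    return p_stc_kw * (g_poa_w_m2 / 1000.0) * (1.0 + gamma_pct_per_c / 100.0 * (t_cell_c - 25.0))

expected_kw = expected_dc_power_kw(g_poa_w_m2=750.0, t_cell_c=45.0)  # ~690 kW
measured_kw = 642.0
loss_pct = 100.0 * (expected_kw - measured_kw) / expected_kw
print(f"expected {expected_kw:.0f} kW, measured {measured_kw:.0f} kW, loss {loss_pct:.1f}%")
```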

There’s No Direct Flight to Performance Insights

As I’ve discussed in a previous article, when it comes to the solar asset class, there is no direct flight from data consumed by general-purpose performance models to trustworthy, actionable insights. Much of the heavy lifting in solar performance monitoring happens at the data foundation level.

Before we can “tease out” actionable insights at the DC array equipment level, we need to meticulously prepare the data. Power Factors’ Drive Pro asset performance management (APM) solution performs hundreds of data quality and data validation tests before analyzing the performance signature of DC array equipment. Only after this robust set of data qualification tests is applied to the raw operating data can the event signatures be auto-classified.
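
For a flavor of what such qualification tests can look like, here is a small, illustrative sketch using pandas; the specific checks, column names and limits are assumptions for the example, not Drive Pro’s actual test suite.

```python
import pandas as pd

def qualify(df: pd.DataFrame) -> pd.DataFrame:
    """Flag interval records that fail basic validation before any modeling."""
    checks = pd.DataFrame(index=df.index)
    checks["irradiance_in_range"] = df["g_poa"].between(0, 1500)           # W/m^2
    checks["power_in_range"] = df["dc_power"].between(0, df["nameplate"])  # kW
    # A sensor reporting the same value all day is stuck, not steady
    checks["sensor_not_stuck"] = df["g_poa"].diff().abs().rolling(12, min_periods=1).sum() > 0
    # Daytime irradiance with zero power is an outage or a data fault
    checks["daylight_consistency"] = ~((df["g_poa"] > 200) & (df["dc_power"] <= 0))
    df["qualified"] = checks.all(axis=1)
    return df
```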

Without this scalable and robust data preparation step, Drive Pro, like general-purpose performance models, would only be able to detect high-level performance anomalies in the DC array. In our many conversations with solar plant owners and operators, we have heard consistently that they need a tool that not only tells them there is a problem, but also gives them trusted information about its impact and location.

What is the Best Model?

When I ask our data scientists, performance engineers and product managers what the best performance model is for identifying solar DC array problems, their answer is always something along the lines of, “It depends on what you are looking for.” We have found that no one algorithm or method is a silver bullet for detecting all types of PV power equipment performance problems.

Because of this, Drive Pro uses a combination of physical, empirical, statistical and machine learning performance models and algorithms to detect all types of losses in the DC array. Sometimes a simple data regression is the best tool for the job; other times, digital twins and physical models are the best tools to apply. When I asked our senior data scientist if he could solve the problem of PV performance monitoring using a general-purpose algorithm, he said, “Probably, but we wouldn’t have been able to skip all of the steps we have used as subject matter experts (SMEs) to process, classify and filter the data prior to consuming it in the algorithm.”
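
To illustrate the “simple data regression” end of that spectrum, the sketch below fits DC power against plane-of-array irradiance for a baseline period and a recent period, then compares the slopes; a sustained slope drop is a classic soiling or degradation signature. The simulated data and the 3% threshold are illustrative assumptions.

```python
import numpy as np

def performance_slope(g_poa: np.ndarray, dc_power: np.ndarray) -> float:
    """Least-squares slope of power vs. irradiance (kW per W/m^2)."""
    slope, _intercept = np.polyfit(g_poa, dc_power, 1)
    return slope

rng = np.random.default_rng(1)
g = np.linspace(200.0, 1000.0, 50)
baseline = performance_slope(g, 0.95 * g + rng.normal(0, 5, 50))  # healthy period
current = performance_slope(g, 0.90 * g + rng.normal(0, 5, 50))   # recent period

if (baseline - current) / baseline > 0.03:  # more than 3% slope loss
    print("Flag block for soiling/degradation review")
```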

My conclusion, based on these and many other discussions with solar SMEs as well as my years of monitoring power generation equipment, is that the most important thing in performance monitoring is not just the tool, but who built it. I see no way around this problem. The unique characteristics of the solar power asset class require deep expertise in the technology itself to model and detect performance problems in its generating equipment.

Therefore, I see only three viable solutions to the solar power performance model problem:

  1. Build – Depending on the tools you use to create your own performance model and monitoring solution, this is a journey of a few to several years requiring a team of dedicated experts.
  2. Buy Generic – If you choose to develop a monitoring solution using a general-purpose advanced analytics engine, you will either need to build the platform from the ground up or use a system integrator. This is a two-to-three-year effort with high technology risk and a large investment.
  3. Buy Purpose-Built – This solution is developed and maintained by industry experts, is offered as a cancellable subscription, and its value can be demonstrated prior to purchase.

Summary

If the model selection problem is not addressed properly, your monitoring system will likely fail under the weight of the solar power data tsunami. It has been our experience that a one-size-fits-all, generic-model approach will fall short. Because of the unique nature of the solar power asset class, the best path to a successful asset performance management platform is one that is purpose-built by industry professionals.

Steve Hanawalt is an EVP and Co-Founder at Power Factors. To learn more about Drive Pro, check out these recent webinar recordings with Power Factors’ SMEs.
