Black-Box Method

Black-box methods are better suited when the total building energy consumption needs to be benchmarked. Since the achievable benchmarking accuracy depends a lot on the experience of the modeller, the modeller himself/herself should be another key factor in choosing which method to use.

From: Applied Energy, 2014

Program Design and Analysis

Marilyn Wolf, in Computers as Components (Third Edition), 2012

5.10 Program Validation and Testing

Complex systems need testing to ensure that they work as they are intended. But bugs can be subtle, particularly in embedded systems, where specialized hardware and real-time responsiveness make programming more challenging. Fortunately, there are many available techniques for software testing that can help us generate a comprehensive set of tests to ensure that our system works properly. We examine the role of validation in the overall design methodology in Section 9.6. In this section, we concentrate on nuts-and-bolts techniques for creating a good set of tests for a given program.

The first question we must ask ourselves is how much testing is enough. Clearly, we cannot test the program for every possible combination of inputs. Because we cannot implement an infinite number of tests, we naturally ask ourselves what a reasonable standard of thoroughness is. One of the major contributions of software testing is to provide us with standards of thoroughness that make sense. Following these standards does not guarantee that we will find all bugs. But by breaking the testing problem into subproblems and analyzing each subproblem, we can identify testing methods that provide reasonable amounts of testing while keeping the testing time within reasonable bounds.

We can use various combinations of two major types of testing strategies:

  • Black-box methods generate tests without looking at the internal structure of the program.

  • Clear-box (also known as white-box) methods generate tests based on the program structure.

In this section we cover both types of tests, which complement each other by exercising programs in very different ways.

5.10.1 Clear-Box Testing

The control/data flow graph extracted from a program's source code is an important tool in developing clear-box tests for the program. To adequately test the program, we must exercise both its control and data operations.

In order to execute and evaluate these tests, we must be able to control variables in the program and observe the results of computations, much as in manufacturing testing. In general, we may need to modify the program to make it more testable. By adding new inputs and outputs, we can usually substantially reduce the effort required to find and execute the test. No matter what we are testing, we must accomplish the following three things in a test:

  • Provide the program with inputs that exercise the test we are interested in.

  • Execute the program to perform the test.

  • Examine the outputs to determine whether the test was successful.

Example 5.11 illustrates the importance of observability and controllability in software testing.

Example 5.11

Controlling and Observing Programs

Let's first consider controllability by examining the following FIR filter with a limiter:

firout = 0.0; /* initialize filter output */
/* compute buff*c in bottom part of circular buffer */
for (j = curr, k = 0; j < N; j++, k++)
   firout += buff[j] * c[k];
/* compute buff*c in top part of circular buffer */
for (j = 0; j < curr; j++, k++)
   firout += buff[j] * c[k];
/* limit output value */
if (firout > 100.0) firout = 100.0;
if (firout < -100.0) firout = -100.0;

The above code computes the output of an FIR filter from a circular buffer of values and then limits the maximum filter output (much as an overloaded speaker will hit a range limit). If we want to test whether the limiting code works, we must be able to generate two out-of-range values for firout: positive and negative. To do that, we must fill the FIR filter's circular buffer with N values in the proper range. Although there are many sets of values that will work, it will still take time for us to properly set up the filter output for each test.

This code also illustrates an observability problem. If we want to test the FIR filter itself, we look at the value of firout before the limiting code executes. We could use a debugger to set breakpoints in the code, but this is an awkward way to perform a large number of tests. If we want to test the FIR code independent of the limiting code, we would have to add a mechanism for observing firout independently.

Being able to perform this process for a large number of tests entails some amount of drudgery, but that drudgery can be alleviated with good program design that simplifies controllability and observability.
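To make that concrete, the following is a minimal, hypothetical refactoring sketch (the names fir_compute and limit and the filter length are ours, not the book's) showing how separating the FIR computation from the limiter improves both controllability and observability: a test can fill the buffer directly and read the unlimited output, or drive the limiter with chosen values without setting up the filter at all.

#include <stdio.h>

#define N 8   /* hypothetical filter length, for illustration only */

/* Compute the raw FIR output from the circular buffer. Exposing this as a
   separate function makes firout observable before the limiter runs. */
double fir_compute(const double buff[N], const double c[N], int curr) {
   double firout = 0.0;
   int j, k;
   for (j = curr, k = 0; j < N; j++, k++)   /* bottom part of circular buffer */
      firout += buff[j] * c[k];
   for (j = 0; j < curr; j++, k++)          /* top part of circular buffer */
      firout += buff[j] * c[k];
   return firout;
}

/* Limiter isolated so it can be tested directly with chosen inputs. */
double limit(double firout) {
   if (firout > 100.0) firout = 100.0;
   if (firout < -100.0) firout = -100.0;
   return firout;
}

int main(void) {
   double buff[N], c[N];
   int i;
   for (i = 0; i < N; i++) { buff[i] = 50.0; c[i] = 1.0; }  /* drives firout out of range */
   double raw = fir_compute(buff, c, 0);    /* observe the unlimited value directly */
   printf("raw = %f, limited = %f\n", raw, limit(raw));
   printf("limit(-250.0) = %f\n", limit(-250.0)); /* exercise the limiter with no buffer setup */
   return 0;
}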

The next task is to determine the set of tests to be performed. We need to perform many different types of tests to be confident that we have identified a large fraction of the existing bugs. Even if we thoroughly test the program using one criterion, that criterion ignores other aspects of the program. Over the next few pages we will describe several very different criteria for program testing.

Execution paths

The most fundamental concept in clear-box testing is the path of execution through a program. Previously, we considered paths for performance analysis; we are now concerned with making sure that a path is covered and determining how to ensure that the path is in fact executed. We want to test the program by forcing the program to execute along chosen paths. We force the execution of a path by giving it inputs that cause it to take the appropriate branches. Execution of a path exercises both the control and data aspects of the program. The control is exercised as we take branches; both the computations leading up to the branch decision and other computations performed along the path exercise the data aspects.

Is it possible to execute every complete path in an arbitrary program? The answer is no, because the program may contain a while loop that is not guaranteed to terminate. The same is true for any program that operates on a continuous stream of data, because we cannot arbitrarily define the beginning and end of the data stream. If the program always terminates, then there are indeed a finite number of complete paths that can be enumerated from the path graph. This leads us to the next question: Does it make sense to exercise every path? The answer to this question is no for most programs, because the number of paths, especially for any program with a loop, is extremely large. However, the choice of an appropriate subset of paths to test requires some thought.

Example 5.12 illustrates the consequences of two different choices of testing strategies.

Example 5.12

Choosing the Paths to Test

We have at least two reasonable ways to choose a set of paths in a program to test:

  • execute every statement at least once;

  • execute every direction of a branch at least once.

These conditions are equivalent for structured programming languages without gotos, but are not the same for unstructured code. Most assembly language is unstructured, and state machines may be coded in high-level languages with gotos.

To understand the difference between statement and branch coverage, consider this CDFG:

We can execute every statement at least once by executing the program along two distinct paths. However, this leaves branch a out of the lower conditional uncovered. To ensure that we have executed along every edge in the CDFG, we must execute a third path through the program. This path does not test any new statements, but it does cause a to be exercised.

How do we choose a set of paths that adequately covers the program's behavior? Intuition tells us that a relatively small number of paths should be able to cover most practical programs. Graph theory helps us get a quantitative handle on the different paths required. In an undirected graph, we can form any path through the graph from combinations of basis paths. (Unfortunately, this property does not strictly hold for directed graphs such as CDFGs, but this formulation still helps us understand the nature of selecting a set of covering paths through a program.) The term “basis set” comes from linear algebra. Figure 5.25 shows how to evaluate the basis set of a graph. The graph is represented as an incidence matrix. Each row and column represents a node; a 1 is entered for each node pair connected by an edge. We can use standard linear algebra techniques to identify the basis set of the graph. Each vector in the basis set represents a primitive path. We can form new paths by adding the vectors modulo 2. Generally, there is more than one basis set for a graph.

Figure 5.25. The matrix representation of a graph and its basis set.

The basis set property provides a metric for test coverage. If we cover all the basis paths, we can consider the control flow adequately covered. Although the basis set measure is not entirely accurate because the directed edges of the CDFG may make some combinations of paths infeasible, it does provide a reasonable and justifiable measure of test coverage.

A simple measure, cyclomatic complexity [McC76], allows us to measure the control complexity of a program. Cyclomatic complexity is an upper bound on the size of the basis set. If e is the number of edges in the flow graph, n the number of nodes, and p the number of components in the graph, then the cyclomatic complexity is given by

M = e − n + 2p     (Eq. 5.1)

For a structured program, M can be computed by counting the number of binary decisions in the flow graph and adding 1. If the CDFG has higher-order branch nodes, add b − 1 for each b-way branch. In the example of Figure 5.26, the cyclomatic complexity evaluates to 4. Because there are actually only three distinct paths in the graph, cyclomatic complexity in this case is an overly conservative bound.

Figure 5.26. Cyclomatic complexity.
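As a quick sketch of the arithmetic (the edge and node counts below are hypothetical and are not those of Figure 5.26):

#include <stdio.h>

/* Cyclomatic complexity from Eq. 5.1: M = e - n + 2p. */
int cyclomatic(int e, int n, int p) {
   return e - n + 2 * p;
}

/* Shortcut for structured programs: count the binary decisions and add 1
   (add b - 1 for each b-way branch instead). */
int cyclomatic_structured(int binary_decisions) {
   return binary_decisions + 1;
}

int main(void) {
   /* a hypothetical single-component flow graph with 9 edges and 7 nodes */
   printf("M = %d\n", cyclomatic(9, 7, 1));        /* prints M = 4 */
   printf("M = %d\n", cyclomatic_structured(3));   /* 3 binary decisions -> 4 */
   return 0;
}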

Another way of looking at control flow–oriented testing is to analyze the conditions that control the conditional statements. Consider the following if statement:

if ((a == b) || (c >= d)) { … }

This complex condition can be exercised in several different ways. If we want to truly exercise the paths through this condition, it is prudent to exercise the conditional's elements in ways related to their own structure, not just the structure of the paths through them. A simple condition testing strategy is known as branch testing [Mye79]. This strategy requires the true and false branches of a conditional and every simple condition in the conditional's expression to be tested at least once.

Example 5.13 illustrates branch testing.

Example 5.13

Condition Testing with the Branch Testing Strategy

Assume that the code below is what we meant to write.

if (a || (b >= c)) { printf("OK\n"); }

The code that we mistakenly wrote instead follows:

if (a && (b >= c)) { printf("OK\n"); }

If we apply branch testing to the code we wrote, one of the tests will use these values: a = 0, b = 3, c = 2 (making a false and b >= c true). In this case, the code should print the OK term [0 || (3 >= 2) is true] but instead doesn't print [0 && (3 >= 2) evaluates to false]. That test picks up the error.

Let's consider another more subtle error that is nonetheless all too common in C. The code we meant to write follows:

if ((x == good_pointer) && (x->field1 == 3)) { printf("got the value\n"); }

Here is the bad code we actually wrote:

if ((x = good_pointer) && (x->field1 == 3)) { printf("got the value\n"); }

The problem here is that we typed = rather than ==, creating an assignment rather than a test. The code x = good_pointer first assigns the value good_pointer to x and then, because assignments are also expressions in C, returns good_pointer as the result of evaluating this expression.

If we apply the principles of branch testing, one of the tests we want to use will contain x != good_pointer and x->field1 == 3. Whether this test catches the error depends on the state of the record pointed to by good_pointer. If it is equal to 3 at the time of the test, the message will be printed erroneously. Although this test is not guaranteed to uncover the bug, it has a reasonable chance of success. One of the reasons to use many different types of tests is to maximize the chance that supposedly unrelated elements will cooperate to reveal the error in a particular situation.

Another more sophisticated strategy for testing conditionals is known as domain testing [How82], illustrated in Figure 5.27. Domain testing concentrates on linear inequalities. In the figure, the inequality the program should use for the test is j <= i + 1. We test the inequality with three test points—two on the boundary of the valid region and a third outside the region but between the i values of the other two points. When we make some common mistakes in typing the inequality, these three tests are sufficient to uncover them, as shown in the figure.

Figure 5.27. Domain testing for a pair of values.
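As an illustration of the idea (our own sketch with hypothetical helper names; the figure itself is not reproduced here), the harness below compares the intended predicate j <= i + 1 against a version containing a typical typing mistake, at two on-boundary points and one point just outside the valid region:

#include <stdio.h>

/* The inequality the program is supposed to implement: j <= i + 1. */
static int intended(int i, int j) { return j <= i + 1; }

/* A hypothetical buggy version with a common typing mistake (< instead of <=). */
static int under_test(int i, int j) { return j < i + 1; }

int main(void) {
   /* domain-testing points: two on the boundary j = i + 1, and one just
      outside the valid region with an i value between the other two */
   struct { int i, j; } pts[] = { {0, 1}, {4, 5}, {2, 4} };
   for (int t = 0; t < 3; t++) {
      int want = intended(pts[t].i, pts[t].j);
      int got  = under_test(pts[t].i, pts[t].j);
      printf("i=%d j=%d intended=%d tested=%d %s\n",
             pts[t].i, pts[t].j, want, got, want == got ? "ok" : "MISMATCH");
   }
   return 0;
}

The two on-boundary points expose the off-by-one mutation shown here; the third point guards against mistakes that shift the boundary in the other direction.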

A potential problem with path coverage is that the paths chosen to cover the CDFG may not have any important relationship to the program's function. Another testing strategy known as data flow testing makes use of def-use analysis (short for definition-use analysis). It selects paths that have some relationship to the program's function.

The terms def and use come from compilers, which use def-use analysis for optimization [Aho06]. A variable's value is defined when an assignment is made to the variable; it is used when it appears on the right side of an assignment (sometimes called a C-use for computation use) or in a conditional expression (sometimes called P-use for predicate use). A def-use pair is a definition of a variable's value and a use of that value. Figure 5.28 shows a code fragment and all the def-use pairs for the first assignment to a. Def-use analysis can be performed on a program using iterative algorithms. Data flow testing chooses tests that exercise chosen def-use pairs. The test first causes a certain value to be assigned at the definition and then observes the result at the use point to be sure that the desired value arrived there. Frankl and Weyuker [Fra88] have defined criteria for choosing which def-use pairs to exercise to satisfy a well-behaved adequacy criterion.

Figure 5.28. Definitions and uses of variables.
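Because Figure 5.28 is not reproduced here, the small fragment below (our own hypothetical example, not the book's) shows how defs, P-uses, and C-uses are identified for the variable a:

int example(int x) {
   int a = x + 1;      /* def of a                                    */
   int b = 0;          /* def of b                                    */
   if (a > 10)         /* P-use of a (predicate use)                  */
      b = a * 2;       /* C-use of a (computation use), new def of b  */
   a = b - 3;          /* C-use of b, new def of a (kills the first)  */
   return a;           /* C-use of the second definition of a         */
}

A data-flow test exercising the def-use pair formed by the first definition of a and its C-use in b = a * 2 would pick an input that makes the predicate true, say x = 12, and then check the observable result (here the return value 23) to confirm that the defined value actually reached the use.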

Testing loops

We can write some specialized tests for loops. Because loops are common and often perform important steps in the program, it is worth developing loop-centric testing methods. If the number of iterations is fixed, then testing is relatively simple. However, many loops have bounds that are computed at run time.

Consider first the case of a single loop:

for (i = 0; i < terminate(); i++)
   proc(i,array);

It would be too expensive to evaluate the above loop for all possible termination conditions. However, there are several important cases that we should try at a minimum:

  1. Skipping the loop entirely [if possible, such as when terminate() returns 0 on its first call].

  2. One loop iteration.

  3. Two loop iterations.

  4. If there is an upper bound n on the number of loop iterations (which may come from the maximum size of an array), a value that is significantly below that maximum number of iterations.

  5. Tests near the upper bound on the number of loop iterations, that is, n − 1, n, and n + 1.
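A minimal harness for exercising these cases might look like the sketch below, assuming we may add a test hook in place of the real terminate() and a hypothetical upper bound; none of these names come from the text.

#include <stdio.h>

#define MAX_ITER 16            /* hypothetical upper bound n on iterations */

static int bound;              /* test hook standing in for the real termination condition */
static int terminate_stub(void) { return bound; }

static int calls;              /* observe how many times proc() actually ran */
static void proc(int i, int *array) { array[i % MAX_ITER] = i; calls++; }

static int run_loop(int *array) {
   calls = 0;
   for (int i = 0; i < terminate_stub(); i++)
      proc(i, array);
   return calls;
}

int main(void) {
   int array[MAX_ITER];
   /* the minimum cases: 0, 1, 2 iterations, well below n, and n-1, n, n+1 */
   int cases[] = { 0, 1, 2, MAX_ITER / 2, MAX_ITER - 1, MAX_ITER, MAX_ITER + 1 };
   for (int t = 0; t < 7; t++) {
      bound = cases[t];
      printf("requested %2d iterations, observed %2d\n", cases[t], run_loop(array));
   }
   return 0;
}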

We can also have nested loops like this:

for (i = 0; i < terminate1(); i++)
   for (j = 0; j < terminate2(); j++)
      for (k = 0; k < terminate3(); k++)
         proc(i,j,k,array);

There are many possible strategies for testing nested loops. One thing to keep in mind is which loops have fixed versus variable numbers of iterations. Beizer [Bei90] suggests an inside-out strategy for testing loops with multiple variable iteration bounds. First, concentrate on testing the innermost loop as above—the outer loops should be controlled to their minimum numbers of iterations. After the inner loop has been thoroughly tested, the next outer loop can be tested more thoroughly, with the inner loop executing a typical number of iterations. This strategy can be repeated until the entire loop nest has been tested. Clearly, nested loops can require a large number of tests. It may be worthwhile to insert testing code to allow greater control over the loop nest for testing.
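The sketch below illustrates Beizer's inside-out idea under the same assumptions as before (stubbed, controllable bounds added purely for testing; all names are hypothetical): the outer bounds are held at their minimum while the innermost bound is swept through the single-loop cases, and later phases would fix the inner loop at a typical value and sweep the outer bounds in turn.

#include <stdio.h>

static int bound1 = 1, bound2 = 1, bound3 = 1;   /* test hooks for the three loop bounds */
static int terminate1(void) { return bound1; }
static int terminate2(void) { return bound2; }
static int terminate3(void) { return bound3; }

static long calls;
static void proc(int i, int j, int k, int *array) { array[0] = i + j + k; calls++; }

static long run_nest(int *array) {
   calls = 0;
   for (int i = 0; i < terminate1(); i++)
      for (int j = 0; j < terminate2(); j++)
         for (int k = 0; k < terminate3(); k++)
            proc(i, j, k, array);
   return calls;
}

int main(void) {
   int array[1];
   int cases[] = { 0, 1, 2, 7, 8 };        /* per-loop cases, as in the single-loop list */
   bound1 = bound2 = 1;                    /* phase 1: outer loops at their minimum */
   for (int t = 0; t < 5; t++) {
      bound3 = cases[t];
      printf("inner bound %d -> %ld proc() calls\n", cases[t], run_nest(array));
   }
   /* later phases fix bound3 at a typical value and sweep bound2, then bound1 */
   return 0;
}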

5.10.2 Black-Box Testing

Black-box tests are generated without knowledge of the code being tested. When used alone, black-box tests have a low probability of finding all the bugs in a program. But when used in conjunction with clear-box tests they help provide a well-rounded test set, because black-box tests are likely to uncover errors that are unlikely to be found by tests extracted from the code structure. Black-box tests can really work. For instance, when asked to test an instrument whose front panel was run by a microcontroller, one acquaintance of the author used his hand to depress all the buttons simultaneously. The front panel immediately locked up. This situation could occur in practice if the instrument were placed face-down on a table, but discovery of this bug would be very unlikely via clear-box tests.

One important technique is to take tests directly from the specification for the code under design. The specification should state which outputs are expected for certain inputs. Tests should be created that provide those inputs and evaluate whether the outputs satisfy the specification.

We can't test every possible input combination, but some rules of thumb help us select reasonable sets of inputs. When an input can range across a set of values, it is a very good idea to test at the ends of the range. For example, if an input must be between 1 and 10, then 0, 1, 10, and 11 are all important values to test. We should be sure to consider tests both within and outside the valid range, including values well outside the range as well as boundary-condition tests.
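For instance, a minimal boundary-value sketch for the 1-to-10 example above (the routine in_range is a hypothetical stand-in for the code under test) might be:

#include <stdio.h>

/* Hypothetical routine under test: accepts inputs in the range 1..10. */
static int in_range(int value) { return value >= 1 && value <= 10; }

int main(void) {
   /* boundary values just inside and just outside the valid range,
      plus values well outside it */
   int tests[]    = { 0, 1, 10, 11, -1000, 1000 };
   int expected[] = { 0, 1,  1,  0,     0,    0 };
   for (int t = 0; t < 6; t++) {
      int got = in_range(tests[t]);
      printf("input %5d: expected %d, got %d %s\n",
             tests[t], expected[t], got, got == expected[t] ? "ok" : "FAIL");
   }
   return 0;
}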

Random tests

Random tests form one category of black-box test. Random values are generated with a given distribution. The expected values are computed independently of the system, and then the test inputs are applied. A large number of tests must be applied for the results to be statistically significant, but the tests are easy to generate.
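A bare-bones illustration of the structure (a hypothetical routine under test, an independently coded reference for the expected values, and a fixed seed so failures are reproducible; none of this comes from the text) follows:

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical routine under test. */
static int square_under_test(int x) { return x * x; }

/* Independent reference used to compute the expected values. */
static long reference_square(long x) { return x * x; }

int main(void) {
   srand(1);                       /* fixed seed keeps failures reproducible */
   int failures = 0;
   for (int t = 0; t < 10000; t++) {
      int x = (rand() % 2001) - 1000;           /* uniform in [-1000, 1000] */
      if (square_under_test(x) != (int)reference_square(x))
         failures++;               /* any mismatch against the reference counts as a failure */
   }
   printf("%d failures in 10000 random tests\n", failures);
   return 0;
}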

Another scenario is to test certain types of data values. For example, integer-valued inputs can be generated at interesting values such as 0, 1, and values near the maximum end of the data range. Illegal values can be tested as well.

Regression tests form an extremely important category of tests. When tests are created during earlier stages in the system design or for previous versions of the system, those tests should be saved to apply to the later versions of the system. Clearly, unless the system specification changed, the new system should be able to pass old tests. In some cases old bugs can creep back into systems, such as when an old version of a software module is inadvertently installed. In other cases regression tests simply exercise the code in different ways than would be done for the current version of the code and therefore possibly exercise different bugs.

Numerical accuracy

Some embedded systems, particularly digital signal processing systems, lend themselves to numerical analysis. Signal processing algorithms are frequently implemented with limited-range arithmetic to save hardware costs. Aggressive data sets can be generated to stress the numerical accuracy of the system. These tests can often be generated from the original formulas without reference to the source code.
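As a rough illustration of the idea (a hypothetical Q15-style fixed-point FIR with a deliberately narrow accumulator, compared against a double-precision reference; none of this comes from the text), an aggressive data set can be built by putting every sample at full scale with the same sign as its coefficient, driving the accumulator toward its worst case:

#include <stdio.h>
#include <stdint.h>

#define N 8
/* Hypothetical Q15 fixed-point FIR used only to illustrate stressing
   limited-range arithmetic; the accumulator is deliberately too narrow. */
static int16_t fir_q15(const int16_t x[N], const int16_t c[N]) {
   int16_t acc = 0;
   for (int k = 0; k < N; k++)
      acc += (int16_t)((x[k] * c[k]) >> 15);   /* Q15 * Q15 -> Q15 per tap */
   return acc;
}

static double fir_ref(const int16_t x[N], const int16_t c[N]) {
   double acc = 0.0;
   for (int k = 0; k < N; k++)
      acc += (x[k] / 32768.0) * (c[k] / 32768.0);
   return acc;
}

int main(void) {
   int16_t x[N], c[N];
   /* aggressive data set: full-scale samples, all with the same sign as the
      coefficients, so every tap pushes the accumulator in the same direction */
   for (int k = 0; k < N; k++) { c[k] = 16384; x[k] = 32767; }
   printf("fixed-point: %f  reference: %f\n",
          fir_q15(x, c) / 32768.0, fir_ref(x, c));
   return 0;
}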

5.10.3 Evaluating Functional Tests

How much testing is enough? Horgan and Mathur [Hor96] evaluated the coverage of two well-known programs, TeX and awk. They used functional tests for these programs that had been developed over several years of extensive testing. Upon applying those functional tests to the programs, they obtained the code coverage statistics shown in Figure 5.29. The columns refer to various types of test coverage: block refers to basic blocks, decision to conditionals, P-use to a use of a variable in a predicate (decision), and C-use to variable use in a nonpredicate computation. These results are at least suggestive that functional testing does not fully exercise the code and that techniques that explicitly generate tests for various pieces of code are necessary to obtain adequate levels of code coverage.

Figure 5.29. Code coverage of functional tests for TeX and awk (after Horgan and Mathur [Hor96]).

Methodological techniques are important for understanding the quality of your tests. For example, if you keep track of the number of bugs found each day, the data you collect over time should show you some trends on the number of errors per page of code to expect on the average, how many bugs are caught by certain kinds of tests, and so on. We address methodological approaches to quality control in more detail in Chapter 7.

One interesting method for analyzing the coverage of your tests is error injection. First, take your existing code and add bugs to it, keeping track of where the bugs were added. Then run your existing tests on the modified program. By counting the number of added bugs your tests found, you can get an idea of how effective the tests are in uncovering the bugs you haven't yet found. This method assumes that you can deliberately inject bugs that are of similar varieties to those created naturally by programming errors. If the bugs are too easy or too difficult to find or simply require different types of tests, then bug injection's results will not be relevant. Of course, it is essential that you finally use the correct code, not the code with added bugs.
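A toy illustration of the mechanics (entirely hypothetical: a correct routine, a copy with one deliberately injected mutation, and an existing test set whose detection rate we measure) is sketched below.

#include <stdio.h>

/* Correct classification routine (hypothetical example). */
static int classify(int v) { return v >= 10; }

/* The same routine with a deliberately injected bug (>= mutated to >). */
static int classify_injected(int v) { return v > 10; }

int main(void) {
   /* existing test set; the question is how many tests notice the injected bug */
   int tests[] = { 0, 5, 10, 11, 100 };
   int caught = 0;
   for (int t = 0; t < 5; t++)
      if (classify(tests[t]) != classify_injected(tests[t]))
         caught++;
   printf("injected bug caught by %d of 5 tests\n", caught);
   return 0;
}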

An overview of developmental behavioral genetics

Chloe Austerberry, Pasco Fearon, in Developmental Human Behavioral Epigenetics, 2021

Key interpretative issues

In outlining the twin and adoption methods above, we already touched on a number of key interpretative issues that must always be kept in mind when appraising data from quantitative genetics research. One is so critical that it warrants repeating: as black box methods for estimating the overall contribution of heritable genetic factors to complex traits, these methods say nothing about the underlying mechanisms involved and generally speaking they describe the net result of most likely an exceptionally large number of complex gene-environment processes unfolding at multiple levels of biological and social organization over the course of development. Finding substantial heritability does not imply simple, unmediated, genetic influence on a trait, and many genetic effects may involve substantial environmental mediation (Rutter, 2000). Secondly, the estimates of genetic influence that are obtained from quantitative genetic methods describe the current causes of population differences in a trait, and not the degree to which genetic factors are responsible for a trait in a given individual. Critically, estimates of genetic influence do not imply immutability. A commonly noted example of this is physical height, where a large proportion of the variance within a population tends to be explained by genetics, but despite this, height has increased substantially since the middle of the 19th Century (Fisher, 1919; Lettre, 2011; NCD Risk Factor Collaboration, 2016). Similar arguments apply to the study of the genetics of IQ, which has also seen considerable rises over the last 50 years, despite high heritability. Furthermore, evidence of genetic influence says little if anything about where, in the cascade of developmental events involved, one should focus intervention. The most commonly cited example to illustrate this is phenylketonuria (PKU), which is a genetic condition that leads to the inability to metabolize the amino acid phenylalanine. Untreated, PKU leads to severe damage to the central nervous system, but a comparatively simple environmental intervention—excluding phenylalanine from the diet—entirely prevents any adverse developmental effects, as long as it is introduced shortly after birth. A further, often under-appreciated, interpretative issue concerns the role of GxE. As we noted above, there are significant difficulties in human quantitative genetic studies in properly capturing GxE effects (Dick, 2011), even though most commentators agree that it is highly likely they exist and indeed are prevalent. As a result, it is helpful to be aware of the consequences of ignored GxE, when appraising studies that report genetic “main effects.” In general, in standard modeling, such as that used for twin analyses, ignored gene-by-common environment interactions will be estimated as genetic effects, whereas ignored gene-by-non-shared environment interactions will be estimated as non-shared environment effects. Ignoring GxE can lead to quite dramatic biases in effect estimates (Eaves, 1984).

Building Automation for Energy Efficiency

Mojtaba Navvab, ... Stefano Panzieri, in Handbook of Energy Efficiency in Buildings, 2019

3 Literature Review of Fault Detection Methods

Apart from monitoring the actual state of the building, an important objective of the proposed solution is to detect and identify faults and anomalies. In Ref. [16], a review of methodologies for fault detection and diagnosis (FDD) is presented, creating a tree structure of the different algorithms.

There are three main categories for possible methods: methods based on process history, methods based on qualitative models, and methods based on quantitative models. In the first case, behavioral models are created starting from historical measures from the building. In the second case, qualitative models are made of qualitative relationships from the physical process knowledge. In the third case, quantitative models consist of a set of mathematical relationships based on the physics of the process.

Historical-based methods can be classified as black box or gray box. Black-box methods are based on the estimation of parameters for identifying faults in the system, even if in several cases the physical meaning of the deviation is not known. Gray-box methods are formulated so that the parameters estimated for diagnostics are physical parameters of the system that govern the system itself or its components. In some cases, black-box methods are combined with other algorithms for managing multiple errors or for isolating faults.

Methods based on qualitative models are based on a priori knowledge for understanding the actual state of the system. Among the qualitative model-based methods, there are rule-based and qualitative physics-based algorithms. The method most used in the literature is the rule-based one, which exploits a large set of if-then-else rules and an inferential engine for identifying the actual process state in a predefined set of potential states. Another category is the qualitative physics-based algorithms containing qualitative equations derived from qualitative descriptions of relationships among the process variables or knowledge about the fundamental behavior of the system.

Methods based on quantitative models exploit the mathematical model of the building or of the system, in general, for achieving analytic redundancy for detecting and identifying the cause of malfunctioning. Mathematical equations represent each component of the system and can be solved to simulate the behavior of the system. Quantitative model-based algorithms must be validated with experimental data to understand the precision of the model and the benefit of the prediction. The quantitative model-based methods need a detailed knowledge of the building. Those models can be classified into detailed physical models or simplified physical models.

In the literature, algorithms based on historical data are the most widely used, while those based on quantitative models are less popular because they need an explicit mathematical model of the system. The black-box technique, based on historical data, is the most used because of its simplicity, while the rule-based approach is the most used among the qualitative model-based methods.

Another possibility is to combine several methods for improving the efficiency of the single methods and for recognizing simultaneous errors in the algorithm.

In this chapter, we want to fuse information coming from heterogeneous sensors into a unified mathematical model, including the user behavior as a fundamental variable in the building. From a research point of view, integrating electricity consumption and indoor comfort is challenging and innovative.

State-of-the-Art

Qingsong Ai, ... Sheng Quan Xie, in Advanced Rehabilitative Technology, 2018

2.3 Neural Modeling and Interfaces

Recently, researchers have worked toward using EMG signals to construct a model that estimates the force applied by the operator. This estimation model-based method aims at providing the operator with continuous assistance. Generally speaking, there are two approaches that can be used to design the interactive interface and transform the EMG signals into relevant joint information: the black-box method and the musculoskeletal model. For force-prediction interfaces, the neural network (NN) is the most widely used prediction model [74,75], whereas for the acquisition of force or torque, different methods are used in the literature (such as sensor measurement in Fig. 2.5 [74], Hill model calculation, etc.).

Fig. 2.5. Wrist force collection.

(Reprinted with permission from L. Nielsen, S. Holmgaard, N. Jiang, et al., Enhanced EMG signal processing for simultaneous and proportional myoelectric control, in: EMBC 2009 Annual International Conference (2009), 4335–4338.) Copyright © 2009 IEEE.

Using the black-box method, the literature [76–78] studied the relationship between finger force and EMG, as shown in Fig. 2.6. In Mojgan et al. [79], a wrist torque estimation model was established for different wrist angles. Although relatively accurate results have been obtained in finger force prediction, this research focused on isometric muscle contraction; that is, the subjects were required to keep a fixed spatial position, the force-generation modes were relatively limited, and the experimental conditions were rather stringent.

Fig. 2.6. Finger force prediction based on sEMG.

(Reprinted with permission from C. Castellini, K. Risto, Using surface electromyography to predict single finger forces, in: IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics, 2012, 1266–1272. ©2012 IEEE.)

In Wagner et al. [80], the nonlinear relationship in the elbow between the EMG of the triceps/biceps and muscle strength at a specific angle was obtained from a biomechanical point of view. A similar experiment was conducted by Hashemi et al. [81]; although the number of elbow joint angles was increased to seven in the experiment, the strength prediction at each angle was still calculated independently, which greatly limits practical application, as shown in Fig. 2.7. In Atoufi et al. [82], multi-degree-of-freedom force prediction was achieved using synergistic features and NNs with an accuracy of 0.84 ± 0.08. The prediction based on synergistic features proved to be superior to that based on mean absolute value (MAV) features in the time domain. Prediction of unknown forces was also studied and analyzed, and the results show that synergistic features again outperform MAV features there. Although a high prediction accuracy was obtained, the study does not address the discrimination of wrist movements; in the actual control of rehabilitation equipment, however, movement discrimination is the basis of force prediction.

Fig. 2.7. 1-DOF exoskeleton testbed used for collecting sEMG, elbow angle, and force data.

(Reprinted with permission from J. Hashemi, E. Morin, P. Mousavi, et al., EMG-force modeling using parallel cascade identification, J. Electromyogr. Kinesiol. 22 (3) (2012) 469. ©2012 Elsevier.)

In Jiang et al. [83], a universal model of the EMG signal was proposed. Through model analysis, a linear relationship between the EMG signal and the degree of muscle activation was obtained under certain conditions. A matrix factorization algorithm was then used to directly predict the muscle force. Although this approach avoids traditional training-based learning algorithms and is thought to be able to provide synchronous and proportional control information for a myoelectric prosthesis, the prediction results are not very satisfactory, with a maximum of 89.6%, a minimum of 52.2%, and an average of only 77.5%.

Bai et al. [84] used the continuous wavelet transform to extract features of dynamic muscle contraction and used a NN to predict muscle force. Analyzing data from 14 subjects, they obtained a final prediction accuracy of 0.9398 ± 0.0230 and a mean square error of 0.1701 ± 0.047.

The black-box method cannot provide insight into the biomechanical process of human movement. A possible solution to this problem is to estimate the force/torque applied by operators based on a musculoskeletal model. The three-element Hill model can describe the mechanism of skeletal muscle contraction macroscopically and is widely used in interactive interface design. On this basis, a variety of skeletal muscle contraction models have been developed and used for muscle strength calculation and interface design [85]. In Tao et al. [86], muscle fiber length is described as a function of the joint angle, with parameters obtained from experiments; these parameters cannot be explained physiologically, and the range of application is limited. The muscle fiber length and the bone- and joint-related physiological parameters required in a skeletal muscle model are difficult to obtain, and there are large differences among individuals. Building on limb orthopedic research, Delp et al. [87] provided a detailed lower-limb model of the human body, which supplies practical parameters for calculating a lower limb's muscle moment arms and tendon lengths.

Lloyd et al. researched a skeletal muscle model for the knee joint. The model can predict joint torque in various motion states, such as running and walking, with an average coefficient of determination of 0.91 [88]. Shao et al. established a skeletal muscle model for the ankle joint, with an average coefficient of determination of 0.92 and an average standardized root mean square error of 12.2% [89]. Similarly, Son et al. [90] used a skeletal muscle model to evaluate joint torque at different speeds; after final parameter optimization, the coefficient of determination of the prediction is < 0.91.

Because these models use an average skeletal muscle geometry model to analyze the muscle moment arm and length, their torque prediction accuracy is lower than the prediction accuracy in this paper. The skeletal muscle model of the knee joint was greatly simplified by Ma et al., who considered only two muscles (an extensor and a flexor) in their model. Although this greatly reduces the complexity of the modeling, the final torque prediction is also strongly affected, with a coefficient of determination of only 0.85 [91]. In addition, Tasia et al. used MRI to directly measure muscle paths and built a personalized skeletal muscle geometric model [92] from the measured data. The experimental results show that using this image processing technique can greatly improve the accuracy of joint torque assessment. However, this kind of equipment is usually expensive, and image processing takes a long time. Therefore, it is crucial to establish a personalized skeletal muscle model with relatively simple equipment. Moreover, most existing musculoskeletal models fail to provide a feasible method to study the subject-specific musculoskeletal geometry, which is vital for calculating muscle contraction force and for understanding each muscle's contribution to the total joint torque. Furthermore, the applications of these models have been limited to the clinical diagnosis and management of orthopedic conditions.

Energy Efficiency in Building Renovation

Constantinos A. Balaras, ... Faidra Filippidou, in Handbook of Energy Efficiency in Buildings, 2019

5.2 HVAC System Modeling and Dynamic Simulation

To correctly choose the most suitable retrofit options to improve HVAC energy efficiency, accurate models of the overall system (both in current and retrofitted states) are essential.

Dynamic simulation is a useful procedure to model the whole system. Suitable models are typically classified into white-, gray-, and black-box methods [5–9]. The first class is based on the solution of the mass and energy equations (i.e., physics-based models). The third class does not include any knowledge of the actual physical structure, but an empirical transfer function between input and output variables is derived from a large amount of data and/or measurements (i.e., data-driven methods such as artificial neural networks). The second class has an intermediate approach: it employs a simplified physical model of the system, using experimental data to tune the model's parameters. White-box models are able to provide highly accurate results for a great variety of solutions but also require accurate input parameters (e.g., boundary conditions, geometry, and thermo-physical properties). Furthermore, the provided results are valid only for the specific case under analysis. On the contrary, gray-box models require less accurate inputs and can provide general indications on similar systems; moreover, if more "simplified" models are used, the integration of several subsystem models is easier. These methods represent an appropriate tradeoff between implementation effort and solution accuracy.

With reference to physics-based tools, several dynamic simulation software packages exist, such as EnergyPlus, DOE-2, and TRNSYS. Each of these tools has specific features that are summarized in Crawley et al. [10].

Dynamic simulations and integrated models of all the subsystems allow a correct estimation of the energy savings after retrofitting, helping stakeholders and professionals in the definition of a hierarchical list of retrofit actions in cost-benefit terms.

This procedure is also useful to analyze the overall system from other viewpoints, such as internal microclimate suitability or resilience of the system to critical conditions. Two examples will clarify these two aspects.

Example #1 (multi-objective): In the retrofitting of a HVAC system of an existing museum building, dynamic simulation allows the evaluation of the energy requirements for each energy vector (e.g., natural gas, electrical energy) and of the internal profiles of air temperature and relative humidity. For artwork preservation, the maintenance of specific values of these two microclimate parameters is necessary [11,12]. Using a dynamic simulation, it is possible to set, as outputs, energy requirements at the HVAC system and internal conditions of the environment: then, the HVAC system can be designed and controlled to concurrently achieve energy efficiency and suitable microclimate maintenance [12b].

Example #2 (resilience): In the retrofitting of a HVAC system, it is possible to use dynamic simulations to check the resilience of the technical equipment in critical conditions or, more generally, to evaluate energy requirements in off-design conditions. For example, one can evaluate the performance of HPs in the harshest winter conditions. Another example is the evaluation of the effectiveness of an AHU for maintaining IAQ (indoor air quality) standards in case of very high internal gains [12c].

Methods for benchmarking building energy consumption against its past or intended performance: An overview

Zhengwei Li, ... Peng Xu, in Applied Energy, 2014

4.2 Performance of black box methods

Due to their convenience and quick modelling, black-box methods are good alternatives to the detailed energy simulation method. Although all are derived from data mining techniques, the principles embedded in different black-box methods still cause differences in their characteristics. The bin method is the simplest, yet perhaps one of the most widely applied, methods; the tools that deploy it (WBD and PACRAT) have been applied in numerous buildings for continuous commissioning. The multiple linear regression (MLR) technique is the simplest regression technique and has been adopted by ASHRAE as a standard Measurement & Verification (M&V) technique [58]. The artificial neural network (ANN) method is arguably the most widely used non-linear regression method in building continuous commissioning and has achieved success in many applications. However, because it requires tweaking the inputs, network structure, and weight parameters, its accuracy can't be guaranteed; in some cases, it can perform worse than MLR. Support vector regression (SVR) is a unique regression method in that it optimizes both the model structure and the estimator error; its performance has been proved by many researchers, including winning entries in load prediction competitions, and it is thus worth investigating in real applications. Gaussian process regression (GPR) is the only black-box method that explicitly calculates the uncertainty of the estimation result. Combined with its nonlinear regression nature and its ability to capture complex behavior, this should earn it wider application in cases requiring risk analysis.

Review of 10 years research on building energy performance gap: Life-cycle and stakeholder perspectives

Patrick X.W. Zou, ... Jiayuan Wang, in Energy and Buildings, 2018

5.2.1 Technology and method for calculating energy consumption

T&M for estimating energy consumption aims at improving the accuracy and rationality of design as well as design optimization, and can be categorized into black-box, grey-box, and white-box methods [81]. A black-box method predicts energy consumption without physical knowledge, using techniques such as genetic algorithms [82] and artificial neural networks [83]. In contrast, the white-box method, also termed the engineering method, estimates energy consumption by using thermodynamic equations to represent the physical behavior of the building and its interactions with the external environment according to its physical description [84,85]. EnergyPlus, Ecotect, and DOE-2 are mature tools based on the white-box method. However, they require a lot of data, and the output is difficult to calibrate [84]. Grey-box methods are combinations of the black- and white-box methods that eliminate the limitations inherent in each method [41]; examples include the combination of EnergyPlus with particle swarm optimization [86] or a genetic algorithm [87]. Although the above methods provide sharp tools for researchers and practitioners to predict and optimize energy consumption, they need to pass a series of rigorous tests; otherwise, researchers and practitioners could be using a wrong method to solve their problems. Considering the importance of calibration, many methods for model calibration have been proposed, such as the Bayesian approach [88] and the iterative update approach [89].

Methods for interpreting and understanding deep neural networks

Grégoire Montavon, ... Klaus-Robert Müller, in Digital Signal Processing, 2018

9 Conclusion

Building transparent machine learning systems is a convergent approach to both extracting novel domain knowledge and performing model validation. As machine learning is increasingly used in real-world decision processes, the necessity for transparent machine learning will continue to grow. Examples that illustrate the limitations of black-box methods were mentioned in Section 8.1.

This tutorial has covered two key directions for improving machine learning transparency: interpreting the concepts learned by a model by building prototypes, and explaining the model's decisions by identifying the relevant input variables. The discussion mainly abstracted from the exact choice of deep neural network, training procedure, or application domain. Instead, we have focused on the more conceptual developments, and connected them to recent practical successes reported in the literature.

In particular we have discussed the effect of linking prototypes to the data, via a data density function or a generative model. We have described the crucial difference between sensitivity analysis and decomposition in terms of what these analyses seek to explain. Finally, we have outlined the benefit in terms of robustness, of treating the explanation problem with graph propagation techniques rather than with standard analysis techniques.

This tutorial has focused on post-hoc interpretability, where we do not have full control over the model's structure. Instead, the techniques of interpretation can be applied to a general class of nonlinear machine learning models, no matter how they were trained and who trained them – even for fully trained models that are available for download like BVLC CaffeNet [28] or GoogleNet [67].

In that sense the presented novel technological development in ML allowing for interpretability is an orthogonal strand of research independent of new developments for improving neural network models and their learning algorithms. We would like to stress that all new developments can in this sense always profit in addition from interpretability.

A comprehensive overview on the data driven and large scale based approaches for forecasting of building energy demand: A review

Tanveer Ahmad, ... Jiangyu Wang, in Energy and Buildings, 2018

3.3 Black box data mining based approaches

An extensive number of black-box approaches applied to building-level energy consumption investigation predict electricity usage behavior at the large scale (LS) of buildings rather than for a single building. Black-box approaches make extensive use of power consumption data, organized according to an assortment of hierarchically significant data inputs [213–215]; examples of the applicability of these strategies at large scale are presented in references [216,217]. The conventional, widely used black-box methods for forecasting and prediction in the building sector are [218]: MLR, simple regression models, ANN, decision trees (DT), and support vector machines. Data-driven approaches depend on the availability of past electricity usage data to predict energy efficiency and performance.

Given this challenge, it is essential to develop a pattern with which to train the different methods. Another issue arises, however, whereby data privacy policies and financial concerns make the collection process difficult, often diminishing the quality of decisions. Geographic information systems (GIS) are emerging as an increasingly significant source with which to produce large-scale energy methods, because of their capabilities for visualizing and allocating data, as presented in references [219–222].

Challenges continue here as well, since only a limited number of GIS databases are appropriate for capturing the energy consumption of a town or city. Other pertinent data sets comprise: censuses [223], national reserves [224], standards [224], local and national surveys [225], inquiries [219], and environmental data. Lately, novel data-gathering techniques such as crowd-sourced energy data have emerged for developing and populating whole-city databases [226–229]. New and different knowledge needs to be gathered, however, depending on the energy prediction methodology. Here the most relevant elements are: development limit, employment (surface/volume), number of floors, orientation, surface/glazing factor, solar gains of different floors, solar shading, window area, electricity usage at the aggregated or building level, electricity metering information, and environmental data. Table 9 shows a summary of building sector energy forecasting using large-scale approaches.

Table 9. Summary of building sector energy forecasting using large scale based approaches.

Model category | Building sector energy forecasting using large scale based approaches
Energy forecasting | White box data mining based approaches, Grey box data mining based approaches, Black box data mining based approaches
Prediction model | Archetypal model [206], Back Propagation Neural Networks (BPNN) [215], Grid-based methods [214], Thermal model [208], Simplified model [212], Data driven (black-box) method [210], Simplified engineering [215], Regression models [206], FCM models [214], Linear model [210], Hierarchical clustering method [214], The mathematical models [208], NARX models [217], Parametric archetypal model [206], Conventional GMDH method [215], Ant colony optimization algorithm [214], Steady-state models [212], Explicit Euler methods [208], Lumped parameter models [210], CDA method [218], Fourier series methods [208], Empirical approach [224], Geographic Information System (GIS) techniques [220], City-oriented simulation tools [212], Elaborate engineering methods [215], K-means algorithm [214], Support vector machines (SVM) [213], Two-node model [208], Model-based methods [214], Nodal network model [212], Bottom-up models [218], Conditional demand analysis [218], Thermal network models [211], Moore's method [212], The physical model [208], ARX models [210], Linear ARX model [217], Two conditional parametric models [210], Nonlinear ARX model [217], Full-knowledge physical model [212], Linear and time-invariant system [212], Empirical risk minimization (ERM) principle [213], Engineering-based approaches [222], Fuzzy c-means (FCM) [214], Hybrid method [215], Hierarchical methods [214], Partitioning methods [214], Statistical approaches [224], Density-based methods [214], Iterative refinement clustering [214], Hybrid method of Group Method of Data Handling (GMDH) [215], Least Square Support Vector Machine (LSSVM) [215], Black-box approaches [217], Artificial intelligence methods [215], Nonlinear Autoregressive Model with Exogenous Inputs [217], Data mining algorithms [215], GA model [218], GIS-based statistical downscaling approach [222], Integration of system dynamics models [224]
Software | TRNSYS, eQuest, IES and ESP-r, Energy Plus, DOE-2.1E, ESP-r, MATLAB, SPSS, R
Application and usage | Building energy simulation applications, Predict energy demand, Application in forecasting analysis provides promising results for forecasting building electricity usage, Uses simple survey information and billing data, Encompasses trends
Advantages | Simple, efficient and scalable, Easy to implement, Strong ability of antinomies, Can recognize the numerous vital features including self-stability, Can be implemented without any problem, Network can execute the task that a linear program cannot, Can be executed in any application, Large scale energy forecasting models can learn and need not be programmed, Long-term prediction in the inadequacy of each discontinuity, Comprises inhabitant response, Addition of socioeconomic and macroeconomic impacts, Inclusion of macroeconomic and socioeconomic effects, Determination of typical end-use energy contribution, Simple input information, Ascertainment of qualities end-use comprises on simulation, Ascertainment of every consumer's electricity usage by rating, type, etc.

Energy prediction techniques for large-scale buildings towards a sustainable built environment: A review

Abdo Abdullah Ahmed Gassar, Seung Hyun Cha, in Energy and Buildings, 2020

5 Conclusions

In this paper, we presented a review of the previous research efforts in the area of large-scale building energy predictions using the black-box, white-box and grey-box based approaches. Different aspects of large-scale building energy predictions were covered, including the building energy demand determinants, building types (i.e., residential, commercial, office buildings) and prediction scopes (i.e., group of buildings, a city district, city, national and regional levels). Different models were developed within different scopes depending on specific data and the different approaches that used different features for energy predictions. Each approach had its own disadvantages and advantages for different types of applications. Among all these approaches, the black-box based approach has gained the most attention, specifically the artificial neural network, because of its performance accuracy and its ability to deal with the linear and nonlinear problems during training of the models. However, the drawbacks of black-box methods are the reliance on historical data and difficulty in physically interpreting the results as well as the lack of explicit representation of end-users. White-box based models are also distinguished by their accuracy, physical interpretation of the results, and the explicit representation of end users. However, the drawbacks of white-box based methods are the reliance on detailed physical information, the assumption of user behavior and the absence of the influence of economic factors. In this regard, the hybrid models (i.e., grey-box based models) can play a significant role in overcoming the deficiencies of both previous methods in the area of large-scale building energy prediction applications.

Based on the above analysis, we propose promising future research opportunities that may strengthen the previous approaches as follows: 1) enriching applications of energy prediction techniques to cover different scales of built environments under a wide range of climate conditions, 2) taking advantage of the available data such as censuses and GIS databases to evaluate the performance of various methods in the context of large-scale buildings, 3) modifying the frameworks of current approaches in the context of large-scale building performance features to more accurately respond to specific demands from energy calculations, 4) employing optimizations in the hybrid methods to optimize the data-driven algorithm architecture, 5) extending energy prediction methods for investigating both short-term and long-term building energy consumption, and 6) integrating multiple target indices in the data-driven and hybrid framework to deliver a more balanced evaluation of the energy performance of large-scale buildings. Progress along these routes will offer more efficient and reliable support for energy management and optimization in future sustainable built environment industries.
