short course & workshop

This two-day POD short course is based on the new (2009) MIL-HDBK-1823A “Nondestructive Evaluation System Reliability Assessment” and uses the all-new **mh1823 POD** software.

The training presents the latest methods for quantifying your NDE system’s effectiveness in terms of Probability of Detection (POD), and the course will use these state-of-the-art techniques to analyze enterprise data.

Course layout is reverse-chronological – we discuss the analysis before we discuss how to design the experiment to produce the results we are analyzing.

- 2018 Workshop Addendum software has been significantly expanded to provide analysis capability for situations where min(POD) > 0 (POD “floor”) and max(POD) < 1 (POD “ceiling”). See POD “Floor” and POD “Ceiling.”
- Also new to the Addendum: the Akaike Information Criterion (AIC) and the Schwarz Information Criterion (also known as the Bayes Information Criterion, BIC) are used to assess the efficacy of the added POD model floor/ceiling parameter(s).
- In addition to supporting the new analysis capabilities, the Workshop software allows for real-time demonstrations of the mechanics of constructing confidence bounds on hit/miss POD vs size curves.

We will work through examples using real data, and time will be allocated for analyzing your enterprise-specific problems.

The usual class size is 20 participants (35 max). In addition to the classroom presentations, each participant receives the following:

- A CD containing the **R** statistics computing environment, the **mh1823 POD** software, and the new **2018 Workshop Addendum software**, so that each participant with a Windows laptop can perform the analyses immediately.
- Bound hard copy of the presentation slides.
- Indexed pdf copy of the new (2009) **MIL-HDBK-1823A** handbook.

- 1970s – “Have Cracks – Will Travel”
- Early 1980s – Flight propulsion manufacturers’ individual efforts to improve POD analysis
- Late 1980s – USAF, UDRI, GEAE, P&W, and Allied-Signal (now Honeywell) working group produced the MIL-HDBK-1823, “Nondestructive Evaluation System Reliability Assessment” draft. I was the editor and lead author.
- 1993 – NATO AGARD sponsored a 2-day POD Short Course based on MIL-HDBK-1823 that I presented in Ankara, Turkey; Lisbon, Portugal; Patras, Greece; and Ottawa, Canada.
- Late 1990s – USAF officially publishes **MIL-HDBK-1823**, 30 April 1999
- Early 2000s – Model-Assisted POD gains a following
- February 2007 – Draft of revised and updated **MIL-HDBK-1823** released for comment, with all-new software incorporating the latest statistical best practices for NDE data.
- 7 April 2009 – The 2007 update was released by the USAF as **MIL-HDBK-1823A**.

- What is Probability? (Two incompatible definitions; both are correct)
- What is Probability of Detection?
- What is Confidence, and how is it distinct from Probability?
- What is likelihood? How is it related to, but distinct from, probability?
- What does “90/95” really mean?
- Are all methods for assessing *a*_{90/95} equally effective? (Answer: No.)
- 2 kinds of NDE data. (There are more, but this is a two-day course.)
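The probability-versus-likelihood distinction in the list above can be illustrated with a few lines of code (an illustrative sketch, not course material; the 25-hits-in-29-trials numbers are invented): the same binomial formula is read as a *probability* when the parameter is fixed and the data vary, and as a *likelihood* when the data are fixed and the parameter varies.

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k hits in n trials with detection probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Probability: fix p, ask about the data. With p = 0.9, P(25 hits in 29 trials):
prob = binom_pmf(25, 29, 0.9)

# Likelihood: fix the observed data (25 hits in 29 trials), vary p.
# Same function, read the other way round.
likelihoods = {p: binom_pmf(25, 29, p) for p in (0.7, 0.8, 25/29, 0.9)}

# The likelihood is maximized at the observed proportion 25/29; none of these
# values is "the probability that p equals ..." -- likelihood is not probability.
best = max(likelihoods, key=likelihoods.get)
print(best == 25/29)  # True
```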

This short course comes with a self-contained CD with **R** installed along with the necessary ancillary **R** routines, the installed **mh1823 POD** software, and the example datasets – everything. Put the CD in the drive, make a desktop icon, and you’re up and running in 30 seconds. If you have already made the icon, insert the CD, click the icon, and you’re running in 5 seconds. (For completeness, we will spend some class time demonstrating how to install R from the internet, and then how to install the **mh1823 POD** package.)

- Background:
  - The “ideal” *POD(a)* curve
  - Why \(\hat{a} \textit{ vs a}\) data is different from *Hit/Miss* data
  - When the \(\hat{a}\) response is less informative than simple *Hit/Miss*
- \(\hat{a} \textit{ vs a}\) Data Analysis
  - Read \(\hat{a} \textit{ vs a}\) data
  - Preliminary Data Assessment: Plot the data and choose the best \(\hat{a} \textit{ vs a}\) model.
  - Build the \(\hat{a} \textit{ vs a}\) linear model
  - Four \(\hat{a} \textit{ vs a}\) Requirements (Warning: If any of these assumptions is false, or if the model is a line and the data describe a curve, then the subsequent POD analysis will be *wrong* even though the computational steps are correct.)
  - How to go from \(\hat{a} \textit{ vs a}\) to *POD vs. a* – The Delta Method
    - Compute the transition matrix from \(\hat{a} \textit{ vs a}\) to *POD vs. a*
    - The *POD(a)* Curve
  - Wald method to compute \(\hat{a} \textit{ vs a}\) confidence bounds
  - Plot *POD(a)*; compute POD confidence bounds
  - Analyze the noise; compute the false-positive rate
**Classwork**

- Analyze a simple \(\hat{a} \textit{ vs a}\) example.
- Effects of analysis decisions on *a*_{90/95}
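The \(\hat{a} \textit{ vs a}\) → *POD(a)* chain outlined above can be sketched end-to-end in a few lines (a sketch under assumptions, not the **mh1823 POD** implementation: the simulated data, parameter values, and decision threshold are all invented for illustration). Fit a log-log linear model \(\hat{y} = \beta_0 + \beta_1 \ln a\) with residual scatter \(\tau\); then \(POD(a) = \Phi((\ln a - \mu)/\sigma)\) with \(\mu = (y_{dec} - \beta_0)/\beta_1\) and \(\sigma = \tau/\beta_1\).

```python
import numpy as np
from math import erf, sqrt, log, exp

rng = np.random.default_rng(1)

# Simulated a-hat vs a data (log-log linear, as the model assumes):
a = np.exp(rng.uniform(np.log(0.1), np.log(2.0), 60))       # crack sizes
beta0, beta1, tau = 1.0, 1.2, 0.3                           # "true" parameters
y = beta0 + beta1 * np.log(a) + rng.normal(0, tau, a.size)  # log response

# Ordinary least-squares fit of log(a-hat) on log(a):
X = np.column_stack([np.ones_like(a), np.log(a)])
coef, res, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b1 = coef
tau_hat = sqrt(res[0] / (a.size - 2))

# POD(a) from the fitted line and a decision threshold y_dec:
y_dec = 0.5
mu = (y_dec - b0) / b1     # log-size at 50% POD
sigma = tau_hat / b1

def pod(size):
    """POD(a) = Phi((ln a - mu) / sigma)."""
    z = (log(size) - mu) / sigma
    return 0.5 * (1 + erf(z / sqrt(2)))

a50 = exp(mu)                    # size detected 50% of the time
a90 = exp(mu + 1.2816 * sigma)   # size detected 90% of the time
print(round(pod(a50), 3))        # 0.5
```

Confidence bounds on the resulting curve (the Wald/Delta-Method step above) would propagate the covariance of (b0, b1, tau_hat) through this transformation; that step is omitted here.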

**(Multiple inspections of the same Target Set)**

- Why repeated measures are not simply “more data”
- Red apples and green apples

- How to recognize pathological *â vs. a* data (which is unfortunately common)
- Special difficulties with Field-Finds – When mh1823 methods are not enough

- Understanding Noise
- Definition of Noise
- Choosing a probability density to describe the noise

- False Positive Analysis (with *â vs. a* data)
- Noise analysis and the Combined *â vs. a* Plot
- The *POD(a)* Curve
- Miscellaneous **mh1823 POD** algorithms
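The false-positive calculation runs the same way for any assumed noise density. The sketch below (the lognormal noise model and the threshold values are invented for illustration; this is not the **mh1823 POD** code) computes the Probability of False Positive as the probability that noise alone exceeds the decision threshold.

```python
import numpy as np
from math import erf, sqrt, log

rng = np.random.default_rng(7)

# Simulated noise: responses measured where there is no target.
noise = rng.lognormal(mean=-2.0, sigma=0.5, size=500)

# Describe the noise with a lognormal density, fit from the log-noise moments:
m, s = np.log(noise).mean(), np.log(noise).std(ddof=1)

def pfp(threshold):
    """Probability of False Positive: P(noise > threshold) under the fitted lognormal."""
    z = (log(threshold) - m) / s
    return 0.5 * (1 - erf(z / sqrt(2)))

# Raising the decision threshold lowers PFP (but also lowers POD):
print(pfp(0.2) > pfp(0.4))  # True
```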

- Hands-on individual POD problem-solving

- Understanding binary data – why ordinary regression methods fail
- Read *Hit/Miss* data
- Build the GLM (Generalized Linear Model)
- Understanding Generalized Linear Models
- Choosing Link Functions

***Hit/Miss* Confidence Bounds**

- Not all statistical confidence methods are equally accurate
- How the LogLikelihood Ratio Criterion Works
- How to compute likelihood ratio confidence bounds
- Constructing *Hit/Miss* Confidence Bounds
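As a rough illustration of the GLM machinery above (a sketch only, not the **mh1823 POD** implementation; the simulated data and parameter values are invented), hit/miss POD with a logit link can be fit by iteratively reweighted least squares, the standard algorithm behind glm-style routines:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated hit/miss data: detection probability rises with log(crack size).
a = np.exp(rng.uniform(np.log(0.05), np.log(1.0), 120))
true_p = 1 / (1 + np.exp(-(4.0 * np.log(a) + 8.0)))
hits = rng.random(a.size) < true_p

# Logit-link GLM fit by iteratively reweighted least squares (IRLS):
X = np.column_stack([np.ones_like(a), np.log(a)])
beta = np.zeros(2)
for _ in range(25):
    eta = X @ beta
    p = 1 / (1 + np.exp(-eta))
    W = p * (1 - p)                  # binomial variance weights
    z = eta + (hits - p) / W         # working response
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

b0, b1 = beta

def pod(size):
    """Fitted POD curve: logistic in log(size)."""
    return 1 / (1 + np.exp(-(b0 + b1 * np.log(size))))

a50 = np.exp(-b0 / b1)   # size at POD = 0.5
print(b1 > 0)            # True: POD increases with size
```

Likelihood-ratio confidence bounds, the topic of the bullets above, would profile the loglikelihood over constrained values of POD(a) rather than rely on the Wald approximation; that step is omitted here.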

**Classwork**

- Analyze a simple *Hit/Miss* example.
- Effects of *Hit/Miss* analysis decisions on *a*_{90/95}
**Special Situations**

- Choosing an Asymmetric Link Function
- How to analyze Repeated Measures
- How to analyze Disparate Data correctly
- How to analyze *Hit/Miss* Noise
- How to recognize pathological *Hit/Miss* data

**How to analyze Binary POD Floor/Ceiling Data**

- How to plot max(loglikelihood ratio) as a function of a 3rd POD model parameter
- How to construct confidence bounds on the Floor or Ceiling parameter
- How to compute the Akaike Information Criterion (AIC) and the Schwarz Criterion (Bayes Information Criterion, BIC)
- How to create a real-time animated construction of confidence bounds on POD vs size curves.
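The two information criteria reduce to one-line formulas, AIC = 2k − 2 ln L̂ and BIC = k ln n − 2 ln L̂, where k is the number of model parameters and L̂ the maximized likelihood. The log-likelihood values below are invented purely to show the arithmetic of comparing a 2-parameter POD model against one with an added ceiling parameter; the lower value wins, and BIC penalizes the extra parameter more heavily as n grows.

```python
from math import log

def aic(loglik, k):
    """Akaike Information Criterion: 2k - 2*ln(L-hat)."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """Schwarz / Bayes Information Criterion: k*ln(n) - 2*ln(L-hat)."""
    return k * log(n) - 2 * loglik

n = 120
ll_2param = -54.0   # hypothetical max log-likelihood, 2-parameter POD model
ll_3param = -52.5   # hypothetical fit with an added ceiling parameter

# The extra parameter must buy enough log-likelihood to pay its penalty:
print(aic(ll_3param, 3) < aic(ll_2param, 2))        # 111.0 < 112.0 -> True
print(bic(ll_3param, 3, n) > bic(ll_2param, 2, n))  # BIC prefers the simpler model here
```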

- Hands-on individual POD problem-solving

- What is Statistical Experimental Design?
- Variable types
- Nuisance variables
- Objective of Experimental Design
- Factorial experiments
- Categorical variables
- Noise – Probability of False Positive (PFP)
- How to Design an NDE Experiment
- Philosophy of NDE demonstrations
- How many specimens are enough?
- Specimen Design, Fabrication, Documentation, and Maintenance
- Examples of NDE Specimens

- How to avoid common POD analysis Mistakes
- Model-Assisted POD (MAPOD)
- False Positives
- Sensitivity and Specificity
- Receiver Operating Characteristic (ROC) Curve