mh1823 POD
short course & workshop

This two-day POD short course is based on MIL-HDBK-1823A “Nondestructive Evaluation System Reliability Assessment” and uses the all-new mh1823 POD software.

The training presents the latest methods for quantifying your NDE system’s effectiveness, measured as Probability of Detection (POD), and the course uses these state-of-the-art techniques to analyze enterprise data.

The course layout is reverse-chronological: we discuss the analysis before we discuss how to design the experiment that produces the results being analyzed.

Workshop Edition Software
January 2023 

  • 2023 Workshop Edition software has been expanded to provide analysis capability for situations where min(POD) > 0 (a POD “floor”) and max(POD) < 1 (a POD “ceiling”). See POD “Floor” and POD “Ceiling,” and the model sketch after this list.
  • Profile loglikelihood plots of the maximum achievable LLR as a function of an added “floor” or “ceiling” parameter provide numerical and visual assessments of the need for the added model parameter. See screenshots of the Workshop Edition menu. (This isn’t the only Workshop Edition menu difference, of course.)
  • The Akaike Information Criterion (AIC) and the Schwarz Information Criterion (also known as the Bayes Information Criterion, BIC) are also used to assess the statistical significance of the added POD model floor/ceiling parameter(s). See an example from the Workshop Edition software.
  • Workshop Edition software allows for real-time demonstrations of the mechanics of constructing confidence bounds on hit/miss POD vs size curves based on your data.
  • Workshop Edition software facilitates identification and further analysis of aberrant datasets, with on-plot point-and-click identification. See screenshots of the aberrant-case example.
  • Enhanced loglikelihood surface analysis with plotting of individual centroids of datasets having more than one response column.
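
For reference, here is one common way to add floor/ceiling parameters to a hit/miss POD model. This is a sketch in R, not the Workshop Edition implementation; the usual two-parameter curve is rescaled so POD runs from the floor up to the ceiling:

    # Sketch of a POD model with added floor/ceiling parameters
    # (illustrative, not the Workshop Edition code): the ordinary
    # 2-parameter logit curve is rescaled to run from floor to ceiling.
    POD <- function(a, mu, sigma, floor = 0, ceiling = 1) {
      floor + (ceiling - floor) * plogis((log(a) - mu) / sigma)
    }

With floor = 0 and ceiling = 1 this reduces to the ordinary two-parameter model, which is why the profile loglikelihood, AIC, and BIC comparisons above can judge whether the added parameter is statistically justified.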

The nominal on-site class size is 10 participants (15 maximum). In the Workshop we will work through examples using real data, and time will be allocated for analyzing your enterprise-specific problems. In addition to the classroom presentations, each participant will be given:

  • A CD containing the R statistical computing environment, the standard mh1823 POD software (v7.15, June 2022), and the 2023 Workshop Edition software, so that each participant with a Windows laptop can perform the analyses immediately.
  • A bound hard copy of the presentation slides to facilitate note-taking. (It’s hard to sketch slides, take notes AND listen.)
  • An indexed PDF copy of the 2009 MIL-HDBK-1823A handbook.

Course Content Details – Day One:

50 Years of Quantitative POD History (Understanding how we got here.)

  • 1970s – “Have Cracks – Will Travel”
  • Early 1980s – Flight propulsion manufacturers’ individual efforts to improve POD analysis
  • Late 1980s – USAF, UDRI, GEAE, P&W, and Allied-Signal (now Honeywell) working group produced MIL-HDBK-1823, “Nondestructive Evaluation System Reliability Assessment” draft. I was the editor and lead author.
  • 1993 – NATO AGARD sponsored 2-day POD Short Course based on MIL-HDBK-1823 that I presented in Ankara, Turkey, Lisbon, Portugal, Patras, Greece, and Ottawa, Canada.
  • Late 1990s – USAF officially publishes MIL-HDBK-1823, 30 April 1999
  • Early 2000s – Model-Assisted POD gains a following
  • February 2007 – Draft of the revised and updated MIL-HDBK-1823 released for comment, with all-new software incorporating the latest statistical best practices for NDE data.
  • 7 April 2009 – The 2007 update was released by the USAF as MIL-HDBK-1823A.

Probability and Confidence

  • What is Probability? (Two incompatible definitions; both are correct)
  • What is Probability of Detection?
  • What is Confidence and how is that distinct from Probability?
  • What is likelihood? How is it related to, but distinct from, probability?
  • What does “90/95” really mean? (See the short numerical illustration after this list.)
    • Are all methods for assessing a90/95 equally effective? (Answer: No.)
  • Two kinds of NDE data. (There are more, but this is a two-day course.)
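
A quick numerical illustration of what a binomial “90/95” demonstration entails (the course discusses why not all 90/95 methods are equally effective): if all 29 of 29 inspections are hits, the one-sided 95% lower confidence bound on POD just clears 0.90.

    # The classic "29 of 29" arithmetic behind a 90/95 demonstration:
    # with 29 hits in 29 trials, the exact one-sided 95% lower bound
    # on POD is 0.05^(1/29), just above 0.90.
    0.05^(1/29)                                           # 0.9019...
    binom.test(29, 29, alternative = "greater")$conf.int  # same lower bound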

How to Install the mh1823 POD Software and Workshop Addendum Software

This short course comes with a self-contained CD with R installed along with the necessary ancillary R routines, the installed mh1823 POD software, and the example datasets – everything. Put the CD in the drive, make a desktop icon, and you’re up and running in 30 seconds. If you have already made the icon, put the CD in the drive, click the icon, and you’re running in 5 seconds. (For completeness, we will spend some class time demonstrating how to install R from the internet and then how to install the mh1823 POD package; a generic sketch follows.)
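
For participants installing from the internet rather than from the CD, the generic R-side steps look like the sketch below; the file name is a placeholder, not the actual mh1823 distribution layout, which the course materials supply.

    # Generic internet-install sketch. Install R from
    # https://cran.r-project.org, start R, then load the mh1823 POD
    # code into the session:
    source("mh1823_POD.R")   # hypothetical file name; use the one
                             # supplied with the course materials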

How to Analyze \(\hat{a}\) vs \(a\) Data

  • Background:
    • The “ideal” POD(a) curve
    • Why \(\hat{a}\) vs \(a\) data is different from Hit/Miss data
    • When the \(\hat{a}\) response is less informative than simple Hit/Miss
  • \(\hat{a}\) vs \(a\) Data Analysis (a minimal code sketch follows this list)
    • Read \(\hat{a}\) vs \(a\) data
      • Preliminary Data Assessment: Plot the data and choose the best \(\hat{a}\) vs \(a\) model.
    • Build the \(\hat{a}\) vs \(a\) linear model
      • Four \(\hat{a}\) vs \(a\) Requirements (Warning: If any of these assumptions is false, or if the model is a line and the data describe a curve, then the subsequent POD analysis will be wrong even though the computational steps are correct.)
    • How to go from \(\hat{a}\) vs \(a\) to POD vs a – The Delta Method
      • Compute the transition matrix from \(\hat{a}\) vs \(a\) to POD vs a
      • The POD(a) Curve
    • Wald method to compute \(\hat{a}\) vs \(a\) confidence bounds
      • Plot POD(a); compute POD confidence bounds
    • Analyze the noise; compute the false-positive rate
  • Classwork –
    • Analyze a simple \(\hat{a}\) vs \(a\) example.
    • Effects of analysis decisions on a90/95
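
A minimal sketch of the \(\hat{a}\) vs \(a\) backbone in plain R (illustrative names, not the mh1823 functions), assuming a log-log model and a decision threshold a.dec: the POD parameters follow from the regression intercept, slope, and residual scatter.

    # Minimal ahat vs a sketch: regress log(ahat) on log(a), then let
    # the decision threshold a.dec convert the fit into a POD(a) curve.
    fit   <- lm(log(ahat) ~ log(a), data = d)   # d: your ahat vs a data
    b0    <- coef(fit)[1]                       # intercept
    b1    <- coef(fit)[2]                       # slope
    tau   <- summary(fit)$sigma                 # residual standard deviation
    a.dec <- 100                                # decision threshold (units of ahat)
    mu    <- (log(a.dec) - b0) / b1             # POD location parameter
    sigma <- tau / b1                           # POD scale parameter
    POD   <- function(a) pnorm((log(a) - mu) / sigma)

From this fit, a90 = exp(mu + qnorm(0.9) * sigma); the Delta-method transition matrix and Wald bounds listed above supply the 95% confidence bound that turns a90 into a90/95.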

How to Analyze \(\hat{a}\) vs \(a\) Data with Repeated Measures

(Multiple inspections of the same Target Set)

  • Why repeated measures are not simply “more data”
    • Red apples and green apples

Special Situations

  • How to recognize pathological \(\hat{a}\) vs \(a\) data (which is unfortunately common)
  • Special difficulties with Field-Finds – When mh1823 methods are not enough

How to Analyze Noise

  • Understanding Noise
    • Definition of Noise
    • Choosing a probability density to describe the noise
  • False Positive Analysis (with \(\hat{a}\) vs \(a\) data; a code sketch follows this list)
  • Noise analysis and the Combined \(\hat{a}\) vs \(a\) Plot
  • The POD(a) Curve
  • Miscellaneous mh1823 POD algorithms
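
A hedged sketch of the false-positive arithmetic (the lognormal choice and column names are illustrative, not mh1823 defaults): describe the noise with a fitted density, then report the probability above the decision threshold as the probability of false positive (PFP).

    # Illustrative PFP calculation: fit a lognormal density to the noise
    # (ahat recorded where no target exists) and take the mass above the
    # decision threshold.
    library(MASS)
    noise <- d$ahat[d$target == 0]    # assumed column layout
    fit   <- fitdistr(noise, "lognormal")
    a.dec <- 100                      # decision threshold
    PFP   <- plnorm(a.dec, meanlog = fit$estimate["meanlog"],
                    sdlog = fit$estimate["sdlog"], lower.tail = FALSE)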

Analysis of Enterprise \(\hat{a}\) vs \(a\) Data (optional topic)

  • Hands-on individual POD problem-solving

Course Content Details – Day Two:

How to Analyze Binary (Hit/Miss) Data

  • Understanding binary data – why ordinary regression methods fail
  • Read Hit/Miss data
  • Build the GLM (Generalized Linear Model) – a minimal code sketch follows this list
    • Understanding Generalized Linear Models
    • Choosing Link Functions
  • Hit/Miss Confidence Bounds
    • Not all statistical confidence methods are equally accurate
    • How the LogLikelihood Ratio Criterion Works
    • How to compute likelihood ratio confidence bounds
    • Constructing Hit/Miss Confidence Bounds
  • Classwork –
    • Analyze a simple Hit/Miss example.
    • Effects of Hit/Miss analysis decisions on a90/95
  • Special Situations
    • Choosing an Asymmetric Link Function
    • How to analyze Repeated Measures
    • How to analyze Disparate Data correctly
    • How to analyze Hit/Miss Noise
    • How to recognize Hit/Miss pathological data
  • Not covered in MIL-HDBK-1823A – How to analyze Binary POD Floor/Ceiling Data
    • How to plot max(loglikelihood ratio) as a function of a 3rd POD model parameter
    • How to construct confidence bounds on the Floor or Ceiling parameter
    • How to compute the Akaike Information Criterion (AIC) and the Schwarz Criterion (Bayes Information Criterion, BIC)
    • How to create a real-time animated construction of confidence bounds on POD vs size curves.
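
A minimal sketch of the hit/miss backbone in plain R (standard glm, not the mh1823 implementation): logistic regression of the binary response on log size, with AIC and BIC available for the model comparisons listed above.

    # Minimal hit/miss GLM sketch: logistic regression of hit (0/1)
    # on log crack size.
    fit <- glm(hit ~ log(a), family = binomial(link = "logit"), data = d)
    POD <- function(a) predict(fit, newdata = data.frame(a = a),
                               type = "response")
    AIC(fit)   # Akaike Information Criterion
    BIC(fit)   # Schwarz (Bayes) Information Criterion

The likelihood ratio confidence bounds discussed above come from profiling this fit’s loglikelihood rather than from the Wald approximation; that is the construction the Workshop Edition animates in real time.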

Analysis of Enterprise Hit/Miss Data (optional topic)

  • Hands-on individual POD problem-solving

Statistical Design Of eXperiments (DOX)

  • What is Statistical Experimental Design?
  • Variable types
  • Nuisance variables
  • Objective of Experimental Design
  • Factorial experiments
  • Categorical variables
  • Noise – Probability of False Positive (PFP)
  • How to Design an NDE Experiment (a small layout sketch follows this list)
  • Philosophy of NDE demonstrations
    • How many specimens are enough?
    • Specimen Design, Fabrication, Documentation, and Maintenance
    • Examples of NDE Specimens
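
As a small illustration of laying out a factorial inspection experiment (factors and levels invented for this example, not recommendations):

    # Illustrative factorial layout: every combination of probe,
    # operator, and specimen (factors/levels invented for example).
    design <- expand.grid(probe    = c("A", "B"),
                          operator = c("op1", "op2", "op3"),
                          specimen = sprintf("S%02d", 1:10))
    nrow(design)   # 2 x 3 x 10 = 60 inspections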

Miscellany – (Other things you should know)

  • How to avoid common POD analysis mistakes
  • Model-Assisted POD (MAPOD)
  • False Positives
  • Sensitivity and Specificity (see the short example after this list)
  • Receiver Operating Characteristic (ROC) Curve
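
For reference, sensitivity and specificity from a 2x2 decision table (the counts below are invented for illustration):

    # Sensitivity and specificity from a 2x2 decision table
    # (counts invented for illustration only):
    hits        <- 45; misses    <- 5    # outcomes where a flaw is present
    false.calls <- 8;  true.negs <- 42   # outcomes where no flaw is present
    sensitivity <- hits / (hits + misses)                  # P(hit | flaw) = 0.90
    specificity <- true.negs / (true.negs + false.calls)   # P(no call | no flaw) = 0.84

The ROC curve plots sensitivity against (1 - specificity) as the decision threshold varies.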

Training Review & Course Wrap-up