mh1823 POD
Udemy short course & workshop

Please note that travel restrictions prevent me from conducting my 2-day POD short course and workshop on the statistical foundations of the MIL-HDBK-1823A methods face-to-face.  The course uses the advanced version of the mh1823 POD software (available gratis by request to me) plus the all-new Addendum software, available in this Udemy course and nowhere else.

The course layout is reverse-chronological: we discuss the engineering and statistical fundamentals, then the analysis, and only then how to design the experiment that produces the data being analyzed.

The course uses proven methods of mathematical statistics (quite unlike the boring content of your Stats 101 class).  Understanding why these methods work is an integral part of the course and the foundation for the (now world-standard) mh1823 POD software.

This page is under construction; further details may be available by email request.  (I will use these responses to gauge the potential demand for this course.)

Charles Annis, P.E.
Charles.Annis@Statistical-Engineering.com
annis@gmail.com

561-352-9699

N E W: 2021 Workshop Addendum

  • 2021 Workshop Addendum software has been significantly expanded to provide analysis capability for situations where min(POD) > 0 (POD “floor”) and max(POD) < 1 (POD “ceiling”).  See POD “Floor” and POD “Ceiling.”
  • Also new to the Addendum: the Akaike Information Criterion (AIC) and the Schwarz Information Criterion (also known as the Bayesian Information Criterion, BIC) are used to assess the efficacy of the added POD model floor/ceiling parameter(s).
  • In addition to supporting the new analysis capabilities, the Udemy Workshop software allows for real-time demonstrations of the mechanics of constructing confidence bounds on hit/miss POD vs size curves.

Course Content Details – Part One:

50 Years of Quantitative POD History (Understanding how we got here.)

  • 1970s – “Have Cracks – Will Travel”
  • Early 1980s – Flight propulsion manufacturers’ individual efforts to improve POD analysis
  • Late 1980s – USAF, UDRI, GEAE, P&W, and Allied-Signal (now Honeywell) working group produced MIL-HDBK-1823, “Nondestructive Evaluation System Reliability Assessment” draft. I was the editor and lead author.
  • 1993 – NATO AGARD-sponsored 2-day POD Short Course based on MIL-HDBK-1823, which I presented in Ankara, Turkey; Lisbon, Portugal; Patras, Greece; and Ottawa, Canada.
  • Late 1990s – USAF officially publishes MIL-HDBK-1823, 30 April 1999
  • Early 2000s – Model-Assisted POD gains a following
  • February 2007 – Draft of the revised and updated MIL-HDBK-1823 released for comment, with all-new software incorporating the latest statistical best practices for NDE data.
  • 7 April 2009 – The 2007 update was released by the USAF as MIL-HDBK-1823A.

Probability and Confidence

  • What is Probability? (Two incompatible definitions; both are correct)
  • What is Probability of Detection?
  • What is Confidence and how is that distinct from Probability?
  • What is likelihood? How is it related to, but distinct from, probability?
  • What does “90/95” really mean? (See the sketch after this list.)
    • Are all methods for assessing a90/95 equally effective? (Answer: No.)
  • Two kinds of NDE data. (There are more, but this is a two-day course.)
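
To make “90/95” concrete, here is a hedged illustration in base R (the language the mh1823 POD software runs in), using the classic 29-of-29 binomial demonstration.  The numbers are illustrative, and this point-count approach is not the curve-fitting method mh1823 POD uses:

    # If all 29 inspections of like-sized targets are hits, the one-sided
    # lower 95% confidence bound on POD is just above 0.90; hence "90/95".
    binom.test(x = 29, n = 29, alternative = "greater", conf.level = 0.95)$conf.int
    0.05^(1/29)   # closed form: alpha^(1/n) = 0.9019..., the same lower bound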

How to install the mh1823 POD software and Workshop Addendum software

This short course comes with a self-contained CD with R installed, the necessary ancillary R routines, the installed mh1823 POD software, and the example datasets – everything. Put the CD in the drive, make a desktop icon, and you’re up and running in 30 seconds. If you have already made the icon, put the CD in the drive, click the icon, and you’re running in 5 seconds. (For completeness, we will spend some class time demonstrating how to install R from the internet and then how to install the mh1823 POD package.)
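
For reference, a minimal by-hand sequence looks like the sketch below; the file name is a placeholder, not the actual distribution name (the CD automates all of this):

    # 1. Install R itself from https://cran.r-project.org (done outside R).
    # 2. From within R, source the mh1823 POD script supplied with the course:
    source("mh1823_POD.R")   # placeholder file name; use the one on the CD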

How to analyze \(\hat{a} \textit{ vs a}\) data

  • Background:
    • The “ideal” POD(a) curve
    • Why \(\hat{a} \textit{ vs a}\) data is different from Hit/Miss data
    • When \(\hat{a}\) response is less informative than simple Hit/Miss
  • \(\hat{a} \textit{ vs a}\) Data Analysis
    • Read \(\hat{a} \textit{ vs a}\) data
      • Preliminary Data Assessment: Plot the data and choose the best \(\hat{a} \textit{ vs a}\) model.
    • Build the \(\hat{a} \textit{ vs a}\) linear model
      • Four \(\hat{a} \textit{ vs a}\) Requirements (Warning: If any of these assumptions is false, or if the model is a line and the data describe a curve, then the subsequent POD analysis will be wrong even though the computational steps are correct.)
    • How to go from \(\hat{a} \textit{ vs a}\) to POD vs. a – the Delta Method (see the sketch after this list)
      • Compute the transition matrix from \(\hat{a} \textit{ vs a}\) to POD vs. a
      • The POD(a) Curve
    • Wald method to compute \(\hat{a} \textit{ vs a}\) confidence bounds
      • Plot POD(a); compute POD confidence bounds
    • Analyze the noise; compute the false-positive rate
  • Classwork –
    • Analyze a simple \(\hat{a} \textit{ vs a}\) example.
    • Effects of analysis decisions on a90/95
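
For orientation, here is a minimal, self-contained R sketch of the \(\hat{a} \textit{ vs a}\) chain under the standard assumptions (log-log linearity, normal errors with constant variance).  All names and numbers are illustrative, not the mh1823 POD interface:

    set.seed(1)
    a     <- exp(seq(log(0.1), log(2), length.out = 40))    # true target sizes
    ahat  <- exp(-0.2 + 1.1 * log(a) + rnorm(40, sd = 0.3)) # simulated responses
    fit   <- lm(log(ahat) ~ log(a))                         # the ahat vs a line
    b     <- coef(fit); tau <- summary(fit)$sigma
    a.dec <- 0.25                             # decision threshold on ahat
    # POD(a) = P(ahat > a.dec) = pnorm((log(a) - mu) / sigma), with:
    mu    <- (log(a.dec) - b[1]) / b[2]
    sigma <- tau / b[2]
    a50   <- exp(mu)                          # size with 50% POD
    a90   <- exp(mu + qnorm(0.90) * sigma)    # size with 90% POD
    # mh1823 POD propagates the covariance of the estimates through the
    # delta method to put Wald confidence bounds on the POD(a) curve.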

How to Analyze \(\hat{a} \textit{ vs a}\) data with Repeated Measures

(Multiple inspections of the same Target Set)

  • Why repeated measures are not simply “more data”
    • Red apples and green apples

Special Situations

  • How to recognize pathological \(\hat{a} \textit{ vs a}\) data (which is unfortunately common)
  • Special difficulties with Field-Finds – When mh1823 methods are not enough

How to Analyze Noise

  • Understanding Noise
    • Definition of Noise
    • Choosing a probability density to describe the noise
  • False Positive Analysis (with \(\hat{a} \textit{ vs a}\) data) – see the sketch after this list
  • Noise analysis and the Combined \(\hat{a} \textit{ vs a}\) Plot
  • The POD(a) Curve
  • Miscellaneous mh1823 POD algorithms
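
As a taste of the noise analysis, here is a hedged sketch of estimating the probability of false positive from noise-only responses, assuming a lognormal noise density (one common choice; choosing the density is itself a course topic, and these names are illustrative):

    set.seed(2)
    noise <- rlnorm(200, meanlog = log(0.05), sdlog = 0.5)  # simulated noise
    m <- mean(log(noise)); s <- sd(log(noise))              # lognormal fit
    a.dec <- 0.25                                           # decision threshold
    PFP <- plnorm(a.dec, m, s, lower.tail = FALSE)          # P(noise > a.dec)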

Course Content Details – Part Two:

How to analyze Binary (Hit/Miss) Data

  • Understanding binary data – why ordinary regression methods fail
  • Read Hit/Miss data
  • Build the GLM (Generalized Linear Model) – see the sketch after this list
    • Understanding Generalized Linear Models
    • Choosing Link Functions
  • Hit/Miss Confidence Bounds
    • Not all statistical confidence methods are equally accurate
    • How the Loglikelihood Ratio Criterion Works
    • How to compute likelihood ratio confidence bounds
    • Constructing Hit/Miss Confidence Bounds
  • Classwork –
    • Analyze a simple Hit/Miss example.
    • Effects of Hit/Miss analysis decisions on a90/95
  • Special Situations
    • Choosing an Asymmetric Link Function
    • How to analyze Repeated Measures
    • How to analyze Disparate Data correctly
    • How to analyze Hit/Miss Noise
    • How to recognize Hit/Miss pathological data
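
The heart of the hit/miss analysis is a generalized linear model.  A bare-bones sketch in base R, with illustrative names and simulated data (mh1823 POD wraps this in its own interface):

    set.seed(3)
    a   <- runif(60, 0.05, 1.0)               # target sizes
    hit <- rbinom(60, 1, plogis(-4 + 9 * a))  # simulated hit/miss outcomes
    fit <- glm(hit ~ a, family = binomial(link = "logit"))
    b   <- coef(fit)
    a50 <- -b[1] / b[2]                       # size where fitted POD = 0.50
    a90 <- (qlogis(0.90) - b[1]) / b[2]       # size where fitted POD = 0.90
    # For a90/95, mh1823 POD uses the loglikelihood ratio criterion,
    # which is more accurate for binary data than the Wald approximation.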

N E W:  Not covered in MIL-HDBK-1823A
– How to analyze Binary POD Floor/Ceiling Data

  • How to plot max(loglikelihood ratio) as a function of a 3rd POD model parameter
  • How to construct confidence bounds on the Floor or Ceiling parameter
  • How to compute the Akaike Information Criterion (AIC) and the Schwarz Criterion (Bayesian Information Criterion, BIC) – see the sketch after this list
  • How to create a real-time animated construction of confidence bounds on POD vs size curves.
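
A hedged sketch of the idea behind these steps: add a floor parameter to the hit/miss model, \(POD(a) = F + (1-F)\,\mathrm{logistic}(b_0 + b_1 a)\), fit it by direct maximum likelihood, and score it with AIC and BIC.  All names and numbers below are illustrative, not the Addendum’s interface:

    set.seed(4)
    a   <- runif(120, 0.05, 1.0)                              # target sizes
    hit <- rbinom(120, 1, 0.10 + 0.90 * plogis(-5 + 10 * a))  # true floor = 0.10
    negll <- function(par) {                  # negative binomial loglikelihood
      fl <- plogis(par[3])                    # keeps the floor inside (0, 1)
      p  <- fl + (1 - fl) * plogis(par[1] + par[2] * a)
      -sum(dbinom(hit, 1, p, log = TRUE))
    }
    fit3 <- optim(c(-5, 10, qlogis(0.1)), negll)
    k <- 3; n <- length(hit)
    AIC3 <- 2 * k + 2 * fit3$value            # AIC = 2k - 2 logL
    BIC3 <- k * log(n) + 2 * fit3$value       # BIC = k log(n) - 2 logL
    # Compare with the two-parameter fit's AIC/BIC to judge whether the
    # floor parameter earns its keep.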

Statistical Design Of eXperiments (DOX)

  • What is Statistical Experimental Design?
  • Variable types
  • Nuisance variables
  • Objective of Experimental Design
  • Factorial experiments (see the sketch after this list)
  • Categorical variables
  • Noise – Probability of False Positive (PFP)
  • How to Design an NDE Experiment
  • Philosophy of NDE demonstrations
    • How many specimens are enough?
    • Specimen Design, Fabrication, Documentation, and Maintenance
    • Examples of NDE Specimens
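
To illustrate the factorial idea, here is a tiny full-factorial layout built with base R’s expand.grid (factor names and levels are hypothetical):

    design <- expand.grid(
      operator = c("A", "B", "C"),
      probe    = c("P1", "P2"),
      specimen = paste0("S", 1:8)
    )
    nrow(design)   # 3 x 2 x 8 = 48 inspections: every combination once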

Miscellany – (Other things you should know)

  • How to avoid common POD analysis mistakes
  • Model-Assisted POD (MAPOD)
  • False Positives
  • Sensitivity and Specificity
  • Receiver Operating Characteristic (ROC) Curve (see the sketch after this list)
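
A short sketch tying the last three items together: sweep the decision threshold over simulated signal and noise responses to get sensitivity, specificity, and an empirical ROC curve (all numbers illustrative):

    set.seed(5)
    signal <- rlnorm(100, log(0.40), 0.4)   # responses from real targets
    noise  <- rlnorm(100, log(0.10), 0.4)   # noise-only responses
    thr  <- sort(c(signal, noise))
    sens <- sapply(thr, function(t) mean(signal > t))  # true-positive rate
    spec <- sapply(thr, function(t) mean(noise <= t))  # true-negative rate
    # plot(1 - spec, sens, type = "l")   # ROC: TPR vs false-positive rate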

Course Review & Wrap-up