Creating Standards for Industry-wide Quality Metrics

RT-313 Topic Summary

Overview

RT-313 was tasked with creating a standard metric for quality that can be used to effectively measure, categorize, and benchmark quality performance across the project delivery process. The capital facilities delivery industry recognizes the need to measure quality, but the fact that many projects utilize different execution models makes it difficult to standardize quality metrics. This lack of standardization leads to variability in performance assessment that can negatively affect an organization’s bottom line.

RT-313 developed a quality metrics framework to create industry-wide benchmarks that would be available for internal and external performance assessments, and to deliver a method for evaluating and measuring the effectiveness of new processes or ideas. The analysis of metrics can be used to improve project processes and reduce risks, waste, and errors, resulting in projects that meet desired performance, cost, and schedule targets. A consistent quality metric will lead to uniform improvement in the capital facilities delivery industry.

The RT-313 research determined that the following three enablers allow for measuring quality:

  1. A common language allows each organization to use its existing data on unplanned outcomes (e.g., audit findings, observations, inspections, or test results) and unplanned events (e.g., variations to design, defects in products or errors in services requiring correction, and failure to provide a satisfactory deliverable to the customer). Common definitions are used to map and filter these data, providing a consistent measurement technique from project to project and between organizations.
  2. A common quality metric can be applied throughout the entire capital project delivery life cycle. The metric can be utilized by all parties involved in the capital facilities delivery industry – owners, engineers, constructors, contractors, and suppliers – to benchmark performance in their organizations or within the industry. Since it replicates safety metrics, the common quality metric can be easily understood and adopted within an organization.
  3. A defined work process or approach can enable each organization to establish a value case to meet its business plan and to use the metric to benchmark comparisons between projects, as well as to drive continuous improvement.

 

Key Findings and Implementation Tools

1 : Common Language

The common language is documented in the Quality Pyramid, similar to the Safety Pyramid. Definitions of the terms follow the pyramid.

Unplanned Events:

  • Failure: Does not meet specifications; not detected until turned over to the intended user
  • Defect: Does not meet specifications; must be corrected
  • Variation: Does not meet specifications; granted a one-time acceptance

Unplanned Outcomes of Planned Activities:

  • Finding: Evidence from planned activities that we are not following our practices

Planned Activities:

  • Appraisal: Team checks and assessments confirming that we follow our practices
  • Prevention: Quality management activities, e.g., training
Reference: (RS313-1)
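The mapping of existing quality records onto the common language can be sketched in code. The record format and flag names below are illustrative assumptions, not part of RS313-1; the point is that a single set of definitions classifies raw data consistently across projects.

```python
# Hypothetical sketch: mapping raw quality records onto the RT-313
# common language. The record fields (boolean flags) are assumptions.

def classify(record):
    """Map one raw quality record to a Quality Pyramid category."""
    if record.get("escaped_to_user"):        # not caught until turnover
        return "failure"
    if record.get("correction_required"):    # out of spec, must be fixed
        return "defect"
    if record.get("accepted_as_is"):         # out of spec, one-time acceptance
        return "variation"
    if record.get("practice_not_followed"):  # evidence from a planned activity
        return "finding"
    return "appraisal"                       # planned check, no issue found

records = [
    {"correction_required": True},
    {"accepted_as_is": True},
    {"escaped_to_user": True},
    {},
]
counts = {}
for r in records:
    cat = classify(r)
    counts[cat] = counts.get(cat, 0) + 1
print(counts)  # {'defect': 1, 'variation': 1, 'failure': 1, 'appraisal': 1}
```

Once every record carries a common-language category, the counts can be filtered and compared between projects or organizations regardless of the execution model that produced the data.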

2 : Common Quality Metric

RT-313 developed the Quality Performance Rate (QPR):

QPR = (Σi (wi × Ni) / Total Work Hours) × 200,000

where:
QPR = Quality Performance Rate
Ni = Number of unplanned quality events of type i (the lagging indicators: variations, defects, and failures)
wi = Weight of severity level for quality event i (see Key Finding #3)
Total Work Hours = Labor hours expended during the measurement period

The per-200,000-work-hour normalization mirrors the convention used for safety recordable rates, consistent with the metric's deliberate replication of safety metrics.
Reference: (RS313-1)

3 : Severity Model for QPR

Within each unplanned event, a severity scale can be used to further assess the project impact of defects and failures. RT-313 utilized qualitative severity levels of Low (L), Medium (M), and High (H) for defects and failures. The levels are established by each organization based on the quality event's estimated impact on project safety, cost, and schedule. Note that the team assessed that variations do not require a severity scale: because the identified quality issue is accepted as-is, a variation's impact on safety, cost, and/or schedule is always considered Low.


 

Note: % = Impact of the quality event as a percent of total project value.

Severity Weighting Scale (wi):

  • Variation = 1
  • Defect – Low Severity = 2
  • Defect – Medium Severity = 4
  • Defect – High Severity = 6
  • Failure – Low Severity = 8
  • Failure – Medium Severity = 16
  • Failure – High Severity = 24

Reference: (RS313-1)
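Combining the severity weights above with the QPR formula gives a small, testable calculation. This is a minimal sketch, not the RS313-1 implementation: the event-type keys are illustrative, and the per-200,000-work-hour normalization is assumed by analogy with safety recordable rates.

```python
# Sketch of the QPR calculation using the RT-313 severity weights.
# Key names and the 200,000-hour normalization are assumptions here.

SEVERITY_WEIGHTS = {
    "variation": 1,
    "defect_low": 2,
    "defect_medium": 4,
    "defect_high": 6,
    "failure_low": 8,
    "failure_medium": 16,
    "failure_high": 24,
}

def qpr(event_counts, work_hours, normalization=200_000):
    """Quality Performance Rate: weighted event count per normalized work hours."""
    weighted = sum(SEVERITY_WEIGHTS[e] * n for e, n in event_counts.items())
    return weighted * normalization / work_hours

# Example: 3 variations, 2 low-severity defects, 1 medium-severity failure
# over 500,000 labor hours -> weighted count = 3 + 4 + 16 = 23.
events = {"variation": 3, "defect_low": 2, "failure_medium": 1}
print(round(qpr(events, work_hours=500_000), 2))  # 9.2
```

Because the weights grow sharply from variation (1) to high-severity failure (24), a single escaped failure moves the QPR far more than many accepted variations, which is the behavior the severity model is designed to reward.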

4 : Common Work Process

RT-313 developed a Common Work Process to help organizations implement a process to establish a QPR for one project or across the organization's portfolio of projects. The process is a roadmap designed to help the organization's quality management team prepare a value improvement presentation for the leadership team. The roadmap provides a grassroots (step-by-step) approach for piloting implementation on a project, accessing the organization's existing quality data, developing a collection plan, analyzing the data, and ultimately determining whether the QPR should become a standard metric for the organization.

Reference: (RS313-1)

5 : QPR Spreadsheet

RT-313 created a spreadsheet to assist organizations in consolidating and analyzing QPR data at the project level. The spreadsheet calculates the periodic (i.e., monthly or quarterly) and cumulative QPR based on the frequency and severity of quality events and the labor hours expended.
Reference: (RS313-1)
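The spreadsheet's periodic-versus-cumulative logic can be sketched as follows. This is an illustrative reconstruction, not the RS313-1 spreadsheet itself: each period supplies a weighted event count (per the Key Finding #3 weights) and labor hours, and the 200,000-hour normalization is assumed by analogy with safety rates.

```python
# Illustrative sketch of the QPR spreadsheet logic: periodic and
# cumulative QPR per reporting period. Input format is an assumption.

def qpr_series(periods, normalization=200_000):
    """Return (periodic, cumulative) QPR lists.

    Each period is a (weighted_events, work_hours) pair, where
    weighted_events is the severity-weighted event count for that period.
    """
    periodic, cumulative = [], []
    total_weighted = total_hours = 0
    for weighted, hours in periods:
        total_weighted += weighted
        total_hours += hours
        periodic.append(weighted * normalization / hours)        # this period only
        cumulative.append(total_weighted * normalization / total_hours)  # to date
    return periodic, cumulative

monthly = [(10, 100_000), (4, 80_000), (6, 120_000)]
per, cum = qpr_series(monthly)
print([round(x, 1) for x in per])  # [20.0, 10.0, 10.0]
print([round(x, 1) for x in cum])  # [20.0, 15.6, 13.3]
```

The cumulative series smooths month-to-month noise, which is why the spreadsheet reports both: the periodic QPR flags a bad month quickly, while the cumulative QPR tracks the project-level trend.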

Key Performance Indicators

Improved quality performance; reduced risk

Supporting Resources

Presentations (CII Annual Conference & Workshops)

Session: Quality Made Measurable – A Paradigm Shift

Publication Date: 08/2017 | Presenter: | Number of Slides: 40 | Event Code: AC2017

