
SigmaPro Publications

In support of your Lean, Lean Six Sigma, and Design for Six Sigma (DFSS) business transformation efforts, SigmaPro is pleased to provide access to the following technical publications. These publications cover Design for Six Sigma deployment, Design for Six Sigma project identification and selection, Axiomatic Design, requirements definition, multivariate statistical process control, measurement systems analysis, risk analysis, Lean Six Sigma deployment, Lean Six Sigma project identification and selection, and much more.


Publications List


A Study into the Effectiveness of Quality Management Training

Ben Clegg | quality management, training, tools, success factors | Lean and Lean Six Sigma

A study into the effectiveness of quality management training


A focus on tools and critical success factors
Ben Clegg
Aston Business School, Birmingham, UK, and
Chris Rees and Mike Titchen
SigmaPro Ltd, Solihull, UK


Introduction
The purpose of this research was to find out how quality management training can be practised more effectively. This study is timely because the popularity of approaches such as lean thinking (Womack and Jones, 1996) and Six Sigma (Eckes, 2001) has grown dramatically over recent years (Henderson and Evans, 2000; Bendell, 2006), whilst the explicit use of total quality management (TQM) has declined (Clifford, 2001) over the same time frame. This may be because there have been many recent publications about cost savings made by major international companies (Hahn et al., 1999; Chowdhury, 2001; Bhuiyan and Baghel, 2006) using lean thinking and Six Sigma approaches instead of a TQM approach. However, as other companies strive to emulate these successes, they often find it is not always as easy to achieve as others may have claimed (Lee-Mortimer, 2006).
In order to emulate previous successes, it is likely that quality professionals will need to be trained in statistical and heuristic tools as well as in the critical success factors (CSFs) (Rockart, 1979) of quality management (concerning people, implementation, organisational, leadership skills and other softer issues), irrespective of the overall approach named and used (Dahlgaard and Dahlgaard-Park, 2006). Therefore this study emphasises the role and importance of training to make tools and CSFs more effective, rather than the high-level design and evolution of overall methodological approaches such as TQM, lean management or Six Sigma.
Background
The theoretical and statistical basis for modern quality management is extremely well established and dates back to the first half of the twentieth century (Shewhart, 1939). Since then there have been several “gurus” instrumental in furthering this body of knowledge (see Nonthaleerak and Hendry, 2006; Bhuiyan and Baghel, 2005), including, but not limited to, the works of Ishikawa (1968), Crosby (1979), Feigenbaum (1983), Deming (1986) and Juran (1992). The range and depth of available tools is widespread and the CSFs of good practice are varied (Bicheno and Catherwood, 2005). Tools range from complex quantitative analytical ones (e.g. Taguchi’s Design of Experiments (Taguchi et al., 2005)) to simple heuristic ones (e.g. Ishikawa’s Cause and Effect diagrams), while CSFs cover a vast array of associated issues (e.g. from leadership, culture and motivation to implementation and acceptance).
A current debate in the quality management literature focuses on which methodological approach is best; the debate specifically concerns lean thinking (Womack and Jones, 1996), Six Sigma (Eckes, 2001) and TQM (Samson and Terziovski, 1999). Each approach’s strengths and weaknesses have been debated and in-depth comparisons have been made between them (Andersson et al., 2006; Arnheiter and Maleyeff, 2005; Dahlgaard and Dahlgaard-Park, 2006). It is not the intention of this paper to repeat this work, but to build upon its emerging observations, which are that:

[...] the principles, concepts and tools of lean production and Six Sigma quality should not be seen as alternatives to TQM but rather as a collection of concepts and tools, which support the overall principles and aims of TQM [...] the five principles and the aim of lean production as well as the principles and tools behind the Six Sigma process are embedded in the principles, concepts and tools of the holistic management philosophy called TQM [...] It has been shown that the lean production philosophy and the Six Sigma steps are essentially the same, and both have developed from the same root – the Japanese TQM practices (Dahlgaard and Dahlgaard-Park, 2006).
[...] there is a lot to gain if organisations are able to combine these three concepts [...] (Andersson et al., 2006).


Arnheiter and Maleyeff (2005) and Byrne et al. (2007) also endorse combining, rather than differentiating, these approaches. Rich et al. (2006), Clifford (2001) and others also highlight that there is an element of marketing and faddishness to these approaches. The authors concur that there is more to be gained from combining these approaches and using their associated toolsets and CSFs together, instead of trying to distinguish and divide them in order to discuss and use them separately.
This research differs from other recent similar surveys (Anthony and Maneesh, 2005; Ahmed and Hassan, 2003; Neergaard, 1999; Tari, 2005) because it is pitched at evaluating tools and CSFs instead of overall methodological approaches.
Methodology
An initial scoping workshop took place based on the above academic debate; it was conducted with a group of 20 currently practising quality management professionals in a one-day workshop; each professional was from a different company, and the service and manufacturing sectors were evenly represented. To take part in the scoping workshop, the professionals each needed to have had at least three years’ direct and recent experience in quality management teaching or training, and to be willing to share their materials and insights. The scoping workshop was used to design and produce a questionnaire able to give insight into some specific empirically driven research questions (detailed in the results section); it had a control section to collect information about the individual respondents (e.g. level of expertise) and their respective organisations (e.g. service or manufacturing); it also contained a list of 30 CSFs and 77 tools used in contemporary quality management practice.
The CSFs used in the questionnaire are shown in Table I; they were compiled from Feigenbaum’s (1983) (F) “19 Steps”, Ishikawa’s (1968) (I) “11 Points”, Deming’s (1986) (D) “14 Points”, Crosby’s (1979) (C) “14 Steps” and Juran’s (1992) (J) “10 Steps”; these particular gurus were selected as they are all renowned for their comprehensive but concise lists of CSFs relating to quality management. The letter and number in the right-hand column state which guru(s) each CSF originated from and which of their specific points it refers to (e.g. “D5” relates to Deming’s point 5).
The respondents were asked to rate the statements using a Likert scale to say:

• How much they “agreed” with each statement – defined as how significant in theory they thought the statement was (rated from 1 to 5, where 1 = strongly disagree, 3 = indifferent and 5 = strongly agree).

• How much “impact” each statement has had – defined as how well the statement has actually been implemented (rated from 1 to 5, where 1 = very low level of implementation, 3 = medium level of implementation and 5 = very high level of implementation).
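To make this scoring concrete, a minimal sketch in Python is given below; the ratings are entirely hypothetical (the paper’s own analysis was performed in Minitab V.15), and it simply shows how mean “agreement” and “impact” scores per statement could be computed from a respondents-by-statements matrix of Likert ratings:

```python
import numpy as np

# Hypothetical response matrices: rows = respondents, columns = CSF
# statements, entries = Likert ratings on the 1-5 scales defined above.
agreed = np.array([[5, 4, 3],
                   [4, 5, 4],
                   [5, 5, 2]])
impact = np.array([[4, 3, 2],
                   [3, 4, 3],
                   [4, 4, 2]])

# Mean score per statement (column-wise), as used for the rankings
# and box-plots reported later in the paper.
print("mean agreement per CSF:", agreed.mean(axis=0))
print("mean impact per CSF:   ", impact.mean(axis=0))
```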


A list of tools was also compiled from the 20 different sources provided by the scoping workshop attendees (ten commercial training courses and ten academic syllabi), all of whom were currently and actively involved in using the tools for teaching or training purposes. During the workshop a consolidated list of tools was collectively produced and classified into the Six Sigma DMAIC framework, which is shown in Figure 1.

Each respondent was asked to rate each of the 77 tools using a Likert scale to say:

• How well they “understood” the tool (rated from 1 to 5, where 1 = very low level of understanding, 3 = medium level of understanding and 5 = very high level of understanding).

• How highly they perceived the practical “impact” of the tool (rated from 1 to 5, where 1 = very low impact, 3 = medium level of impact and 5 = very high impact).

Other open-ended questions were built into a final section of the questionnaire, which focused on future trends, supporting software and contextual issues, encouraging respondents to comment freely on issues that were then discussed further in subsequent validation workshops.

The survey was sent to members of professional bodies who had relevant experience in quality management initiatives over the last five years. This was done by e-mailing a hyperlink to the survey to members of the Institution of Engineering and Technology (IET) and the British Standards Institution (BSI); the survey was left online for six months (June-December 2006). Each response was checked to see if the person had had adequate experience to answer the questionnaire credibly. The subsequent dataset was analysed using appropriate quality management tools (supported by the Minitab V.15 software); the analyses were debated during two one-day validation workshops, each consisting of 20 different quality management professionals (each workshop had a different set of participants, who were also different to the participants in the scoping study). The results are given below and discussed further in the following sections.
Results and discussion of survey findings
There was an approximate 50 per cent completion rate (79/160) for the questionnaire, and an analysis of the respondents’ profile (percentage calculations are given to two significant figures) shows that over the last five years:

• 65 per cent of respondents had at least one year’s experience in quality management;
• 78 per cent of respondents had been involved in quality improvement projects as sponsors, managers or team members;
• 60 per cent of respondents were involved in quality management projects as part of their everyday work; and
• 85 per cent of respondents had Bachelors’ or postgraduate degrees.

A Pareto profile of the respondents also showed that 50 per cent had predominantly worked in the UK, 30 per cent had worked in the USA and Germany and the remaining 20 per cent in other European countries and Canada. The control data collected validated the respondents’ credibility and enabled the rest of the dataset to be used for answering the empirically driven research questions below.
Question 1(a). What type of training has been the most popular over the last five years?
Over the last five years Six Sigma and lean thinking approaches accounted for 45 per cent of all training. ISO and own “in-house” approaches accounted for a further 33 per cent, whilst the QS9000 and European Foundation for Quality Management (EFQM) approaches did not feature highly (see the Pareto analysis in Figure 2). Few respondents claimed to have cited TQM explicitly as their main approach to quality management within the last five years.
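By way of illustration, a minimal sketch of a Pareto analysis of this kind is given below; the response counts per training approach are hypothetical, as the paper reports only the aggregated percentages shown in Figure 2:

```python
# Hypothetical counts of responses per training approach (illustrative only).
counts = {"Six Sigma": 20, "Lean thinking": 16, "ISO": 14, "In-house": 12,
          "QS9000": 4, "EFQM": 3, "Other": 10}

total = sum(counts.values())
cumulative = 0.0
# Rank approaches from most to least frequent and accumulate their share,
# as a Pareto chart does.
for approach, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
    share = 100 * n / total
    cumulative += share
    print(f"{approach:14s} {share:5.1f}%  cumulative {cumulative:5.1f}%")
```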
Question 1(b). Was training perceived as being successful?
Overall, the data showed that most respondents (73 per cent) thought that initiatives had been a “partial success” (i.e. approximately half of the objectives achieved), 26 per cent felt that initiatives had been a “complete success” (i.e. all objectives achieved), whilst only 1 per cent had not perceived any improvement at all within their programme (i.e. no objectives achieved) (Figure 3). This was a surprising result, as one might have expected everybody involved in practising quality to claim “complete successes” in all aspects; however, there is clearly some scope here to make a big impact by transforming these “partial successes” into “complete successes”; the issue now is how to accomplish this, and which tools and techniques would be most appropriate to do so.
Question 2. Does the role of an individual in a quality initiative affect the way that the success of it is perceived?
The survey suggested that general management (people who were sponsoring/championing the change) were much more likely to describe their deployments as a complete success (65 per cent) than the people who were actively involved in the initiative (35 per cent). This is a statistically significant difference and can be used to infer a general relationship in the population (Fisher’s exact test p = 0.016 and power approximately 0.97). This could mean that claims of complete success may not always be substantiated directly by actual data and analyses.
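A minimal sketch of this kind of test is given below; the 2x2 contingency table is hypothetical, since the paper reports only the percentages and the resulting p-value, not the underlying counts:

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table (illustrative only; will not reproduce p = 0.016).
#                 complete  partial/no success
table = [[13, 6],   # sponsors/champions (general management)
         [7, 14]]   # active participants

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```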
Question 3(a). What is the split in practice between the service and manufacturing sectors?
The data showed that there was a fairly even split between service (51 per cent) and manufacturing (49 per cent) deployments in our sample. This is surprising, as it is a popular current belief that the service sector uses quality management initiatives far less than the manufacturing sector.
Question 3(b). Is there a difference between the perceived success of quality management projects in the service and manufacturing sectors?
The survey suggested that similar percentages of service (28 per cent) and manufacturing (24 per cent) deployments were perceived to be a complete success. This is also surprising, as there is a popular belief that the manufacturing sector has more successes with quality management initiatives.
Question 4(a). What is the split between strategic and operational deployments?
It was found that the deployment of quality management initiatives was evenly split between those led as part of a strategically endorsed programme (53 per cent) and those led from a bottom-up, locally initiated improvement (47 per cent). Although in theory such a balance is thought to be best practice, there had been little evidence to date to suggest it actually happens in practice; it was commonly assumed that most initiatives were practised as top-down deployments by senior management.
Question 4(b). Is there a difference in the method of deployment between service and manufacturing sectors?
The survey suggested that a slightly higher proportion of service deployments were strategic (59 per cent) in comparison to manufacturing deployments (48 per cent). During the validation workshops this was thought to be because the service sector typically had fewer layers of management, which created the impression that initiatives were always being driven from the top, whereas in manufacturing many initiatives were instigated from middle or junior management layers.
Question 4(c). Does the method of deployment affect an initiative’s perceived success?
This was not found to be a significant factor, as for both methods of deployment 26 per cent of initiatives were perceived as a “complete success”. This is interesting to note, as it concurs with theory, which often states that both approaches are required simultaneously for the overall success of a quality programme.
Question 5(a). Which CSFs do people think are most important?
The distributions of these scores can be examined by means of the box-plots shown in Figure 4 (the numbers on the horizontal axis relate to the CSFs detailed in Table I). The first and most important fact to note here is that all of the CSFs were rated above 3.0 (i.e. the point at which quality professionals agree that the CSF is important). This validates the founding CSFs of modern quality management as still being currently applicable; and whilst this is commonly accepted, there was scant evidence prior to this survey to actually support it. The differences between the scores are statistically significant (ANOVA p = 0.000), and the individual average differences are also significant (using Fisher’s Individual Error Rate = 0.023); a minimal computational sketch is given after the list below. The top three highest-rated CSFs were:

(1)    The root cause of problems should be addressed, not the symptoms of them (15A = 4.87).
(2)    It is the responsibility of everyone in the organisation to ensure that quality is built in at the source and that reliance on later inspection is reduced (3A = 4.73).
(3)    An environment that encourages the constant improvement of products and services must be developed (1A = 4.72).
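The following minimal sketch illustrates a one-way ANOVA of this kind, using hypothetical Likert ratings (1-5) for three of the 30 CSF statements; the actual analysis was performed in Minitab V.15 over all 30:

```python
from scipy.stats import f_oneway

# Hypothetical agreement ratings per CSF statement (illustrative only).
csf_15 = [5, 5, 4, 5, 5, 5, 4, 5]  # address root causes, not symptoms
csf_3  = [5, 4, 5, 5, 4, 5, 5, 4]  # quality built in at the source
csf_10 = [3, 4, 3, 2, 4, 3, 3, 4]  # avoid slogans and fashionable jargon

# One-way ANOVA tests whether the mean ratings differ between statements.
f_stat, p_value = f_oneway(csf_15, csf_3, csf_10)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```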


Question 5(b). Which CSFs are most effectively implemented?
A ranking of the CSFs by how well they are perceived to have been implemented is given in the box-plot (Figure 5). In contrast to Question 5(a), not all of these are rated positively; most noticeably, CSF statements No. 23 (Advanced statistical tools must be used in an improvement system) and No. 10 (Slogans and fashionable jargon should not be used) were not thought to be well implemented. The most effectively implemented CSFs are thought to be No. 1 (An environment that encourages the constant improvement of products and services must be developed), No. 14 (The customer requirements must be fully understood to be able to provide good quality) and No. 30 (Data must be used to support and verify the success of the improvement initiative) (for these data Fisher’s Individual Error Rate shows that differences of more than 0.034 are significant and an ANOVA gave a significant reading of p = 0.000).
This perhaps means that more emphasis should be placed on training people in relatively advanced tools and only publicising initiatives which have real substance to them. Training should also seek to emphasise the importance of an inclusive environment where customers and employees share data to solve product and process issues.
Question 5(c). Is there any correlation between people agreeing with the CSFs and implementing them?
The data show that there is a strong correlation between respondents agreeing with the CSFs and how well they are implemented; this is shown in Figure 6 (note that this uses broken scales, as all the statements are agreed with, albeit to a differing extent). The R-squared (adjusted) figure shows that 77 per cent of the variation in implementation can be explained by how much somebody believes in the statement. All of the plots are within the 95 per cent prediction interval (PI), and an ANOVA of this correlation gives values of F = 99.73 and p = 0.000. This means that either the implementation of these CSFs relies extensively on getting the buy-in and belief of participants, or that people like to believe in what they have done. Either explanation reinforces the need to engage people actively in quality initiatives if they are to value them as a worthwhile pursuit.
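A minimal sketch of this agreement-versus-implementation regression is given below, using hypothetical per-CSF mean scores; note that linregress reports the correlation coefficient r, whose square corresponds to the (unadjusted) R-squared quoted above:

```python
from scipy.stats import linregress

# Hypothetical per-CSF mean scores (illustrative only): mean "agreement"
# rating vs. mean "implementation" (impact) rating for ten statements.
agreement      = [4.9, 4.7, 4.7, 4.5, 4.3, 4.1, 3.9, 3.6, 3.4, 3.2]
implementation = [4.1, 3.9, 4.0, 3.6, 3.5, 3.2, 3.1, 2.7, 2.8, 2.4]

fit = linregress(agreement, implementation)
print(f"slope = {fit.slope:.2f}, R-squared = {fit.rvalue ** 2:.2f}")
```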


Question 5(d). Which type of CSFs contribute most to successful deployments?
The respondents reported a mixture of completely and partially successful initiatives, thus allowing us to highlight the CSFs most likely to turn a partially successful deployment into a completely successful one. Table II shows the top ten CSFs ranked from top to bottom based on the statistically significant differences between the respondents reporting complete success and partial success. Each of the top ten CSFs is also categorised as either a “technical quality” or a “cultural acceptance” CSF; this is a classification practised by General Electric (GE), attributable to its very successful recent leader Jack Welch, which holds that an even balance of both is required to ensure a high success rate.
The GE company, under the direction of Jack Welch, became famous for effectively practising quality management and had a heuristic for its successful implementation: E = TQ × CA (Effectiveness = Technical Quality × Cultural Acceptance) (Andersen and Fagerhaug, 2006). When this heuristic is applied to these CSFs (see Table II, right-hand columns) it can be seen that there is a fairly even split between those that can be classified as pertaining to technical quality (TQ) and those that relate to cultural acceptance (CA); they are therefore concordant with GE’s heuristic that successful deployments need a balanced split of technical and cultural aspects. In other words, too much technical and statistical analysis without context can be just as ineffective as culturally sensitive programmes devoid of empirically tested data. It is encouraging to see that an even balance is currently being practised.
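The heuristic itself can be illustrated with a short sketch; the 0-1 scoring scale below is an assumption for illustration, as the source gives only the multiplicative form:

```python
# A minimal sketch of GE's heuristic E = TQ x CA (Andersen and
# Fagerhaug, 2006). The 0-1 scale is an assumption for illustration.
def effectiveness(technical_quality: float, cultural_acceptance: float) -> float:
    """Multiplicative, so a weak factor drags down the whole result."""
    return technical_quality * cultural_acceptance

print(f"{effectiveness(0.9, 0.3):.2f}")  # 0.27 - strong analysis, poor buy-in
print(f"{effectiveness(0.3, 0.9):.2f}")  # 0.27 - strong buy-in, weak analysis
print(f"{effectiveness(0.7, 0.7):.2f}")  # 0.49 - a balanced effort scores highest
```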
Question 6(a). Which tools are most widely understood?
Many respondents had either a low understanding of the tools or had not heard of them at all. The left-hand side of Table III shows the breakdown of the mean responses for the top ten ranked tools for “expert” respondents (Six Sigma Black Belts (BBs) and Master Black Belts (MBBs)) and “non-expert” respondents (Six Sigma Green Belts (GBs) and uncertified respondents)[1]. As might be expected, the advanced statistical tools were higher up in the “expert” rankings than in the “non-expert” rankings, although some key tools were noticeably absent (i.e. hypothesis testing, regression, design of experiments, non-parametric testing, sampling plans and Weibull).
This possibly means that training has had a positive impact on the uptake of advanced tools. However, there seems to be much still to do in order to get a critical mass of employees in organisations familiar with a range of basic tools.
Question 6(b). Which tools do people perceive to have had most impact?
The left-hand side of Table IV shows the top ten ranking of tools by their perceived impact, given for both “experts” and “non-experts”. Overall, tools with a higher impact were higher up the “experts’” ranking when compared to the “non-experts’” ranking; whilst this may have been suspected, there was little previous work before this survey to support it.


About the Author

Ben Clegg

Ben Clegg has been a practising Quality, Project and Operations Manager, and now consults, trains, researches and teaches widely in the area. He has a BSc (Hons) in management science from Loughborough University and a PhD in systems engineering from De Montfort University. He spent a year as a visiting scholar at Stanford University (USA), researching business simulation and organisational and operational improvement, and worked for a spin-out company. He is a chartered engineer.

Attachments

1. Clegg,Rees&Titchen-6sigma.pdf
