Human Performance Technology (HPT) Primer

Human Performance Technology, Performance Technology, and Performance Engineering are all labels for a field of work that seeks to provide “an engineering approach to attaining desired accomplishments from human performers by determining gaps in performance and designing cost-effective and efficient interventions.”

HPT has roots in training and instructional systems, in the Human Resources field, in Environmental/Human Factors Engineering, and in Organizational Development. The human performance HPT is concerned with is performance that accomplishes the business goals of the organization. The training world began systematic instructional design with military training in World War II. Taxonomies of learning objectives were developed in the 1950s, and programmed instruction and cognitive psychology became significant influences in the 1960s; by the late 1960s, Performance-Based Training using Instructional Technology was in practice. In 1970, Joe Harless coined the term Front-end Analysis, arguing that many of the analysis projects he worked on would have been better served if the analysis had been done up front rather than at the end - that is, training had been developed but was not always solving the performance problem. In the late 1970s, Thomas Gilbert proposed methods for engineering the right kind of performance, or “worthy performance.” Through the 1980s the focus on performance flourished and membership in the National Society for Performance and Instruction (NSPI) grew. In the 1990s, business began to recognize the value of Performance Technology because of its link to business goals: the interventions suggested in the analysis were tied back to measures that mattered, and the costs of interventions (even training costs) were weighed against the value of solving the problem.

Front-end Analysis (FEA), Needs Assessment, Performance Analysis - in most contexts, these mean the same thing. Their goal is to identify “performance gaps” which can be “closed” with “interventions.” To find these gaps, the analysis identifies the current and the desired performance states - what exists and what should exist, or actuals and optimals. The optimal set of conditions is best found by identifying Accomplished Performers (or “Exemplars”) and observing their performance.

Causes of performance gaps fall into the following six categories, where there is a lack of:

  • Consequences, incentives or rewards
  • Data, information, and feedback
  • Environmental support, resources, and tools
  • Individual capacity
  • Motives and expectations
  • Skills and knowledge

These are the categories adopted by the International Society for Performance Improvement (ISPI) and come from the work of Tom Gilbert. Other authors cite anywhere from 3 to 11 categories, but the principle remains the same: many things cause performance problems (or areas for improved performance!). Once these causes are identified, appropriate interventions can be designed and implemented to close the performance gap. For instance, gaps caused by a lack of skill or knowledge could be closed with the right education or training. Selecting the right people is an intervention to close gaps caused by a lack of “individual capacity” (physical strength, intelligence). HPT professionals may become involved in the design of interventions, both when the intervention is training and when it is not. For example, the design of a “selecting the right people” intervention can be seen as specifying the requirements of the individual for a job that has been analyzed with an FEA; such an analysis would identify which characteristics matter for producing the outputs that create success in the job. Another example of an intervention is designing feedback systems so that people know both what is expected and when they are doing things right. Sometimes this is good leadership, sometimes it is a technological system that provides feedback to the desktop, and sometimes it is the ergonomic system that provides better feel in the flight controls for the pilot.

When training is identified as the correct performance improvement intervention, HPT professionals employ a systematic method of designing the training program to ensure effectiveness and efficiency. The term Instructional Systems Design (ISD) refers to the broad category of models that use a “systems” approach to training design. While there are hundreds of ISD models, the generic and most commonly referenced model has five phases:

  • Analysis - identifying the end goal of the training (the performance we are trying to effect) and the tasks and steps that comprise that performance. It includes decisions about the nature of the performance: Who performs? Under what conditions? With what tolerances? Which tasks must be taught and which does the student already know? What is the best media choice (computer, classroom, video, other, or some combination)?
  • Design - creating the blueprint for the instruction. Which instructional strategies will work best for this set of learners and this kind of material? What can we do to help learning occur and to ensure learners can actually perform correctly when they get back in the field?
  • Development - writing the lesson plans, writing the programs for computer-based training (called authoring), preparing student handouts, filming the video, etc.
  • Implementation - providing the actual training, carrying out the lesson plan, maintaining course materials, etc.
  • Evaluation - determining the validity of the analysis, design, development and implementation. Did the training do what it was intended to do? Different types of evaluation provide feedback to change or improve the instruction.

Finally, although the model is systematic, it is not strictly linear. Part of this iterative nature is easy to see in the purpose of the evaluation phase. We may find, by way of a pilot course (a test run of the course), that some of our students just are not learning certain portions of the material and we need to go back to the design phase to change the way we teach that material. Sometimes the fix is as simple as adding more examples or practice. But you may also find that some assumptions made in the analysis about the target audience (who the students will be) were incorrect and major changes need to take place. For instance, you may find that the analysis assumed the students would be experienced E-6 petty officers, when in reality your students are E-5s with little or no experience. Obviously, the training would need to change significantly in that case.

Job Analysis. Job Analysis or Job Task Analysis (JTA) is one of the first steps in training design and takes place in the Analysis phase of the ISD model. It identifies the specific tasks performed in a given job and the extent to which training or some other type of performance support is needed to perform the job correctly. Two common methods of conducting a JTA are a survey administered to performers and their supervisors, and a facilitated panel of experts. Either way, the JTA identifies the tasks performed and each task’s importance (relative to job performance), frequency of performance, and difficulty (or complexity). The results enable training developers, by way of proven selection algorithms, to choose the tasks appropriate for training and the best method of providing that training; a simplified, hypothetical sketch of such a selection model follows this paragraph. Sometimes, during this phase, tasks are determined to be best supported by a job aid (a type of performance support). A job aid is a guide for performers that eliminates the need to recall the steps of a task from memory alone. Job aids range from simple checklists to decision tables to complex algorithms; in electronic form a job aid is sometimes called an electronic performance support system (EPSS). Examples run the gamut from preflight checklists, to CG4100 boarding forms, to the wizards in Microsoft Office.
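The “proven algorithms” mentioned above are not spelled out in this primer, so the following Python sketch is only a hypothetical illustration: the 1-5 rating scales, the thresholds, and the recommendation rules are all invented assumptions, meant to show how importance, frequency, and difficulty ratings might be combined to steer each task toward training, a job aid, or no formal intervention.

```python
# Hypothetical sketch of a JTA task-selection model (difficulty/importance/frequency).
# Ratings, thresholds, and rules below are illustrative assumptions, not a published standard.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    importance: int   # 1 (trivial) .. 5 (mission-critical)
    frequency: int    # 1 (rarely performed) .. 5 (performed daily)
    difficulty: int   # 1 (simple) .. 5 (complex)

def recommend(task: Task) -> str:
    """Map a task's ratings to a candidate intervention."""
    if task.importance <= 2 and task.difficulty <= 2:
        return "no formal training (learn on the job)"
    if task.difficulty >= 3 and task.frequency <= 2:
        # Hard but rarely performed: memory alone is unreliable, so provide support.
        return "job aid / performance support"
    if task.difficulty >= 3 and task.importance >= 3:
        return "formal training with practice"
    return "brief instruction plus a job aid"

tasks = [
    Task("Complete boarding report form", importance=4, frequency=2, difficulty=3),
    Task("Perform pre-underway engine checks", importance=5, frequency=5, difficulty=4),
    Task("File routine correspondence", importance=2, frequency=4, difficulty=1),
]

for t in tasks:
    print(f"{t.name}: {recommend(t)}")
```

Note how the sketch steers difficult but rarely performed tasks toward a job aid rather than training, for the same reason the paragraph above gives: recall from memory alone is a poor support for infrequent performance.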

Learning Objectives. In the design phase, specific learning objectives are developed. These statements describe what the student will be able to do upon completion of the training. We commonly refer to them as Performance Objectives, signifying that the training will lead to some performance capability. Robert Gagne (among others) developed a taxonomy of learning objectives with a hierarchy that acknowledges different levels and types of learning. For instance, learning concepts is different from learning problem-solving skills, which in turn is quite different from learning psychomotor skills such as hitting a baseball. The relevance of different types and levels of learning is that we should develop and deliver our training in ways that support that type of learning. Although lecture is still the most common training method, it is often ineffective and usually inefficient. Other typical training methods include:

  • Demonstration - a method where initially the instructor, or a student under an instructor's guidance, shows how the performance in the performance objective is correctly done. This method employs drill and coaching. It is effective with smaller groups because of the involvement of students in the learning process.
  • Discussion - a method characterized by 2-way communication, immediate feedback and peer interaction. Instructors serve as facilitators, mediators, mentors or "devil's advocates."
  • Role play - this method provides a high level of student involvement. It allows students to experience scenarios with varied inputs and outcomes. Though the scenarios are written, structured, and controlled by instructors, the students provide much of the stimuli.
  • Case study - a method that provides students with a set of particular facts or representations to which they must apply their knowledge, experience and judgement to reach a solution.
  • Simulation - an instructional method where students practice new skills in a realistic environment, but one that affords little or no consequences for incorrect actions. In this "safe" environment students are free to err and learn from their mistakes.
  • Hands-on exercises - a phrase rather than a specific instructional method. Normally characterized by students practicing the practical application of training received through earlier methods, particularly when the learning objective has a strong psychomotor element. Settings include labs, simulators, and training platforms (e.g. boarding platforms, 20' shipping containers, the Aids to Navigation buoy farm).

Alternative development. More and more HPT professionals are gravitating toward what is now commonly referred to as “alternative development” or “alternative delivery” forms of training. These rely on advances in technology and, just as importantly (if not more so), on instructional designs that capitalize on those advances. Generally, these methods have very high up-front development costs but much lower per-student delivery and infrastructure costs (instructors, travel, classrooms), so the return on investment becomes attractive as student throughput grows; a simple break-even sketch follows the list below. The most common of these include:

  • Computer-based training (also called Interactive Courseware - ICW)
  • Interactive Video Teletraining
  • Web-based delivery (interactive training over the World-Wide Web, over an Intranet (own organization’s web), or Extranet (two or more organizations’ wide-area network))
  • Simulators (ranging from low to high fidelity)
  • Video training packages
  • Embedded performance support (usually in computer applications; such as context sensitive help, tutorials when you need them, etc.)
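The return-on-investment claim above is easiest to see with numbers. The figures in this Python sketch are entirely hypothetical (they are not from the primer); the point is only that a high one-time authoring cost can be outweighed by low per-student delivery costs once enough students go through the course.

```python
# Hypothetical break-even comparison: instructor-led vs. computer-based training.
# All dollar figures are invented for illustration only.

def total_cost(development: int, per_student: int, students: int) -> int:
    """Total cost = one-time development cost + per-student delivery cost."""
    return development + per_student * students

# Classroom: cheap to develop, expensive to deliver (instructor time, travel, facilities).
classroom_dev, classroom_per_student = 20_000, 900
# CBT/ICW: expensive to author, cheap to deliver to each additional student.
cbt_dev, cbt_per_student = 150_000, 50

for n in (50, 200, 500, 1000):
    classroom = total_cost(classroom_dev, classroom_per_student, n)
    cbt = total_cost(cbt_dev, cbt_per_student, n)
    cheaper = "CBT" if cbt < classroom else "classroom"
    print(f"{n:>4} students: classroom ${classroom:,}  CBT ${cbt:,}  -> {cheaper}")
```

With these made-up numbers the break-even point falls at roughly 150 students: below that the classroom course is cheaper, above it the alternative-delivery course wins.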

Evaluation. One of the most common ways trainers refer to evaluation is through “Kirkpatrick’s Four Levels of Evaluation.”

Level I Evaluation - Reaction. This type of evaluation measures a learner’s reaction to the training, i.e. how did they feel about it? A course critique is commonly used to measure learners’ reaction.

Level II Evaluation - Learning. This is a fancy term for a test. It measures the learner’s mastery of the appropriate skill or knowledge. More specifically, was the learning objective achieved? In training (as opposed to education), learning objectives are usually skill-oriented and as such, the best measure is a performance-based measure, i.e. can the students do something now that they presumably couldn’t do before the training?

Level III Evaluation - Behavior. The fact that a student can demonstrate increased knowledge or skill is no guarantee that behavior on the job will actually change. Level III evaluation attempts to measure actual on-the-job behavior to see whether there was a transfer of skills from the classroom to the working environment. A survey of the performers and their supervisors, or direct observation of job performance by trainers (usually about six months after completion of the course), is referred to as an external evaluation. This type of evaluation helps validate whether training was the correct performance intervention to begin with, but falls short of tying that performance to organizational goals.

Level IV Evaluation - Results. This type of evaluation is the most important, yet the least often conducted, primarily because of its complexity. It attempts to answer whether the organization’s goals were achieved as a result of training the performers. Again, increased skill and even desired behavior on the job do not guarantee desired organizational results. Performing this type of evaluation requires organizational goals that are aligned with specific, measurable performance. This type of evaluation validates whether training was the correct performance intervention to begin with.

Training Societies. The two most widely known professional societies that cater to HPT clientele are the American Society for Training and Development (ASTD) and the International Society for Performance Improvement (ISPI), formerly called the National Society for Performance and Instruction (NSPI). Some additional societies which contribute to learning about performance interventions include:

  • Academy of Management
  • Association for Education & Communications Technology (AECT)
  • Human Factors Society
  • Human Resource Planning Society
  • Society for Industrial/Organizational Psychology
  • Society for Applied Learning Technology (SALT)
  • Society for Human Resource Management (SHRM)

Training Journals. The following are the most popular trade journals:

  • Performance & Instruction (P&I, published by ISPI)
  • Performance Improvement Quarterly (also by ISPI)
  • Training Magazine (by Lakewood Publications)
  • Training and Development (by ASTD)

Training Gurus. The following are perhaps the most widely known pioneers and practitioners of Human Performance Technology:

  • Tom Gilbert (father of HPT, “Human Competence: Engineering Worthy Performance”)
  • Joe Harless (coined the term Front-end Analysis; originator of FEA workshops and Job Aid workshops; recently retired; past president of ISPI and elected to the HRD Hall of Fame; workshop materials now owned by HPT Inc., Dr. Paul Elliott)
  • Robert Mager (known for “3 part objectives”, “What every manager should know about training” and the “Mager six-pack” (set of paperbacks on training))
  • Allison Rossett (San Diego State, “Training Needs Assessment”)
  • Geary Rummler (“Improving Performance: How to Manage the White Space on the Organization Chart” and “Human performance systems”)
  • Dean Spitzer (principal of Boise State Performance & Instructional Technology Master’s Program, delivered via the internet, “Super Motivation”)
  • Roger Kaufman (Florida State, “Needs Assessment: Concept and Application”)
  • Robert Gagne (Florida State, now retired, wrote, “Conditions of Learning”, “Gagne’s Nine Instructional Events”)
  • David Jonassen (Penn State, wrote, “Handbook of Task Analysis”)
  • Peter Dean (Senior Fellow, Wharton School, professor at University of Tennessee, Editor of the Performance Improvement Quarterly and member of the Human Performance Technology Institute Faculty, author of “Performance Engineering at Work”)
  • Gloria Gery (author of books on CBT and Electronic Performance Support Systems)
  • Jack Phillips (author of best selling ASTD book, “Measuring the Effectiveness of Training”)
  • Frank Dwyer (Penn State, past-president of AECT, author of “Visualized Instruction”)
  • David Merrill (Utah State, known for “Component Display Theory” and “ID2”)
  • Charlie Reigeluth (Indiana Univ., editor of “Theories of Instructional Design” and known for “Elaboration Theory of Instruction”)
  • Stolovitch & Keeps (editors of “Handbook of Human Performance Technology”)
  • Marc Rosenberg (AT&T, “Performance Technology: Working the system”)
  • Danny Langdon (principal of Performance International, Inc., “The new language of work”)
  • Ruth Colvin Clark (principal of Center for Performance Technology)
  • John Keller (Florida State, known for Motivation Theory in Instructional Design)
  • Dana Robinson and James Robinson (“Training for Impact” and “Performance Consulting”)