This refers to the process of uncovering the structure of decision making through careful examination of human performance on a particular task. This structure provides the skeleton of behavior that can be used to constrain extrapolations to new domains, or to provide the high-level behavior for an intelligent agent that is expected to act in a human-like fashion.
Data Collection is perhaps the most important aspect of Task Analysis -- without detailed data, it is very difficult to get an accurate picture of behavior. When working with virtual environments, this commonly involves "instrumenting" the environment, either by adding data-collection routines directly to the code (as we did with Unreal Tournament) or by capturing interaction through a pass-through layer (as we did with dTank). In virtual environments we typically employ eye-tracking and record a complete stream of interaction events (sampling at a fixed rate for continuous controls). In many task environments we also conduct interviews and collect either on-line or retrospective task protocols. The key, though, is a picture detailed enough to capture both the cues a person is responding to and the actions they take.
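As a rough illustration of the pass-through approach, the sketch below shows a minimal logging layer that wraps an environment's action handler so that every discrete event is timestamped, while continuous controls are sampled at a fixed rate rather than on every change. All names here (`InteractionLogger`, `sample_control`, etc.) are hypothetical, not part of any actual Unreal Tournament or dTank instrumentation.

```python
import json
import time
from typing import Any, Callable, Dict, List


class InteractionLogger:
    """Minimal pass-through instrumentation layer (hypothetical sketch).

    Discrete interaction events are logged with a timestamp on every
    occurrence; continuous controls are down-sampled to a fixed rate.
    """

    def __init__(self, sample_hz: float = 10.0):
        self.events: List[Dict[str, Any]] = []
        self._sample_interval = 1.0 / sample_hz
        self._last_sample: Dict[str, float] = {}

    def log_event(self, kind: str, data: Dict[str, Any]) -> None:
        self.events.append({"t": time.monotonic(), "kind": kind, **data})

    def sample_control(self, name: str, value: float) -> None:
        # Record a continuous control only if the sample interval has elapsed.
        now = time.monotonic()
        if now - self._last_sample.get(name, 0.0) >= self._sample_interval:
            self._last_sample[name] = now
            self.log_event("control", {"name": name, "value": value})

    def wrap(self, handler: Callable[[str], Any]) -> Callable[[str], Any]:
        # Pass-through: log each action, then forward it unchanged.
        def instrumented(action: str) -> Any:
            self.log_event("action", {"action": action})
            return handler(action)
        return instrumented

    def dump(self) -> str:
        return json.dumps(self.events)
```

In practice the wrapped handler behaves exactly as before; the log accumulates alongside normal interaction and can be serialized for later analysis.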
Given a detailed data set, the next phase involves careful analysis of the recorded data. This analysis often involves identifying a set of metrics for the human behavior that will later allow us to produce a "fit" of the corresponding Cognitive Model to the demonstrated human behavior, as a check on our work. The end result is a comprehensive picture of human performance on the initial task, for later use in modeling.
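Such a "fit" is often quantified with standard goodness-of-fit measures such as root-mean-squared error and Pearson correlation between condition-level human data and model predictions. The sketch below assumes simple aggregate measures (e.g., mean response times per condition); the numbers shown are hypothetical, not data from any of our studies.

```python
import math
from typing import Sequence


def rmse(human: Sequence[float], model: Sequence[float]) -> float:
    """Root-mean-squared error: how far off the model is, in the data's units."""
    return math.sqrt(sum((h - m) ** 2 for h, m in zip(human, model)) / len(human))


def pearson_r(human: Sequence[float], model: Sequence[float]) -> float:
    """Pearson correlation: does the model capture the trend across conditions?"""
    n = len(human)
    mh, mm = sum(human) / n, sum(model) / n
    cov = sum((h - mh) * (m - mm) for h, m in zip(human, model))
    sh = math.sqrt(sum((h - mh) ** 2 for h in human))
    sm = math.sqrt(sum((m - mm) ** 2 for m in model))
    return cov / (sh * sm)


# Hypothetical condition-level mean response times (seconds).
human_rt = [1.2, 1.5, 2.1, 2.4]
model_rt = [1.1, 1.6, 2.0, 2.5]
```

Reporting both measures is common practice, since a model can track the trend well (high r) while being systematically off in magnitude (high RMSE), or vice versa.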