
Human-AI Teaming: State-of-the-Art and Research Needs (2022)

Chapter: 10 HSI Processes and Measures of Human-AI Team Collaboration and Performance

Suggested Citation: "10 HSI Processes and Measures of Human-AI Team Collaboration and Performance." National Academies of Sciences, Engineering, and Medicine. 2022. Human-AI Teaming: State-of-the-Art and Research Needs. Washington, DC: The National Academies Press. doi: 10.17226/26355.

10

HSI Processes and Measures of Human-AI Team Collaboration and Performance

Human-systems integration (HSI) addresses human considerations within the system design and acquisition process, with the aim of maximizing total system performance and minimizing total ownership costs (Boehm-Davis, Durso, and Lee, 2015). HSI incorporates human-centered analyses, models, and evaluations throughout the system lifecycle, starting from early operational concepts through research, design and development, and continuing through operations (NRC, 2007). HSI policies and procedures applicable to defense-acquisition programs have been released (DODI 5000.02T, Enclosure 7 [DOD, 2020]), and HSI standards have been adopted by the DOD (SAE International, 2019). Further, the Human Factors and Ergonomics Society/American National Standards Institute (HFES/ANSI) 400 standard on human readiness levels has been developed, which codifies the level of maturity of a system relative to HSI activities, ranging from human readiness level 1 (lowest) to 9 (highest) (Human Factors and Ergonomics Society, 2021). In this chapter, the committee examines the state-of-the-art, gaps, and research needs associated with design and evaluation processes for human-AI teams, and discusses the need for incorporating HSI considerations into the development of AI systems in addition to the specific design and research issues discussed in preceding chapters.

TAKING AN HSI PERSPECTIVE IN HUMAN-AI TEAM DESIGN AND IMPLEMENTATION

The committee notes that, to date, HSI methods have had limited application to the design of human-AI teams. This is largely attributable to the fact that AI systems are currently being developed primarily in research-and-development environments and for non-military applications, in which HSI methods are not commonly applied. While HSI methods are applied outside of the military, AI solutions are currently being developed in areas where HSI is not common practice (e.g., automobiles, consumer apps). However, lessons learned in the design of earlier AI systems make clear the importance of taking an HSI approach, to avoid developing AI systems that fail to meet user or mission requirements, resulting in lack of system adoption or the need for workarounds when the systems are fielded (NRC, 2007).

The need to consider the context of use throughout the design and evaluation process is an area of consensus in HSI practice (Air Force Scientific Advisory Board, 2004; Boehm-Davis, Durso, and Lee, 2015; Evenson, Muller, and Roth, 2008; NRC, 2007; SAE International, 2019). Context of use includes characteristics of the users, the tasks they perform, how the work is distributed across people and machine agents, the range and complexity of situations that can arise, and the broader sociotechnical “environment in which the system will be integrated” (NRC, 2007, p. 136). Context of use is best determined through field observations and interviews with


domain practitioners (e.g., cognitive task analysis methods) to understand the pragmatics of the work context in which the human-AI team will function (Bisantz and Roth, 2008; Crandall, Klein, and Hoffman, 2006; Endsley and Jones, 2012; Vicente, 1999).

The pitfalls of failing to take the context of use into account continue to be relearned by developers of AI systems. A recent example is a deep-learning system developed for detection of diabetic retinopathy (Beede et al., 2020). While the system achieved levels of accuracy equal to human specialists when tested under controlled conditions, it proved unusable when implemented in actual clinics in Thailand. Beede and colleagues (2020) identified multiple socioenvironmental factors contributing to the system’s ineffective performance that were only uncovered in the field. They noted that there is currently no requirement for AI systems to be evaluated in real-world contexts, nor is it a customary practice. They advocated for human-centered evaluative research to be conducted prior to and alongside more formal technical performance evaluations.

As a positive contrast, Singer et al. (2021) examined development of successful machine learning (ML)-based clinical support systems for healthcare settings. They reported much more active engagement in the field of practice, with back-and-forth between developers and end-users shaping the ultimately successful AI systems. The committee highlights the significance of grounding AI system designs in a deep understanding of the context of use, and the need for continual engagement with users throughout the development and fielding process, to understand the effect of user engagement on practice.

Another point of emphasis in HSI is the need for analysis, design, and testing to ensure resilient performance of the human-AI team in the face of off-normal situations that may be beyond the boundary conditions of the AI system (Woods, 2015; Woods and Hollnagel, 2006). Resilience refers to the capacity of a group of people and/or software agents to respond to change and disruption in a flexible and innovative manner, to achieve successful performance. Unexpected, off-normal conditions are variously referred to as black swans (Wickens, 2009) and dark debt (Woods, 2017), as well as edge, corner, or boundary cases (Allspaw, 2016). These events tend to be rare and often involve subtle, unanticipated system interactions that make them difficult to anticipate ahead of time (Woods, 2017). Allspaw (2012) argued for the need to continuously search for and identify ways to mitigate these anomalies, starting in development and continuing into operation. Neville, Rosso, and Pires (2021) have been developing a framework (called Transform with Resilience through Upgrades to Socio-Technical Systems) that characterizes the sociotechnical system properties that enable human-AI teams to anticipate, adapt, and respond to situations that may be at or beyond the edges of the AI system’s competency envelope. The Neville, Rosso, and Pires framework is to be used to derive tools and metrics for evaluating system resilience and guiding technology integration processes. Gorman et al. (2019) have similarly developed a method of measuring the dynamics of the human and machine components of a system before, during, and after a disruption in a simulated setting, to understand the system interdependencies and possible unintended effects of unexpected events. In the committee’s judgment, these are promising directions, but more research is required to develop and validate effective techniques for design and evaluation of resilient human-AI teaming.
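The idea behind Gorman et al.'s (2019) measurement of system dynamics before, during, and after a disruption can be made concrete with a simple resilience metric. The Python sketch below computes two illustrative quantities from a team-performance time series: the performance drop caused by a disruption and the number of time steps needed to return to near-baseline. The function name, thresholds, and data are assumptions for illustration, not part of any published framework.

```python
# Sketch: quantifying human-AI team resilience from a performance time
# series around a disruption, in the spirit of Gorman et al. (2019).
# The metric definitions, thresholds, and data are illustrative assumptions.

def recovery_metrics(perf, disruption_idx, baseline_frac=0.9):
    """Return (performance drop, steps until recovery) for a disruption.

    perf           -- team-performance score at each time step
    disruption_idx -- time step at which the off-normal event occurs
    baseline_frac  -- fraction of pre-disruption baseline counted as recovered
    """
    baseline = sum(perf[:disruption_idx]) / disruption_idx
    post = perf[disruption_idx:]
    drop = baseline - min(post)                      # depth of the dip
    recovery_time = next(
        (t for t, p in enumerate(post) if p >= baseline_frac * baseline),
        None,                                        # None: never recovered
    )
    return drop, recovery_time

# Simulated run: steady performance, a disruption at t=5, gradual recovery.
perf = [0.9, 0.92, 0.91, 0.9, 0.89, 0.4, 0.5, 0.65, 0.8, 0.88, 0.9]
drop, rec = recovery_metrics(perf, disruption_idx=5)
```

A smaller drop and shorter recovery time would indicate a more resilient team; comparing such numbers across design alternatives is one simple way to evaluate resilience in simulation.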

Key Challenges and Research Gaps

The committee finds three key gaps related to HSI for human-AI teams.

  • Currently, the development of AI systems often does not follow HSI best practices.
  • Context-of-use analyses to inform design and evaluation of AI systems are not commonly practiced.
  • There is limited research and guidance to support analysis, design, and evaluation of human-AI teams to ensure resilient performance in challenging conditions at the boundaries of an AI system’s capabilities.

Research Needs

The committee recommends addressing the following research objective for improved HSI practice relevant to human-AI teaming.


Research Objective 10-1: Human-AI Team Design and Testing Process.

There is a need to develop and evaluate design/engineering methods for effective human-AI teaming. There is a need to develop and test methods for analysis, design, and evaluation of human-AI team performance under conditions that are at or beyond the competency boundary of the AI system(s).

REQUIREMENTS FOR HUMAN-AI TEAM DESIGN

The development of high-quality system requirements includes specifying high-level goals and capabilities for a desired system, and typically includes assigning responsibilities to various agents (human or computer-based) for completing these goals (MITRE, 2014). Optimally, requirements should be understandable, concise, unambiguous, comprehensive, and complete (Limb, 2006). When a cognitive systems engineering approach is used to augment the development of requirements, such requirements will capture information needs that explicitly support human decisions and cognitive work, for individuals, human-human teams, and possibly human-AI teams (Elm et al., 2008).

The committee finds that the rise of AI has introduced new problems currently not addressed by either traditional or cognitive systems-engineering approaches. Although there exists a substantial body of literature addressing how requirements should and could be developed for military systems, the bulk of this work assumes that the underlying decision-support systems rely upon deterministic algorithms that perform the same way for every use. Thus, in earlier research, while the underlying algorithms may not always yield high performance, they exhibit consistent performance (Turban and Frenzel, 1992), and so it is relatively straightforward to determine whether information requirements are met and under what conditions.

In the committee’s opinion, the increasing use of connectionist or ML-based AI in safety-critical systems, like those in military settings, has brought into acute contrast the inadequacy of traditional systems-engineering and cognitive systems-engineering approaches to address how the development of requirements needs to change. A major current limitation of ML-based AI systems is that their use could affect cognitive work and role allocation, and can produce the need for new functionality due to the use of systems that reason in ways that are unknown to their designers (Order, 2017).

Another major problem with ML-based AI is its inability to handle uncertainty. AI powered by neural networks can work well in very narrow applications, but the algorithms of an autonomous system can struggle to make sense of data that is even slightly different in presentation from the data on which it was originally trained (Cummings, 2021). Such brittleness means humans may need to adjust their cognitive work and unexpectedly take on new functions due to limitations in the underlying AI. In addition, much recent work has revealed how vulnerable ML-based AI systems are to adversarial attacks (Eykholt et al., 2017; Su, Vargas, and Sakurai, 2019). Thus, in addition to managing AI systems that are inherently brittle, humans may also be burdened with monitoring such systems for signs of potential adversarial attacks.
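The brittleness described above suggests one practical mitigation: have the system defer to its human teammate when an input looks unlike anything in its training data. The sketch below uses nearest-neighbor distance as a crude out-of-distribution check; the threshold, toy model, and data are illustrative assumptions, and real systems would use far richer detectors.

```python
# Sketch: a brittle-AI guard that defers to the human teammate when an
# input lies far from the training data. The distance measure, threshold,
# toy model, and data are all illustrative assumptions.

def min_distance(x, train_set):
    """Euclidean distance from x to its nearest training example."""
    return min(
        sum((a - b) ** 2 for a, b in zip(x, t)) ** 0.5 for t in train_set
    )

def classify_or_defer(x, train_set, model, threshold=1.0):
    """Use the model only when x resembles the training data."""
    if min_distance(x, train_set) > threshold:
        return "defer-to-human"
    return model(x)

train = [(0.0, 0.0), (0.1, 0.2), (1.0, 1.0)]            # training inputs
model = lambda x: "class-A" if x[0] + x[1] < 1.0 else "class-B"

in_dist = classify_or_defer((0.05, 0.1), train, model)  # near training data
far_out = classify_or_defer((5.0, 5.0), train, model)   # novel input
```

The design choice here is that "defer" is itself a first-class output, so the human's monitoring burden is triggered by the system rather than left to chance.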

Key Challenges and Research Gaps

The committee finds that an improved ability to determine requirements for human-AI teams, particularly those that involve ML-based AI, is needed.

Research Needs

The committee recommends addressing the following research objective for improved HSI requirements relevant to human-AI teaming.

Research Objective 10-2: Human-AI Team Requirements.

A number of requirements for AI system development will likely change in the presence of machine learning-based AI. Research is needed to address multiple overarching questions. When and where should machine learning-based AI be used as opposed to symbolic AI in systems that support human work? What new functions and tasks are likely to be introduced as a result of incorporating brittle AI into human-AI teams? What is the effect of time pressure on decision making for systems that leverage


different varieties of AI? How could or should acceptable levels of uncertainty be characterized in the requirements process, especially as these levels of uncertainty relate to human decision making? How can competency boundaries of both humans and AI systems be mapped so that degraded and potentially dangerous phases of operational systems can be avoided?

RESEARCH TEAM COMPETENCIES

To bridge the gap in understanding how AI systems could and should influence requirements and the design of systems that support human work, particularly in settings where there is high uncertainty, the committee finds that a new approach is needed for the formation of research teams to address such problems. There is a research gap at the intersections between fields of focus, partially because scientists and researchers often work in “silos” but also due to a lack of formal interdisciplinary programs that train people to be competent in more than one field. To address these issues, the committee believes that research teams looking at fundamental and applied challenges in human-AI team development will need to be multi-disciplinary to address the myriad of problems that span separate fields.

The exact composition of any specific research team will depend on the nature of the research question(s), as Figure 10-1 illustrates for human-AI team development. The committee finds that there are four clusters of desired research competencies: (1) computer science; (2) human factors engineering; (3) sociotechnical science; and (4) systems engineering.

In the committee’s opinion, these competencies represent the broad areas needed to support numerous human-AI team research scenarios. Computer science is among the core because any system that contains any kind of AI will necessarily have computer scientists (or affiliated disciplines) as the creators of the underlying technology. The importance of computer scientists teaming with other researchers, like those in human factors, systems engineering, and sociotechnical disciplines, cannot be overstated. Such multi-disciplinary teams promote an understanding of the broader impacts of the technology and help to make it functional and successful in real-world applications (Dignum, 2019). Table 10-1 displays representative topics within each of the research thrusts that the committee finds may be needed to support human-AI teaming research projects; it is likely that even a single project would benefit from collaboration between individuals in multiple blocks of the table.

Image
FIGURE 10-1 Research team competencies for human-AI teaming.

TABLE 10-1 Representative Multi-Disciplinary Team Competency Topics

Table

Key Challenges and Research Gaps

The committee finds that, to develop AI systems with effective human-AI teaming, a new approach to the formation of research teams is needed, which incorporates skills and approaches from multiple disciplines.

Research Needs

The committee recommends addressing the following research objective for improved human-AI team development.

Research Objective 10-3: Human-AI Team Development Teams.

New multidisciplinary teams and approaches to the development of human-AI teams need to be created. A systems perspective is required to create successful human-AI teams that will be effective in future multi-domain operations, and this will require synergistic work across multiple disciplines that cannot be accomplished through a siloed approach. Exploration and evaluation of mechanisms for achieving successful team collaboration in human-AI development teams are needed.

HSI CONSIDERATIONS FOR HUMAN-AI TEAMS

Biased or brittle AI creates a significant problem for certification efforts. Understanding these biases and limitations is critical for framing the developmental, functional, and support provisions a program must address (MITRE, 2014). Within the DOD, HSI is split into a number of domains: manpower, personnel, training, human factors engineering, safety and occupational health, force protection, and environment. These encompass a number of important developmental considerations and requirements that have traditionally been called ilities.

Relevant to AI systems, three overarching ilities are paramount (Simpkiss, 2009):

  • Usability: “[Usability] means ‘cradle-to-grave’ including operations, support, sustainment, training, and disposal. This includes survivability” (Simpkiss, 2009, p. 4).
  • Operational suitability: “Includes usability, lethality, operability, interoperability, dependability, survivability” (Simpkiss, 2009, p. 4).
  • Sustainability: Includes supportability, interoperability, reliability, availability, maintainability, accessibility, dependability, interchangeability, and survivability.

Other essential ilities include functionality, reliability, supportability, and flexibility, among others (de Weck et al., 2011). In addition to these important considerations, there are also new ilities to consider for human-AI teams. Table 10-2 outlines both how traditional ilities will need to be tailored for human-AI teams and new ilities that need to be considered. In addition to traditional usability concerns that are well-known to the HSI community, there will need to be extra focus on making the limits of AI transparent to users. As noted previously, though there has been a recent increase in research on explainability and interpretability for AI (see Chapter 5), a large part of this research focuses on explainability and interpretability for the developers of AI, with far less focus on the users of AI in operational settings. This is of particular concern to the USAF because time pressure is an attribute of many operational environments and, given the propensity for biased decision making in such settings (Cummings, 2004), in the committee’s judgment it is especially important that AI systems be truly usable and transparent.


TABLE 10-2 HSI Considerations for Human-AI Teams: Traditional and New Ilities

Ility Needs
Traditional
Usability
  • AI operational limitations and responsibility boundaries need to be made transparent to users.
  • In suitable settings, users need the ability to conduct sensitivity analyses to explore a decision space, as well as its limitations.
  • Routine feedback about usability needs to be elicited from users, including after software updates.
Operational Suitability
  • A process for tracking and documenting issues with concept drift as well as operator disuse, misuse, or abuse of AI would be useful.
  • A process needs to be developed that maps any operational dependencies created in the implementation of AI systems, to determine which current procedures could be negatively affected if the AI system is degraded or fails.
Sustainability
  • A process for identifying changes in operations or environmental conditions that affect model outcomes would be useful, including when retraining should occur for ML-based AI systems.
  • An incident repository needs to be created and routinely analyzed for all AI systems, in which users and maintainers can document erroneous, unusual, and unexpected system behaviors.
  • A process for tracking software changes and their possible unintended impacts on operations or human activity would be useful.
New
Auditability
  • Data and resulting models need to be periodically audited to uncover issues with suitability and sustainability, as well as possible issues with bias.
  • Automated tools will be needed to support humans conducting auditing tasks.
Passive Vulnerability
  • Adversarial machine-learning vulnerabilities need to be determined and mitigated.

In the operational suitability category, the biggest need is to address the issue of concept drift, also known as model drift. Concept drift arises when the relationship between input and output data changes over time (Widmer and Kubat, 1996), making the predictions of the system invalid at best, and potentially dangerous at worst. In the DOD, an embedded AI system that relies on an older training set of data as it attempts to study images and find targets in a new and different region will likely experience concept drift. Thus, concept drift is a possible source of dynamic uncertainty that needs to be considered when deciding whether an analysis from one setting may adapt well to a different setting. The committee finds that the DOD does not currently have a system in place to ensure the periodic evaluation of AI systems to ensure drift has not occurred, or to inform the human operator of the level of applicability of an AI system to current problem sets (see Chapter 8).
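One common way to operationalize concept-drift detection is to monitor a model's rolling error rate against a frozen baseline and raise an alarm when it grows disproportionately. The sketch below is a minimal illustration of that idea; the window size, alarm ratio, and simulated accuracy figures are assumptions, not parameters from any fielded system.

```python
# Sketch: flagging possible concept drift by tracking a model's rolling
# error rate, loosely in the spirit of methods that monitor the input/
# output relationship over time (Widmer and Kubat, 1996). Window size,
# alarm ratio, and the simulated accuracies are illustrative assumptions.

from collections import deque

class DriftMonitor:
    def __init__(self, window=50, alarm_ratio=2.0):
        self.errors = deque(maxlen=window)   # 1 = misprediction, 0 = correct
        self.baseline = None                 # error rate frozen after warm-up
        self.alarm_ratio = alarm_ratio

    def update(self, correct):
        self.errors.append(0 if correct else 1)
        rate = sum(self.errors) / len(self.errors)
        if self.baseline is None and len(self.errors) == self.errors.maxlen:
            self.baseline = max(rate, 1e-6)  # freeze baseline after warm-up
        if self.baseline is not None and rate > self.alarm_ratio * self.baseline:
            return "drift-suspected"
        return "ok"

monitor = DriftMonitor(window=20)
# Phase 1: the model is ~90% accurate; phase 2: the world changes (~50%).
status = [monitor.update(i % 10 != 0) for i in range(40)]
status += [monitor.update(i % 2 == 0) for i in range(40)]
```

In use, a "drift-suspected" signal would route the case to the AI maintenance workforce described below for retraining assessment, rather than silently continuing to serve predictions.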

The issue of concept drift also affects the sustainability category, given that the only way to prevent such drift is to ensure that the underlying data are sufficiently representative in any AI model. The USAF clearly recognizes that sustainability, reliability, serviceability, and maintainability are key considerations (Simpkiss, 2009), but it is not clear that the USAF has mapped out the workforce changes needed to adequately address these concerns for AI systems. In the committee’s estimation, as there is for aircraft, there will need to be an AI maintenance workforce whose career involves data curation, continuing model evaluation and applicability assessment, model retraining efforts, and coordination with testing personnel. In the committee’s judgment, the USAF should create an AI maintenance workforce, which, if done correctly, could be the model for both other military branches and commercial entities.

In addition to the changes needed in terms of the more traditional ilities, the committee finds that there is also a need to explicitly consider auditability, which is the need to document and assess the data and models used in developing an AI system, to reveal possible biases and concept drift. Although there have been recent efforts at developing processes to better contextualize the appropriateness of datasets (Gebru et al., 2018) and model performance with a given dataset (Mitchell et al., 2019), there are no known organized efforts for military users. In the committee’s opinion, military AI systems could require a level of auditability that far exceeds commercial systems, due to their use on the battlefield. Auditability could fall under the purview of an AI maintenance workforce, as mentioned above.
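The datasheet and model-card efforts cited above can be approximated in code as a machine-readable audit record stored alongside a fielded model. The sketch below shows one minimal way to structure such a record; the field names and example values are hypothetical, not a standard schema.

```python
# Sketch: a minimal machine-readable audit record for a fielded model,
# in the spirit of datasheets for datasets (Gebru et al., 2018) and model
# cards (Mitchell et al., 2019). Field names and values are hypothetical.

import json
from dataclasses import dataclass, asdict, field

@dataclass
class ModelAuditRecord:
    model_name: str
    training_data: str            # provenance of the training set
    evaluation_context: str       # conditions under which metrics were measured
    known_limitations: list = field(default_factory=list)
    last_drift_check: str = "never"

record = ModelAuditRecord(
    model_name="target-recognition-v3",
    training_data="2021 desert-terrain imagery (region A only)",
    evaluation_context="lab benchmark; not evaluated in theater",
    known_limitations=["untested on forested terrain", "degrades at night"],
)
serialized = json.dumps(asdict(record))   # stored alongside the fielded model
```

Because the record is structured data rather than prose, the automated auditing tools called for in Table 10-2 could query it, for example flagging every fielded model whose `last_drift_check` is stale.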


The last new ility category that will likely need to be explicitly considered by the USAF for AI systems is that of passive vulnerabilities. There is increasing evidence that ML-based AI systems trained on large datasets can be especially vulnerable to forms of passive hacking, in which the environment is changed in small ways to leverage vulnerabilities in the underlying deep-learning algorithms. For example, adversarial ML techniques can deceive facial recognition algorithms using relatively benign glasses (Sharif et al., 2016), and recently a Tesla was tricked into going 85 mph versus 35 mph using a small piece of tape on a sign (O’Neill, 2020). Such scenarios, though predominantly occurring in the civilian domain, have clear implications for military operations, and occur not only in computer vision applications of AI but also in natural language processing (Morris et al., 2020). These results indicate that, to address this new source of vulnerability, the USAF will need to continue to develop new cybersecurity capabilities that will require reskilling of the workforce and advanced training.
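The vulnerability of ML models to small input changes can be illustrated with the simplest possible case: a linear classifier, for which the gradient of the decision score is just the weight vector. The FGSM-style sketch below shows a bounded perturbation flipping a decision; the weights and input are toy values chosen for illustration, not drawn from any real system.

```python
# Sketch: why small perturbations matter. For a linear classifier the
# gradient of the score with respect to the input is the weight vector,
# so an FGSM-style step (move each feature by eps in the sign of the
# gradient) can flip a confident decision. Toy weights and inputs.

def score(w, x):
    """Linear decision score; sign determines the predicted class."""
    return sum(wi * xi for wi, xi in zip(w, x))

def fgsm_perturb(w, x, eps):
    """Shift each feature by eps in the direction that raises the score."""
    return [xi + eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

w = [1.0, -2.0, 0.5]
x = [0.1, 0.3, 0.2]                  # original input: score < 0 (negative class)
adv = fgsm_perturb(w, x, eps=0.2)    # small, bounded change to every feature
```

Deep networks are attacked the same way in principle, except the gradient must be computed by backpropagation and the perturbation can be confined to a physical patch, as in the stop-sign and speed-sign examples above.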

Key Challenges and Research Gaps

The committee finds that the requirements for the development of trained workforces and methods for detecting problems and auditing AI systems need to be defined.

Research Needs

The committee recommends addressing the following two research objectives to develop an understanding of manpower needs to support prospective human-AI teams.

Research Objective 10-4: AI System Lifecycle Testing and Auditability.

The required workforce skillsets, tools, methodologies, and strategies for AI maintenance teams need to be determined. There is also a need to define methods for AI system life-cycle testing and auditing to determine the validity and suitability of the AI system for current use conditions. Determining the enabling processes, technologies, and systems that need to be incorporated into fielded AI systems to support the work of AI maintenance teams is necessary.


Research Objective 10-5: AI Cyber Vulnerabilities.

The necessary workforce skillsets, tools, practices, and strategies need to be defined for detecting and ameliorating AI cyber vulnerabilities and for detecting and responding to cyber attacks on human-AI teams.

TESTING, EVALUATION, VERIFICATION, AND VALIDATION OF HUMAN-AI TEAMS

Because of the nascent nature of embedded AI in safety-critical systems, testing, evaluation, verification, and validation (TEVV) has been recognized as a potential Achilles’ heel for the DOD (Flournoy, Haines, and Chefitz, 2020; Topcu et al., 2020). A recent report underscored the considerable organizational issues surrounding TEVV for defense systems and pointed out the policies and actions that the DOD is advised to implement in the near- and far-term to address current inadequacies (Flournoy, Haines, and Chefitz, 2020). While that effort outlined the many high-level issues associated with AI TEVV, this section will detail more nuanced areas of TEVV inquiry with a focus on needed areas of research. These issues are also germane to the training of human-AI teams (see Chapter 9); however, the committee emphasizes that training can never be a substitute for proper design and testing of the AI system.

In the committee’s judgment, the primary reason that TEVV for human-AI teams needs significant attention is the concern with how such systems cope with known and unknown uncertainty. There are three primary sources of uncertainty in any human-AI team, as illustrated in Figure 10-2. As is familiar to the HSI community, human behavior for actors both internal and external to a system can be widely variable and can carry significant uncertainty. For the military, the environment is also a major contributor to operational uncertainty, often referred to as the fog of war. What is new in human-AI teams is the need to account for the variability (i.e., blind spots) in the embedded AI, and how those blind spots could lead to problems in human performance during the operation of human-AI teams (Cummings, 2019).

Image
FIGURE 10-2 Sources of uncertainty in human-AI teams.

Previous technological interventions (e.g., automation, decision-support tools, etc.) were meant to reduce uncertainty but, with the embedding of AI (particularly ML-based AI), there is now a new axis of uncertainty to be considered: that of AI blind spots. As discussed previously, AI can be brittle and fail in unexpected ways. One recent example is the interpretation of the moon as a stoplight by a Tesla vehicle (Levin, 2021). Although such a mistake seems relatively benign, there have also been several high-profile incidents in which a Tesla crashed broadside into a semi-trailer or hit a barrier head on, killing the driver; the combination of significant AI blind spots plus human inattention may be deadly (NTSB, 2020).

It is generally recognized that significantly more work is needed in the area of assured autonomy, in which autonomy safely performs within known and projected limitations (Topcu et al., 2020). Assured autonomy requires significant advances in AI validation. In the committee’s judgment, to reach acceptable assurance levels, the DOD needs to adapt its testing practices to address the AI blind-spot issues, but there has been little tangible progress. The DOD’s current approach to testing typically includes developmental assessments at the earlier stages of a technology’s development, followed by operational testing as system development matures. The committee finds that, although this approach is reasonable for deterministic systems, it will not be sustainable for systems with embedded AI. The constant updating of software code that is a necessary byproduct of modern software-development processes is one major reason that the DOD needs to adopt new testing practices. Seemingly small changes in software can sometimes lead to unexpected outcomes. Without purposeful testing, particularly for software that can have a derivative effect on human performance, the stage will be set for potential latent system failures. Furthermore, because software is normally updated continually throughout the lifecycle of a system, it will also be necessary to customize testing to catch the emergence of problems in a system with embedded AI. It is not ideal to rely on system users to discover issues in actual operations, and it is particularly problematic in safety-critical operations such as multi-domain operations (MDO). There is a need for user testing prior to issuing each software update, notably in instances when the update will impact how the user interacts with the system (e.g., changes to the information displayed or the behavior of the system).
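The pre-update user testing recommended above can be partially automated with a behavioral regression check: run the fielded and candidate versions over a frozen scenario suite and flag every behavior change for human review before the update ships. The sketch below illustrates the idea with stand-in models; the scenario values, thresholds, and labels are assumptions for illustration.

```python
# Sketch: a pre-release behavioral regression check comparing a candidate
# model against the currently fielded version on a frozen scenario suite.
# The models, scenarios, and thresholds are illustrative stand-ins.

def behavior_diff(old_model, new_model, scenarios):
    """Return the scenarios where the update changes system behavior."""
    changed = []
    for s in scenarios:
        before, after = old_model(s), new_model(s)
        if before != after:
            changed.append((s, before, after))
    return changed

scenarios = [0.2, 0.5, 0.8, 1.1]                          # frozen test inputs
old_model = lambda x: "alert" if x > 1.0 else "nominal"   # fielded version
new_model = lambda x: "alert" if x > 0.7 else "nominal"   # candidate update

diffs = behavior_diff(old_model, new_model, scenarios)
# Every entry in diffs is a behavior change a human should review before release.
```

Here the updated threshold silently changes what the user sees for mid-range inputs; a check like this surfaces exactly those cases so human-facing changes are reviewed rather than discovered in operations.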

In addition, users of the system will inevitably discover issues during actual operations, regardless of the testing or development approaches. The question is not whether these surprises will occur, because generally they will. The committee’s goal is to improve DOD verification practices to reduce the occurrence of surprises, by incorporating testing prior to the introduction of any software change. These tests could explore the potential effects of software changes on the ways people must interact with the system. Considerations include assessing (1) how easy it will be for humans (especially users) to anticipate and detect unexpected behavior; and (2) how easy it will be for humans (especially DevOps personnel) to make quick adjustments to the code to mitigate, block, or otherwise make moot the results of the unexpected behavior.

In addition, the committee finds that the DOD's current staged approach to testing does not explicitly account for the need to test AI blind spots, as illustrated in Figure 10-2 and discussed previously. There is a dearth of exploration and knowledge about how the subjective decisions of AI designers could lead to AI blind spots, poor human-AI interaction, and, ultimately, system failure (Cummings and Li, 2021a). As a result of the new sources of uncertainty that require rethinking TEVV, particularly in terms of human work, new testbeds will be needed that allow not only for investigation of such uncertainties, but also for use by the variety of research areas outlined in Figure 10-2.

Key Challenges and Research Gaps

The committee finds that methods, processes, and systems for testing, evaluation, verification, and validation of AI systems across their lifecycles are needed, particularly with respect to AI blind spots and edge cases, as well as managing the potential for drift over time.

Research Needs

The committee recommends addressing the following research objective to improve testing and evaluation of human-AI teams.

Research Objective 10-6: Testing of Evolving AI Systems.

Effective methods need to be determined for testing AI systems to identify AI blind spots (i.e., situations for which the system is not robust). How could or should test cases be developed so that edge and corner cases are identified, particularly where humans could be affected by brittle AI? How can humans certify machine learning-based and probabilistic AI software in real-world scenarios? Certification includes not just understanding technical capabilities but also understanding how to determine trust for systems that may not always behave in a repeatable manner. The National Science Foundation recently published an in-depth study on assured autonomy, so there is a potentially important collaboration between this organization and the AFRL (Topcu et al., 2020). Given that changes in both software and environmental conditions occur almost continually (due to the potential for concept drift) in AI systems, how to identify, measure, and mitigate concept drift is still very much an open research question. Living labs involving disaster management might form suitable surrogates for research on multi-domain operational human-AI teams.
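One concrete way to generate candidate edge cases of the kind this objective calls for is to perturb a nominal input in small steps and record where the system's decision flips, so that humans can review the boundary behavior. The classifier, threshold, and parameter names below are purely hypothetical stand-ins for an AI component under test.

```python
# A minimal sketch of boundary probing for brittle behavior: scan a
# neighborhood of a nominal input and record decision changes.

def classify(speed_mps):
    """Toy stand-in: flags a track as a threat above a hard threshold."""
    return "threat" if speed_mps >= 250.0 else "benign"

def find_flip_points(nominal, span, step):
    """Scan [nominal - span, nominal + span] for decision changes."""
    flips = []
    prev = classify(nominal - span)
    x = nominal - span + step
    while x <= nominal + span:
        cur = classify(x)
        if cur != prev:
            flips.append(round(x, 3))  # candidate edge case for human review
        prev = cur
        x += step
    return flips

print(find_flip_points(nominal=248.0, span=5.0, step=1.0))  # [250.0]
```

Real blind-spot discovery would explore many dimensions at once (and adversarially), but even this one-dimensional scan illustrates how test cases can be generated rather than hand-enumerated.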

HUMAN-AI TEAM RESEARCH TESTBEDS

To address the numerous complexities inherent in human-AI research, the committee finds that there needs to be substantially improved testbed availability, above and beyond what the USAF currently has. One of the core questions at the heart of human-AI experimentation is the role of simulation versus real-world testing (Davis and Marcus, 2016). As seen in Figure 10-3, simulation is generally thought to be the appropriate testbed for basic research, while a shift toward real-world testing (or approximations of such) is needed for more applied research. While these observations are also valid for human-AI research, there is a clear need to consider the role of uncertainty, as previously outlined.

Because uncertainty is a possible "unknown unknown" that can arise from the design of AI systems, the environment, humans, and the interplay of these factors (Figure 10-2), the committee believes that much greater emphasis is needed on studying this influence in human-AI research. To this end, Figure 10-3 illustrates that, while all human-AI testing can occur in simulations, testbeds that cannot incorporate elements of real-world imperfections will necessarily miss a critical element of research.

Regardless of whether the testbeds are in simulations or use real-world settings, they need to be designed to support the multi-disciplinary efforts outlined in Figure 10-1. This means it will likely be necessary for testbeds to support different communities of users (e.g., researchers who code as well as researchers studying people). The committee believes that, ideally, testbeds would be modular so that, for example, different datasets, algorithms, or decision-support systems could be substituted as needed, without requiring major system overhauls. In addition, given the realistic constraints of a post-Covid-19 world, testbeds would ideally be accessible both in person, for those researchers who need physical access to the testbed, and remotely.

FIGURE 10-3 The relationship of human-AI testing fidelity to the nature of research questions.

Key Challenges and Research Gaps

The committee finds that testbeds for human-AI teaming are needed that can support relevant disciplinary research, challenging scenarios, and both pre- and post-deployment testing requirements.

Research Needs

The committee recommends addressing the following research objective for developing testbeds to support human-AI teamwork research-and-development activities.


Research Objective 10-7: Human-AI Team Testbeds.

Given the changes that AI is bringing and will continue to bring to both the design of systems and their application, flexible testbeds for evaluating human-AI teams are needed. It would be advantageous to use these testbeds to examine relevant research questions included throughout this report. The testbeds need to allow for multi-disciplinary interactions and inquiry and include enough real-world data to allow for investigation of the role of uncertainty as it relates to AI blind spots and drift. It would also be useful for testbeds to accommodate the need for routine post-deployment testing, including human-in-the-loop testing, whenever meaningful software changes (which need to be defined) are made, or whenever environmental conditions change, which could lead to potential problems.

HUMAN-AI TEAM MEASURES AND METRICS

The establishment of effective evaluation measures and metrics is an important element in assessing human-AI teams (see Chapter 2). Measures typically refer to the measurement scales used for evaluation, and metrics typically refer to the specific levels on the measurement scale that serve as reference points for evaluative judgments (Hopkins et al., 2018). Multiple types of measures are relevant for evaluating human-AI teams, including individual cognitive process measures, teamwork measures, and outcome performance measures. Although some measures are highly mature, others are only emerging and in need of further study.

Cognitive process measures such as workload and situation awareness have been extensively studied and validated in the context of human-automation interaction (e.g., Endsley and Kaber, 1999) and continue to be pertinent for evaluating the cognitive impact of human-AI teaming on human team members (Chen et al., 2018; Mercado et al., 2016) (for reviews of situation awareness and workload measures, see Endsley, 2020b, 2021a; Kramer, 2020; Young et al., 2015; Zhang et al., 2020).

Because AI systems exhibit complex behavior and, in some cases, provide explanations for their actions, new measures are being developed that are particularly applicable to human-AI teaming. One of the most prominent new measures relates to trust in the AI system. A variety of rating-scale measures of trust have been developed that vary in the number and type of items included, as well as in the rating scale used (see Hoffman et al., 2018 for a review of representative measures of trust).

There is increasing interest in measuring people's mental models of AI systems to assess their understanding of those systems. A variety of approaches have been developed to assess mental models of AI systems, including think-aloud protocols, question answering/structured interviews, self-explanation tasks, and prediction tasks that ask people to predict what an AI system will do in various situations (see Hoffman et al., 2018 for a review of representative measures). With the new emphasis on creating AI systems that are explainable, interest has also emerged in developing measures of explainability. Hoffman et al. (2018) present a questionnaire that can be used to measure people's assessment of explanation goodness, which is defined as the degree to which they feel they understand the AI system or procedures being explained. Sanneman and Shah (2020) proposed a measure of explanation quality that is based on the situation awareness global assessment technique (Endsley, 1995a).

Measures of teamwork processes that have been used in all-human teams have been adapted for measuring teamwork in human-AI teams. These teamwork processes include communication, coordination, team situation assessment, team trust, and team resilience. Though scales exist for self-assessment and observer-assessment of team processes (Entin and Entin, 2001), there is a growing trend toward measuring teamwork in an unobtrusive manner, in real or near-real time (Cooke and Gorman, 2009; Gorman, Cooke, and Winner, 2006; Huang et al., 2020). These measures rely heavily on communication data, which is readily available from most teams. However, communication flow patterns are used more than communication content. McNeese et al. (2018) found that the communication patterns displayed by the AI system were less proactive than those of human teammates and, over time, the human-AI team's coordination suffered, as even the humans became less proactive in their communications. Physiological measures of teamwork such as neural synchrony have also been used (Stevens, Galloway, and Lamb, 2014); however, these present a challenge in terms of identifying an analogous signal from the AI counterpart. Though a challenge, the prospect of collecting telemetry from an AI system that is akin to human physiological signals is, in the committee's judgment, more promising for measuring AI teamwork than survey data.
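As an illustration of the kind of unobtrusive, flow-based measure discussed above, the sketch below scores each team member's proactivity from a message log, in the spirit of the communication-pattern analyses cited. The log format and the "push" (unprompted) versus "pull" (reply) labels are assumptions for illustration, not a validated instrument.

```python
# Hedged sketch of a flow-based teamwork measure: for each sender in a
# message log, compute the fraction of messages that were unprompted.

from collections import Counter

def proactivity(log):
    """Fraction of each sender's messages that are unprompted ('push')."""
    sent = Counter(sender for sender, kind in log)
    pushed = Counter(sender for sender, kind in log if kind == "push")
    return {s: pushed[s] / sent[s] for s in sent}

log = [("human1", "push"), ("ai", "pull"), ("human2", "push"),
       ("ai", "pull"), ("human1", "pull"), ("ai", "push")]
scores = proactivity(log)
print(scores["ai"])  # 1 push out of 3 messages sent
```

A declining proactivity score for a teammate over time would be one observable signature of the coordination decay reported by McNeese et al. (2018).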

Another important set of measures for evaluating human-AI teams pertains to the objective performance of the human-AI team on specific tasks. Traditionally, outcome measures have included quality of performance and completion time. It is possible that human-AI team performance may be objectively worse than the performance of the human(s) working without AI support (e.g., Layton, Smith, and McCoy, 1994; see Chapter 8 for additional discussion). Figure 10-4 shows pertinent measures for evaluating human-AI teams, including overall team performance, team knowledge structures, team processes, team efficiency measures, and team sustainability measures.

The ability of the human-AI team to perform effectively in unanticipated conditions at or beyond the boundaries of the AI system is an important concern in measuring human-AI team outcome performance. This is often measured in terms of out-of-the-loop recovery time (Endsley, 2017; Onnasch et al., 2014). There are also ongoing efforts to create measures for assessing resilience (Hoffman and Hancock, 2017; Neville, Redhead, and Pires, 2021). More research is needed to develop practical measures and metrics that can be used to assess human-AI team resilience as part of performance-evaluation efforts.
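An out-of-the-loop recovery-time measure can be operationalized roughly as the elapsed time between an automation failure event and the moment performance returns to a pre-failure criterion. The sampling format, criterion, and numbers below are illustrative assumptions only.

```python
# Sketch of an out-of-the-loop recovery-time computation over a timeline of
# (time, performance) samples.

def recovery_time(samples, failure_t, criterion):
    """Return the first time at or after failure_t at which performance
    recovers to the criterion, minus failure_t; None if never recovered."""
    for t, perf in samples:
        if t >= failure_t and perf >= criterion:
            return t - failure_t
    return None  # never recovered within the observation window

samples = [(0, 0.95), (5, 0.94), (10, 0.40), (15, 0.60), (20, 0.92), (25, 0.96)]
print(recovery_time(samples, failure_t=10, criterion=0.90))  # 10
```

Comparing recovery times across AI designs (or across levels of operator engagement) is one way such an outcome measure feeds performance evaluation.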

FIGURE 10-4 Human-AI team metrics.

Key Challenges and Research Gaps

The committee finds four key gaps related to metrics for human-AI teaming.

  • Although emerging measures of trust, mental models, and explanation quality are important additions for evaluating people's understanding and level of confidence in AI systems, there is a growing proliferation of alternative methods for measuring each of these constructs. The reliability and validity of these alternative methods need to be determined.
  • The potential for AI systems to bias human performance, resulting in negative impacts, is an important concern. Useful methods for assessing such biases are needed.
  • Although there are ongoing efforts to develop measures of the resilience of the human-AI team, these efforts remain in the early stages and more research is needed.
  • In real-time measurement of human-AI teams, there is a need to understand the best sources of signals from the AI system, and to be able to interpret human-AI interaction data in terms of team state.

Research Needs

The committee recommends addressing the following research objective for improved metrics relevant to human-AI teaming.

Research Objective 10-8: Additional Metrics for Human-AI Teaming.

There is a need for more research to establish the reliability and validity of alternative methods for measuring trust, mental models, and explanation quality. Ideally, the research community would converge on a common set of methods for measuring these parameters, to assist comparison of results across studies. This research also needs to develop (1) methods to measure the potential bias that AI agents can have on human decision-making processes and overall quality of performance; and (2) methods to measure human-AI team resilience in the face of unanticipated conditions that require adaptation.


AGILE SOFTWARE DEVELOPMENT AND HSI

Agile software processes first emerged more than 20 years ago, with the goal of developing quality software more rapidly, to increase responsiveness to dynamically changing user needs (Dybå and Dingsøyr, 2008). Typically, agile software development occurs through multiple short sprints (each on the order of weeks), with the idea of delivering usable software early, followed by the delivery of incremental improvements generated through subsequent sprints. More recently, the trend toward agile software has been extended into software-development operations (DevOps) for more seamless, continuous delivery of quality software (Allspaw and Hammond, 2009; Ebert et al., 2016). DevOps represents a newer paradigm of associated tools and processes intended to blur the line between software development and operations. The goal is to shorten the cycle time for delivery of software and upgrades as well as enable software to be easily modified during operations (not just prior to deployment).

Agile software-development processes and DevOps have been widely embraced by industry and more recently by government and DOD programs (Sebok, Walters, and Plott, 2017). DOD Instruction 5000.02 lays out policies and procedures for implementing an adaptive acquisition framework to improve acquisition process effectiveness (DOD, 2020). It specifically calls for the use of agile software development, security operations, and lean practices to facilitate rapid and iterative delivery of software capability to the warfighter.

Adopting agile software approaches has many important benefits. In particular, it results in more rapid delivery to users than has been possible with traditional waterfall-engineering and acquisition approaches. Equally important, the agile software approach permits the software-development process to be more responsive to changing user needs (or changing understanding of user needs). Unlike traditional approaches, requirements need not be fully defined at the start of the program but can emerge over time in concert with the user community. These are critical attributes of effective software development that were specifically called for in the National Research Council Human-System Integration in the System Development Process: A New Look report (2007) on HSI. Further, agile software approaches make auditability of the software easier.

The committee finds that, while agile approaches to software development have clear benefits, there are also significant challenges that will be particularly relevant to the development of AI systems that can work effectively as teammates with humans. There is growing recognition that the focus on delivering software quickly can incur technical debt. Technical debt refers to design or implementation choices that may be expedient in the short term but may make future changes more costly or impossible (Boodraj, 2020; Kruchten, 2016). A literature review examining causes and consequences of technical debt in agile software development found that, for architecture and design issues, "a lack of understanding of the system being built (requirements), and inefficient test coverage" were among the most-cited causes of technical debt (Behutiye et al., 2017, p. 154). The committee acknowledges that technical debt can arise with any software-development approach, including waterfall methods. The point of raising a concern with respect to technical debt in the case of agile software relates to the specific types of technical debt documented in the literature, most notably lack of understanding of system requirements and inadequate test coverage. These are precisely the concerns that were expressed in presentations to the committee.

Similar conclusions were drawn from a review of agile development processes used for safety- and mission-critical applications (Sebok, Walters, and Plott, 2017). Among the challenges identified in the use of agile methods was the limited opportunity to develop a consistent, coherent vision for the overall system. These researchers recommended including a "sprint 0" that involves more complete analysis of the demands of the work domain and the needs of the user, as well as development of an integrated design concept to provide a larger, coherent structure to inform later sprints. They also emphasized the need for more integrated verification and validation processes for the larger system, as well as more comprehensive documentation.

These findings highlight that, if not conducted in a thoughtful manner, agile software processes may limit the ability to produce consistent, innovative software solutions that depend on a comprehensive understanding of mission and performance requirements. By emphasizing rapid sprints without the benefit of a big-picture understanding of the larger problem space, there is a real risk of missing critical mission requirements or opportunities to greatly improve performance. The possibility of missing mission-critical requirements is a particular concern in MDO, in which there are myriad sources, complexities, constraints, and objectives to be satisfied, and where the evolving concept of operations can result in system deficiencies. The committee acknowledges that completely bug-free and surprise-free software is an unattainable goal, and that missing requirements and failures to anticipate all edge cases can occur with any software-development approach, not just agile. Our point is the need to develop more effective and efficient approaches to identify critical system requirements early in the development process. The objective is to impose enough upfront, high-level analysis to reduce the chance of missing important requirements early in the design process that may be much harder, and more expensive, to accommodate later. This is particularly important in complex systems such as MDO, in which there are various roles, each with interrelated functionality and information needs.

In recognition of these concerns, the HFES/ANSI 400-2021 standard (Human Readiness Level Scale in the System Development Process) provides guidance for more effectively incorporating HSI approaches into the agile development process that is highly relevant to AI and MDO (Human Factors and Ergonomics Society, 2021). This guidance includes the following:

  • Agile software should only be applied when “human skills and limitations are known and design guidelines for software systems are established” (p. 28).
  • While, in agile processes, user requirements are typically determined during each sprint for small portions of the system, for complex and safety-critical systems (such as military operations), “more upfront analysis of human performance requirements may be needed” (p. 28).
  • “Cross-domain and cross-position information sharing requirements may need more extensive upfront analysis of user needs” (p. 28), which certainly applies to MDO command and control.
  • “Graphical user interface design standards must be established and applied consistently across software iterations and design teams, enabled by human factors engineering and user experience style guides” (p. 28). This is especially important for multiple-position operations, such as in MDO command and control.
  • Objective and comprehensive testing is required, involving human factors in the development teams, and including both normal and off-normal conditions.

The committee recognizes that the HFES/ANSI recommendations represent an ideal that is not always completely achievable. For example, while it is important to aspire to objective and comprehensive testing, we recognize that there are no known methods that guarantee complete test coverage or guarantee that all problems will be caught. Nevertheless, this report highlights areas in which extra attention is required to ensure that HSI concerns are adequately addressed within an agile development process.

Keypad Challenges and Research Gaps

The committee finds that best-practice HSI methods are currently not incorporated into the agile development process. This can lead to a failure to systematically gather user performance requirements, develop comprehensive innovative solutions that support human performance, and conduct comprehensive evaluations to ensure effective performance across a range of normal and off-normal conditions.

Research Needs

The committee recommends the following research objective to address the incorporation of HSI into agile software development, particularly as it relates to human-AI teaming and MDO.


Research Objective 10-9: Human-Systems Integration in Agile Software Development.

There is a need to develop and validate methods for more effectively integrating human-systems integration (HSI) best practices into the agile software-development process. This may include identifying and building upon success stories in which HSI approaches have been successfully integrated into agile processes, as well as developing and testing new approaches for incorporating HSI activities into agile development processes, as called for in HFES/ANSI 400. HSI standards, tools, and methodologies need to be explicitly incorporated into agile software-development processes for AI and multi-domain operations.


SUMMARY

The development of AI systems that can work effectively with humans depends on meeting a number of new requirements for successful human-AI interaction. A reliance on sound HSI practices is essential, as is improving analyses, metrics, methods, and testing capabilities, to meet new challenges. A focus on testing, evaluation, verification, and validation of AI systems across their lifecycles will be needed, along with AI maintenance organizations that can take on significant upkeep and certification duties. Further, HSI will need to be better integrated into agile software-development processes, to make these processes suitable for addressing the complexity and high-consequence nature of military operations. The committee believes that all these suggestions should be applied to the development of AI systems.

The committee also suggests that the AFRL put into place best practices for AI system development based on existing HSI practice guidelines and current research. These include the following:

  • Adopting DOD HSI practices in development and evaluation;
  • Embracing human readiness levels in rating and communicating the maturity of AI systems;
  • Conducting human-centered, context-of-use research and analyses, prior to and alongside more formal technical performance evaluations;
  • Including a focus on systems engineering of human-AI teams within the USAF HSI program;
  • Establishing an AI maintenance workforce;
  • Establishing an AI TEVV capability that can address human use of AI, and that would feed into existing developmental and operational testing efforts;
  • Examining and assessing the data and models used in developing AI systems to detect possible biases and concept drift;
  • Continuing to monitor performance of the human-AI team after implementation and throughout the lifecycle, to identify any bias or concept drift that may emerge from changes to the environment, the human, or the AI system;
  • Incorporating and analyzing real-time audit logs of system performance failures throughout the lifecycle of an AI system, to identify and correct performance deficiencies; and
  • Assessing the state-of-the-art in agile software-development practices in the DOD and in industry, and developing recommendations for more effective processes for incorporating agile software methods into the DOD HSI and acquisition processes.
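The concept-drift monitoring called for above can be illustrated with a minimal statistical check: compare recent input statistics against a training-time baseline and flag when the shift exceeds a threshold. The mean-shift statistic, threshold, and window size below are illustrative assumptions; production monitors would use stronger tests (e.g., a two-sample Kolmogorov-Smirnov test).

```python
# A minimal post-deployment drift check on a single input feature.

import statistics

def drift_flag(baseline, window, z_threshold=3.0):
    """Flag drift when the window mean is far from the baseline mean,
    scaled by the baseline standard deviation."""
    mu = statistics.mean(baseline)
    sigma = statistics.pstdev(baseline) or 1.0  # guard against zero spread
    z = abs(statistics.mean(window) - mu) / sigma
    return z > z_threshold

baseline = [10.0, 11.0, 9.0, 10.5, 9.5]          # training-time feature values
print(drift_flag(baseline, [10.2, 9.8, 10.1]))   # False: consistent with training
print(drift_flag(baseline, [16.0, 17.0, 15.5]))  # True: distribution has shifted
```

Run continuously against audit logs, such a check gives the maintenance workforce an early trigger for re-testing or retraining rather than waiting for users to discover degraded behavior.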
Next: 11 Conclusions »

Although artificial intelligence (AI) has many potential benefits, it has also been shown to suffer from a number of challenges to successful performance in complex real-world environments such as military operations, including brittleness, perceptual limitations, hidden biases, and the lack of a model of causation important for understanding and predicting future events. These limitations mean that AI will remain inadequate for operating on its own in many complex and novel situations for the foreseeable future, and that AI will need to be carefully managed by humans to achieve its desired utility.

Human-AI Teaming: State-of-the-Art and Research Needs examines the factors that are relevant to the design and implementation of AI systems with respect to human operations. This report provides an overview of the state of research on human-AI teaming to determine gaps and future research priorities, and explores critical human-systems integration issues for achieving optimal performance.
