On November 22, 2024, Engineers and Geoscientists British Columbia (EGBC) released Use of Artificial Intelligence (AI) in Professional Practice, a practice advisory for EGBC registrants that provides guidelines on the use of artificial intelligence (AI) in professional practice for engineering and geoscience professionals in the Province of British Columbia, Canada (the Practice Advisory). The Practice Advisory includes guidelines on the use of AI in professional practice, and on the risk factors and considerations associated with that use, that may be relevant to design professionals irrespective of EGBC registration status.
The Practice Advisory discusses certain risk factors related to the use of AI in professional practice and suggests questions to ask when considering those risk factors. One risk factor the Practice Advisory identifies for evaluating the output of an AI system is bias, which it categorizes as either computational/statistical bias, caused by systematic errors that could influence the output (such as non-representative samples used in AI training sets), or human-cognitive bias, caused by humans trusting the output of AI over their own judgment when there is no basis to trust that output.
Trustworthiness is another risk factor that the Practice Advisory indicates should be considered when evaluating the results of an AI system. Trustworthiness is discussed within the Practice Advisory in the context of the National Institute of Standards and Technology (NIST) AI Risk Management Framework definition of trustworthy AI systems, which are “those without harmful bias and that have characteristics that are valid, reliable, safe, secure, resilient, accountable, transparent, explainable, interpretable, privacy-enhanced, and fair,” and in the context of mitigating or managing risks in an AI system to a level that is acceptable to the interested parties.
Transparency, explainability, and interpretability are noted in the Practice Advisory as characteristics that help humans understand AI systems, which is of import to engineering and geoscience professionals “[w]here the work engages the safety, health, and welfare of the public.” The Practice Advisory also notes that, according to the NIST Trustworthy AI: Managing the Risks of Artificial Intelligence, “transparency reflects the extent to which information about an AI system and its outputs is available to individuals interacting with such a system,” “[e]xplainability refers to a representation of the mechanisms underlying [an] AI system[’s] operation,” and “[i]nterpretability refers to the meaning of [an] AI system[’s] output in the context of [its] designed functional purposes.”
A lack of repeatability, meaning an inability to replicate the results produced by an AI system (for example, when the same input produces different output), is listed in the Practice Advisory as a risk factor that engineering and geoscience professionals should consider when evaluating an AI system. Confidentiality is listed as another risk factor to consider when evaluating an AI system for use in professional practice: third parties may control an AI system, and anything inputted into the system, including confidential information, may become accessible to those third parties and be used to train AI models.
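By way of illustration only, the following minimal Python sketch shows one way a professional might probe the repeatability concern: submit the identical input to an AI system several times and flag any divergence in the outputs. The `query_ai_system` function is a hypothetical stand-in for whatever AI system or API is under evaluation; it is not part of the Practice Advisory or any real library.

```python
# Minimal sketch of a repeatability check. `query_ai_system` is a
# hypothetical placeholder for the AI system being evaluated.

def query_ai_system(prompt: str) -> str:
    """Placeholder for a call to the AI system under evaluation."""
    raise NotImplementedError("Replace with the actual AI system call.")

def check_repeatability(prompt: str, runs: int = 5) -> bool:
    """Return True only if the system produced identical output on every run."""
    outputs = [query_ai_system(prompt) for _ in range(runs)]
    distinct = set(outputs)
    if len(distinct) > 1:
        print(f"Non-repeatable: {len(distinct)} distinct outputs across {runs} runs.")
        return False
    print(f"Repeatable: identical output across all {runs} runs.")
    return True
```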
The Practice Advisory suggests AI-specific steps to add to the “Documented Checks” process outlined in Section 3.3.3 of the EGBC Guide to the Standard for Documented Checks of Engineering and Geoscience Work, such as noting the version of the AI system used, developing test cases to pass through the AI system, noting the resulting output, recording input and output data for validation and verification purposes, and considering the need to validate every output of a dynamic AI system, where “output may change based on new learnings, and thus the results may vary from use-to-use and over time.”
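A minimal Python sketch of how those AI-specific additions to a documented check might be recorded appears below. The record fields mirror the steps the Practice Advisory suggests (AI system version, test cases, and retained input/output data), while the function name, field names, and JSON log format are illustrative assumptions, not requirements of the Practice Advisory.

```python
import datetime
import json

def record_documented_check(ai_system: str, version: str,
                            test_input: str, test_output: str,
                            log_path: str = "ai_documented_checks.jsonl") -> None:
    """Append one input/output pair, with system name and version, to a local log.

    Hypothetical helper for illustration; the fields mirror the Practice
    Advisory's suggested steps: note the AI system version, pass test cases
    through the system, and retain input/output data for validation.
    """
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "ai_system": ai_system,
        "version": version,
        "input": test_input,
        "output": test_output,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example: log one test case. Because a dynamic AI system's output may change
# over time, re-running and re-logging the same test cases allows results to
# be compared from use to use.
record_documented_check(
    ai_system="ExampleModel",    # hypothetical system name
    version="2025-01-preview",   # hypothetical version identifier
    test_input="Known test case with an expected answer",
    test_output="Output returned by the AI system",
)
```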
According to the Practice Advisory, engineering and geoscience professionals “must assess, understand, and manage or mitigate” harms related to the use of AI systems in their work, as they have an ethical duty “to hold paramount the safety, health, and welfare of the public.” The Practice Advisory, referencing Appendix B: How AI Risks Differ from Traditional Software Risks of the NIST AI Risk Management Framework, notes that the risks related to AI differ from the risks related to traditional software. Accordingly, engineering and geoscience professionals “should understand and remain familiar with how” an AI system functions if they intend to use one in their work, as they are professionally responsible for their work, including work generated by the AI system or incorporating AI output, and for any AI hallucinations included in that output.
The Practice Advisory notes that engineering and geoscience professionals who delegate professional activities to subordinates remain professionally responsible for the delegated work, including any delegated work performed using an AI system or tool, and so they “must apply the same standard of care as if they were using the AI-based system or tool themselves.”
Design professionals may want to consider these guidelines as they incorporate AI systems into their professional practice. However, design professionals with questions regarding the rules or regulations that may govern their use of AI systems in professional practice should consider retaining an attorney specializing in artificial intelligence issues related to design professional practice.