Academic Integrity
With the rise of AI technology, there is concern that some students may use AI tools, without their instructor's permission, to generate content for their coursework instead of doing the work themselves. This not only goes against the University's Academic Integrity Regulation, but it also undermines the purpose of higher education. For suggestions on how to use AI responsibly as a student, visit the AI & Academic Integrity page of this guide.
Bias and Discrimination
AI systems are trained on huge amounts of data, and that data may contain biases and prejudices that influence the systems' outputs. As a result, AI may unintentionally perpetuate harmful biases in the content it generates. This is especially concerning as AI is integrated into more areas of our lives, including healthcare, hiring decisions, and the criminal justice system.
Misinformation
AI tools have been shown to "hallucinate," that is, to generate false or fabricated information. Be sure to evaluate all AI-generated content and compare it against reliable sources. Remember that AI relies on its training data, which can be inaccurate, biased, or unreliable, to generate responses.
Deepfakes
Deepfakes are AI-generated images, audio, or video that manipulate existing media or fabricate new media. Often, deepfakes manipulate the face and/or voice of an individual to make it appear that they are saying or doing something that never took place. The main concerns surrounding deepfakes include the violation of individual rights, the spread of disinformation, identity theft and fraud, and copyright infringement. Learn more about the harms caused by deepfakes from the Canadian Security Intelligence Service.
Ownership and Intellectual Property Issues
If you use AI to create something, who owns it? Determining intellectual property rights has become complicated with the rise of AI tools. AI-generated content may also include elements from existing works or copyrighted material, which raises challenges around fair dealing and copyright infringement. If you have any questions about copyright, visit our Copyright Guide or contact copyright@smu.ca.
Lack of Regulation
As AI advances rapidly, there is a growing need for regulation. Without proper safeguards in place, we risk AI technologies being developed or used in ways that are not responsible or ethical. Currently, there is no comprehensive legislation in Canada specific to AI. However, as part of Bill C-27, the Canadian government has proposed new legislation, the Artificial Intelligence and Data Act (AIDA), to ensure the responsible development and use of AI systems. To learn more about AIDA, visit the link below.
Sources:
Ferrara, E. (2023). Fairness and bias in artificial intelligence: A brief survey of sources, impacts, and mitigation strategies. arXiv. https://doi.org/10.48550/arxiv.2304.07683
Innovation, Science and Economic Development Canada. (2023). Artificial Intelligence and Data Act. https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act
UNESCO. (2023). ChatGPT and artificial intelligence in higher education. United Nations Educational, Scientific and Cultural Organization. https://unesdoc.unesco.org/ark:/48223/pf0000385146
UNESCO. (2023). Guidance for generative AI in education and research. United Nations Educational, Scientific and Cultural Organization. https://unesdoc.unesco.org/ark:/48223/pf0000386693
Check out this video from Exposure Labs that explores AI algorithms and discrimination:
Search these databases for recent cases and legal research on AI issues: