In the flurry of articles about possible uses of newly powerful AI tools, two interpretations tend to dominate: either it’s great and will make everything better and easier, or it’s the end of the world as we know it and will ruin education and careers!
The truth is likely somewhere in between, but while the possible applications and shortfalls are being explored, librarians have been having conversations about privacy. You may not realize it in your day-to-day use of the library, but librarians work to make sure you’re not being surveilled or tracked in what you do here. Particularly after 9/11 and the Patriot Act, the American Library Association has consciously advocated for patron privacy (Witt, 2017), so when libraries implement any technology, your privacy is something we always take into consideration - and that is one area where tools like ChatGPT raise some concerns.
As an academic library, Lemieux Library is also looking at ChatGPT through the lens of academic research ethics: if you are doing any research involving humans, your methodology and protocols have to go through the Institutional Review Board (IRB). This is the mechanism for protecting human research subjects, and it helps ensure that people involved in research are not exposed to potential harm. Since OpenAI is openly asking the public to use ChatGPT in order to test and improve the product, that use could be considered akin to research - OpenAI even says in its YouTube introduction to GPT-4, “it’s really important to us that as many people as possible participate so that we can learn more about how it can be helpful to everyone” (OpenAI, 2023). Unlike academic research, though, there is no transparency or informed consent - you give up personal data with no clarity as to how it will be used, shared, or acted upon. In the European Union, legal protections already extend to the data people share on the internet: the General Data Protection Regulation (GDPR). Concerns that ChatGPT is not in compliance with these existing data protection rules have already resulted in the tool being temporarily banned in Italy (Piscioneri, 2023). While the legal protections for your data and privacy catch up to new applications of AI technology, I think it’s fair to recommend a little caution with how much of your personal information and user behavior you share with new AI tools like ChatGPT - despite their calls for “as many people as possible” (OpenAI, 2023) to help test their product!
Witt, S. (2017). The evolution of privacy within the American Library Association, 1906–2002. Library Trends, 65(4), 639-657. https://doi.org/10.1353/lib.2017.0022
OpenAI. (2023). Introducing GPT-4 [Video]. YouTube. https://www.youtube.com/watch?time_continue=184&v=--khbXchTeE
Piscioneri, F. (2023, April 18). Italy to allow ChatGPT to return if OpenAI takes 'useful steps'. Reuters. https://www.reuters.com/technology/italys-data-watchdog-chatgpt-can-resume-april-30-if-openai-takes-useful-steps-2023-04-18/