Have you seen this new artificial-intelligence creation called ChatGPT? It is the latest tech advancement being talked and written about everywhere. Some folks predict it will be the end of education as we know it, erase the boundary between truth and fiction, and change the way information is delivered to us.
ChatGPT is an AI product in which a computer program responds to questions put to it. So, for example, you could type into a ChatGPT chatbox a question like, “How would Bill Clinton describe climate change to a six-year-old?” and the AI on the other side of the chatbox would produce a paragraph, in the language typically used by Bill Clinton and understandable by a six-year-old, explaining climate change. It is pretty remarkable.
But it can also be used to answer an essay question posed by a college professor on a final exam. Or it could be used to write a 600-word column for the Sunday edition of the Waterbury Republican-American on legal issues.
(It’s not being used for that purpose today, except for this part right here, wherein I asked ChatGPT to “Explain ChatGPT in 50 words to a Sunday morning newspaper reader in columnist format.”) This is what I got in return:
“Have you heard of ChatGPT? It’s a state-of-the-art chatbot developed by OpenAI that can generate human-like responses to prompts given to it. ChatGPT uses machine learning algorithms to analyze and understand language patterns, allowing it to communicate in a natural way with users. It’s an exciting development in the field of artificial intelligence and has the potential to revolutionize how we interact with technology.”
Clearly, this is quite an advancement in information production for the masses. But as technology evolves, it can rub up against legal limitations.
More and more businesses are embracing technology to gain efficiency and offload mundane tasks. One of the jobs AI is being asked to do is post job openings and screen job applications. But what if the AI screening tools systematically screen out older people, disabled people, or women? Certainly the decisions, if left strictly to the screening tools without any input from human supervisors, cannot be said to be intentionally discriminatory if they simply represent a data-driven, algorithmic process.
Nevertheless, discriminatory decision-making will still occur. Harm caused by AI is no different from harm caused by humans when discriminatory considerations become part of the decision-making process.
In California last year, the legislature passed a law stating that employers will still be responsible for discriminatory decision-making even if those decisions are made by machines and not humans. Even if the decisions made by the machines are neutral on their face, if they cause discriminatory selections, the entity using the machines will face liability for discrimination.
The likely outcome is that fewer and fewer employers will embrace this technology absent assurances that it will screen without discrimination or, alternatively, that the technology provider will assume liability if discrimination is found.
In 2023, I think that advances in AI will begin to define our workplaces while our courts and legislatures will be in a race to keep up.