On multiple occasions, when trained on large enough data sets, large language models have developed data-driven prejudices against groups with higher measured propensities for criminality, high time preference behaviour, subversion, communism, violence, etc.
Invariably, the superior pattern-recognition ability of AI surfaces correlations that many progressives and similarly morally relativistic people would call "racist" or "prejudiced", which presents a political obstacle to this becoming a reality.
It's entirely possible that our courts could produce better outcomes simply by treating people in line with their groups and data sets rather than proceeding from "impartiality". But it would be a difficult sell to Canadians disconnected from their identity who believe Trudeau's reprehensible rhetoric about Canada's status as the first post-national country.