Using machine learning technology in the context of cyber defence/crime makes sense, provided that the program only generates 'alerts' to be reviewed by a human, and is never given the ability to make 'decisions' autonomously.
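A minimal sketch of that human-in-the-loop pattern (all names, scores, and the threshold here are hypothetical, not any real system's API): the model can only enqueue alerts; recording a verdict requires a separate, human-driven call, so the software never acts on its own.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Alert:
    source: str
    score: float              # model confidence, hypothetical 0-1 scale
    reviewed: bool = False
    verdict: Optional[str] = None  # set only by a human reviewer

class AlertQueue:
    """Human-in-the-loop: the model may only raise alerts; any action
    (block, escalate, dismiss) requires an explicit human verdict."""

    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold
        self.pending: list[Alert] = []

    def ingest(self, source: str, score: float) -> None:
        # The model flags suspicious activity; it takes no action itself.
        if score >= self.threshold:
            self.pending.append(Alert(source, score))

    def review(self, alert: Alert, verdict: str) -> None:
        # Only this human-driven call records a decision.
        alert.reviewed = True
        alert.verdict = verdict

q = AlertQueue(threshold=0.8)
q.ingest("203.0.113.7", score=0.93)   # flagged for human review
q.ingest("198.51.100.2", score=0.41)  # below threshold, no alert raised
print(len(q.pending))
```

The key design point is that `ingest` and `review` are separate code paths: nothing in the model's path can set a verdict or trigger an action.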
Special consideration should be given to monitoring for false positives, and proper controls should be put in place to mitigate AI 'hallucinations'. Before anything is developed or put in place, legislation around data protection and ethical guardrails for AI use needs to be passed and implemented.
As part of that legislation, any program should undergo rigorous testing and risk assessments prior to deployment and should NEVER be used for actual warfare (e.g. AI-assisted drone strikes).