Artificial intelligence (AI) makes it possible for machines to learn from experience, adjust to new inputs and perform human-like tasks. Lethal autonomous weapons systems use artificial intelligence to identify and kill human targets without human intervention. Russia, the United States and China have all recently invested billions of dollars in secretly developing AI weapons systems, sparking fears of an eventual “AI Cold War.” In April 2024, +972 Magazine published a report detailing the Israeli Defense Forces’ intelligence-based program known as “Lavender.” Israeli intel…
@B4N2F7D · 5 days
No, artificial intelligence should not be a priority of usage in Canada, as we account for a large amount of the world's freshwater, which AI uses to function.
No, no weapons system should operate without human oversight; an autonomous drone system is acceptable only if targets are designated by a human operator or a specific, pre-set target.
@B4GBZQG · 2 weeks
No, the use of artificial intelligence in war removes human moral responsibility for ending life and will lead humanity down a path of immoral destruction.
Yes, as long as the decision-making on the launch and timing of any attack or defence is coordinated and approved by a human, and AI is used for intelligence guidance only.
@9XCDJK2 · 6 months
It depends on how well the tech can be trusted; it would have to go through years and years of tests before official use, since it may have "glitches."
@9WFR74Q · 6 months
Absolutely not. It is vital that we as humans realize and understand the gravity of using weapons to harm other humans. AI may be able to provide functionality and statistics, but it cannot understand the weight of using weapons to harm. Humans themselves are flawed when it pertains to utilizing harmful weaponry, especially in a militaristic setting. It is a slippery slope to utilize AI for weapons.
@9VRM7F4 · 6 months
Yes, if there is a person to monitor the weapon in case of AI malfunction.
@9VJLT3Z · 6 months
It's going to happen inevitably anyway. We need an AI ethics commission, because it's going to be extremely dangerous if AI goes rogue or can be hijacked in any way.
@9V4JT2C · 7 months
No, there also needs to be a person who could use it to help.
@9P8NRFM · New Democratic · 10 months
Instead of artificial intelligence, military technology should have advanced programs/technology that can be controlled by professionals.
@9P7NSTC · 10 months
No, not until AI has been perfected for military use.
@B2S4PY3 · 2 months
After sufficient training time to determine the full impacts, limitations, and abilities of the technology.
@9TGDVKN · Independent · 7 months
Yes, but only when it is safe to use. If it's a ranged weapon where there wouldn't be people in front of it, then by all means; AI doesn't have emotions, so it's more accurate and safer when operated correctly.
@9TF5F5Z · 7 months
I believe missile guidance systems should use AI, but AI should never choose where to target
@9T2Z7Y5 · 7 months
Yes, but ethical research needs to be conducted and needs to be at the forefront of the military's AI investment.
@9MC4BQL · 11 months
Depends on how good it's gotten. I'd have to see some damn good examples of it being better than human hands.