Artificial intelligence (AI) makes it possible for machines to learn from experience, adjust to new inputs, and perform human-like tasks. Lethal autonomous weapons systems use artificial intelligence to identify and kill human targets without human intervention. Russia, the United States, and China have all recently invested billions of dollars secretly developing AI weapons systems, sparking fears of an eventual “AI Cold War.” In April 2024, +972 Magazine published a report detailing the Israel Defense Forces’ intelligence-based program known as “Lavender.” Israeli intel…
@ISIDEWITH1yr1Y
No
@9MG74RS11mos11MO
If you just happen to fit a vague description of a target, how would you feel? Would you trust a drone not to take you out just in case?
@ISIDEWITH1yr1Y
Yes
@9MG74RS11mos11MO
AI doesn't care about people. It doesn't truly understand. This is a slippery slope. Moreover, it's still only as good as the training model, which is human-made.
@B4GBZQG4 days4D
No, the use of Artificial Intelligence in war removes human morality from the decision to end a life and will lead humanity down a path of immoral destruction.
Yes, as long as the decision-making on the launch and timing of any such attack or defence is coordinated and approved by a human, and AI is used for intelligent guidance only.
@B45XZ6W2wks2W
I don’t agree with the murder of anyone, but I suppose it’s not a horrible thing to use AI for fighting instead of humans.
@B3QLCP31mo1MO
AI technology is still relatively new and should be developed further before being applied to weaponry and defense applications.
@B324X4F2mos2MO
I feel like the development of this technology comes with time and experimenting, so we should give it a few years.
@9ZDCX9T5mos5MO
No, there is not enough testing or information to say that an AI can distinguish between civilians, military personnel, and threats.
@9XCDJK25mos5MO
It depends how well the tech can be trusted; it would have to go through years and years of tests before official use, since it may have "glitches."
@9WKLBWB6mos6MO
Yes, but only after trials, research, and contingencies to ensure it is not a threat to ourselves.
Who controls the AI? Was it created by our government, a private company, or, worse, by another country?
@9WFR74Q 6mos6MO
Absolutely not. It is vital that we as humans realize and understand the gravity of using weapons to harm other humans. AI may be able to provide functionality and statistics, but it cannot understand the weight of using weapons to harm. Humans themselves are flawed when it pertains to utilizing harmful weaponry, especially in a militaristic setting. It is a slippery slope to utilize AI for weapons.
@9W2QDRF6mos6MO
Yes, but there should be checks and balances to it
@9W222F26mos6MO
Yes, once the artificial intelligence has advanced enough.
@9VY8CNN6mos6MO
No, this could too easily lead to a doomsday scenario. Keep AI out of the military!
@9VRM7F46mos6MO
Yes, if there is a person to monitor the weapon in case of AI malfunction.
@9VJLT3Z6mos6MO
It's going to happen inevitably anyway. We need an AI ethics commission, because it's going to be extremely dangerous if AI goes rogue or can be hijacked in any way.
@9VJ6C4K6mos6MO
Yes, but only if it is proven not to glitch or be at risk of being hacked.
@9VF4NS96mos6MO
Yes, but only if it is more accurate than it would be under human control.
@9V7JKBZConservative6mos6MO
No, weapons of mass destruction should not be built to begin with
@9V4JT2C7mos7MO
No, there also needs to be a person who could use it to help.
@9TYFLTG7mos7MO
No. Keep a human in the loop for all lethal engagements.
@9TTHJVP7mos7MO
Yes, but in a highly controlled and scrutinized manner.
@9TRP8FJ7mos7MO
Yes, but for defensive systems only. There should always be a human in the loop pulling the trigger for offensive systems.
@9RBYBX69mos9MO
Believing that Artificial Intelligence will be the downfall of mankind, as most of the world will have access to it, makes this question difficult to answer. It is important that Artificial Intelligence be used with caution.
@9RBY87R9mos9MO
The military should maintain technological pace with our allies.
@9RBVDVT9mos9MO
There should always be humans in the lethal-force decision-making process.
@9QZCYDN9mos9MO
Not entirely guided, and also not complete, total AI that can think for itself like a human. Otherwise, I think it'd be effective.
@9QSV5BH9mos9MO
Yes, as long as it is pretty much guaranteed they will not fail... like, ever.
@9QRJNMW9mos9MO
Yes, but only if there is always a human kept in the decision making loop.
Yes, but only if it will decrease the risk of hurting civilians.
@9Q7YMJZ9mos9MO
Yes, but only with appropriate oversight and against specific military targets
@9PRH44K10mos10MO
Not a yes or no answer. There needs to be more clarification on whether the AI is making all decisions up to firing the weapon or just controlling it to the target.
@9P8NRFMNew Democratic 10mos10MO
Instead of artificial intelligence, military technology should have advanced programs/technology that can be controlled by professionals.
@9P7NSTC10mos10MO
No, not until AI has been perfected for military uses.
@9LT2W3W12mos12MO
Not right now; the technology is not fully developed enough yet.
@9LHXK8GConservative12mos12MO
Not at this time, and not until unbiased third parties review the technology further and there is more scientific consensus.
@9LGCYKF12mos12MO
We will eventually just be creating insane fighting robots that we would need to destroy with nuclear weapons just to be safe. This will definitely mislead wars and be way too unsafe.
@B2S4PY32mos2MO
After sufficient training time, to determine full impacts, limitations, and abilities of the technology.
@B2L42TM2mos2MO
Yes; if we send our soldiers to fight wars, we should give them every advantage to reduce casualties and shorten the conflict.
@B2574KFNew Democratic4mos4MO
Only if AI is advanced enough that it has a mind of its own. Plus, it's computer-based, so I expect someone tinkering with the AI.
Make this question more specific. I.e., AI guidance systems? AI-simulated targets? What are we talking about here, folks? Let's get a little more specific.
@9ZN65GC5mos5MO
Yes, against my better judgement. We will need to use AI to stay abreast of our adversaries, or be left behind.
@9TLDMJL7mos7MO
Depending on the safety of Canadians and the trustworthiness of the tech.
@9TGDVKNIndependent7mos7MO
Yes, but only when it is safe to use. If it's a ranged weapon where there aren't people who would be in front of it, then by all means; AI doesn't have emotions, and it's more accurate and safer when operated correctly.
@9TF5F5Z7mos7MO
I believe missile guidance systems should use AI, but AI should never choose where to target
@9T6X9HJ7mos7MO
NO, and even the current use of AI should be strictly supervised and limited by public decisions and not privatized.
@9T6GQ6F7mos7MO
The military must operate under the guidance of the revolutionary working class and only until the class divide is abolished worldwide.
@9T2Z7Y57mos7MO
Yes, but ethical research needs to be conducted and needs to be at the forefront of the military's AI investment.
@9SNJQRW8mos8MO
Yes, but not fully AI. There needs to be constant human review and extensive research beforehand.
@9MC4BQL11mos11MO
Depends on how good it's gotten. I'd have to see some damn good examples of it being better than human hands.
@9LW6J3312mos12MO
Any weapon with the potential to kill or injure should not be 100% AI autonomous.
No; no weapons system should operate without human oversight. An autonomous drone system is acceptable only if targets are designated by a human operator or a specific preset target.
@B2CMT6Q 3mos3MO
No, as it stands artificial intelligence is not reliable enough to make military decisions (this should instead be left to qualified levels of servicemen)