Artificial intelligence (AI) makes it possible for machines to learn from experience, adjust to new inputs, and perform human-like tasks. Lethal autonomous weapons systems use artificial intelligence to identify and kill human targets without human intervention. Russia, the United States, and China have all recently invested billions of dollars in secretly developing AI weapons systems, sparking fears of an eventual “AI Cold War.” In April 2024, +972 Magazine published a report detailing the Israel Defense Forces' intelligence-based program known as “Lavender.” Israeli intel…
@ISIDEWITH 8mos
No
@9MG74RS 6mos
If you just happen to fit a vague description of a target, how would you feel? Would you trust a drone not to take you out just in case?
@ISIDEWITH 8mos
Yes
@9MG74RS 6mos
AI doesn't care about people. It doesn't truly understand. This is a slippery slope. Moreover, it's still only as good as its human-made training model.
@9ZN65GC 3 days
Yes, against my better judgement. We will need to use AI to stay abreast of our adversaries, or be left behind.
@9ZDCX9T 2wks
No, there is not enough testing or information to say that an AI can distinguish between civilians, military personnel, and threats.
@9XCDJK2 3wks
It depends on how well the tech can be trusted; it would have to go through years and years of tests before official use, since it may have glitches.
@9WKLBWB 4wks
Yes, but only after trials, research, and contingencies to ensure it is not a threat to ourselves.
Who controls the AI? Was it created by our government, a private company, or, worse, by another country?
@9WFR74Q 1mo
Absolutely not. It is vital that we as humans realize and understand the gravity of using weapons to harm other humans. AI may be able to provide functionality and statistics, but it cannot understand the weight of using weapons to harm. Humans themselves are flawed when it pertains to utilizing harmful weaponry, especially in a militaristic setting. It is a slippery slope to utilize AI for weapons.
@9W2QDRF 1mo
Yes, but there should be checks and balances to it
@9W222F2 1mo
Yes, once the artificial intelligence has advanced enough.
@9VY8CNN 1mo
No, this could too easily lead to a doomsday scenario. Keep AI out of the military!
@9VRM7F4 1mo
Yes, if there is a person to monitor the weapon in case of AI malfunction.
@9VJLT3Z 2mos
It's going to happen inevitably anyway. We need an AI ethics commission... because it's going to be extremely dangerous if AI goes rogue or can be hijacked in any way.
@9VJ6C4K 2mos
Yes, but only if it is proven not to glitch or be at risk of being hacked.
@9VF4NS9 2mos
Yes, but only if it is more accurate than it is under human control.
@9V7JKBZ Conservative 2mos
No, weapons of mass destruction should not be built to begin with
@9V4JT2C 2mos
No, there also needs to be a person who could use it to help.
@9TYFLTG 2mos
No. Keep a human in the loop for all lethal engagements.
@9TTHJVP 2mos
Yes, but in a highly controlled and scrutinized manner.
@9TRP8FJ 2mos
Yes, but for defensive systems only. There should always be a human in the loop pulling the trigger for offensive systems.
@9RBYBX6 4mos
Believing that artificial intelligence will be a downfall for mankind, since most of the world will have access to it, this question is difficult to answer. It is important that artificial intelligence be used with caution.
@9RBY87R 4mos
The military should maintain technological pace with our allies
@9RBVDVT 4mos
There should always be humans in the lethal force decision making process.
@9QZCYDN 4mos
Not entirely guided, and also not completely autonomous AI that can think for itself like a human. Otherwise, I think it'd be effective.
@9QSV5BH 5mos
Yes, as long as it is pretty much guaranteed they will not fail. Like, ever.
@9QRJNMW 5mos
Yes, but only if there is always a human kept in the decision making loop.
Yes, but only if it will decrease the risk of hurting civilians.
@9Q7YMJZ 5mos
Yes, but only with appropriate oversight and against specific military targets
@9PRH44K 5mos
Not a yes or no answer. There needs to be more clarification on whether the AI is making all decisions up to firing the weapon or just controlling it to the target.
@9P8NRFM New Democratic 5mos
Instead of artificial intelligence, military technology should have advanced programs/technology that can be controlled by professionals.
@9P7NSTC 5mos
No, not until AI has been perfected for military uses.
@9LT2W3W 7mos
Not right now, the technology is not fully developed enough yet
@9LHXK8G Conservative 7mos
Not at this time, and not until unbiased third parties review the technology further and there is more scientific consensus.
@9LGCYKF 7mos
We will eventually just be creating insane fighting robots that we would need nuclear weapons to destroy just to be safe. This will definitely mislead wars and be way too unsafe.
@9TLDMJL 2mos
It depends on the safety of Canadians and the trustworthiness of the tech.
@9TGDVKN Independent 2mos
Yes, but only when it is safe to use. If it's a ranged weapon where there aren't people who would be in front of it, then by all means. AI doesn't have emotions; it's more accurate and safer when operated correctly.
@9TF5F5Z 2mos
I believe missile guidance systems should use AI, but AI should never choose where to target
@9T6X9HJ 2mos
NO, and even the current use of AI should be strictly supervised and limited by public decisions and not privatized.
@9T6GQ6F 2mos
The military must operate under the guidance of the revolutionary working class and only until the class divide is abolished worldwide.
@9T2Z7Y5 3mos
Yes, but ethical research needs to be conducted and needs to be at the forefront of the military's AI investment.
@9SNJQRW 3mos
Yes, but not fully AI. There needs to be constant human review and extensive research beforehand.
@9MC4BQL 7mos
Depends on how good it's gotten. I'd have to see some damn good examples of it being better than human hands.
@9LW6J33 7mos
Any weapon with the potential to kill or injure should not be 100% AI autonomous.