
138 Replies

@ISIDEWITH Discuss this answer… 11mos

No

@B4F5J94 from Ontario agreed… 1wk

AI is easily tricked into giving you whatever you want. Yes, it can be useful at times, but no one can stop the AI from leaking private information and putting the public in danger.

@9VFDL8Q from Quebec agreed… 6mos

AI is too heavily relied on in the present day. I feel that if AI were to turn sentient and turn on us, we would have no counter to it, ultimately rendering us helpless against the enemy and being wiped out. Now, if it were to gain control of our defense weapons, like missiles, transport and the nuclear arsenal, it could turn the whole world against us, or we could be blown off the face of the earth.

@ISIDEWITH Discuss this answer… 11mos

Yes

@B4F5J94 from Ontario disagreed… 1wk

With how AI is being used in today's society, people are finding ways to manipulate it and command it to give out information that no citizen should have access to, such as the blueprints of a bank or a private government document.

@B2V73S6 from Ontario disagreed… 2mos

Artificial Intelligence has been proven to have dangerous possibilities. The unknown parts of AI lead to a large security risk and could damage the country.

@B2HWGJZ from Saskatchewan disagreed… 3mos

AI can be extremely dangerous, and it is an inefficient way to use resources. The amount of resources required to run AI is enormous and will have major effects.

@9VFDL8Q from Quebec disagreed… 6mos

Watch the Terminator series, Avengers: Age of Ultron, I Have No Mouth, and I Must Scream, I, Robot, and literally every other movie where the AI turns sentient and turns on the humans.

@ISIDEWITH asked… 7mos

Do you think letting machines make life-and-death decisions in military conflicts is a necessary step forward or does it cross an ethical line?

@9VNPZW2 from Ontario answered… 6mos

I believe it definitely crosses an ethical line.

To me it makes sense for humans to have to make the decision when it comes to the killing and destruction of other humans, due to things such as empathy, morality, and emotion: things an artificial intelligence does not have.

Granted, things like that could be PROGRAMMED into the A.I., but the A.I. will never be able to come up with such things itself, and eventually, due to flaws in the programming or a loophole the designers did not see, a catastrophe could pretty easily happen.

Yeah. I think it crosses an ethical line.
A.I. has no personal way to tell when to stop, only that it needs to get from point A to point B.

And who knows what it might do to get there?

@B4GR5JZ from Quebec answered… 5 days

Yes, but I think there are material expenses which need to be ramped up prior to investing in advanced technologies.

@B4FP2QQ Communist from Saskatchewan answered… 7 days

No. Instead of AI, we could use more people to review the applications and have stricter requirements.

@B4F8RDG from Saskatchewan answered… 1wk

People should not be able to use AI; at a certain point development should stop, and it should only be used if necessary.

@B4DZYGV from British Columbia answered… 1wk

AI only exists from current data already provided by humans, just presented in a sometimes convenient manner, but it can also contain deadly flaws too frequently.

@B4DQHBR from British Columbia answered… 1wk

Yes, but not for everything. AI is still new and we should wait a couple more years for it to reach a higher potential, so for now we should just work on it.

@B4DLN6Z from Alberta answered… 1wk

Not limited to defense. We need to be world leaders in our understanding and development of new and developing technology.

@B4BMB79 from Ontario answered… 2wks

There's a bit of a gray area with artificial intelligence. There are some points where it's actually really good, but there are also points that are not good. For one, if someone knows how to bypass an AI system they will bypass it, because it's pretty easy to do; no AI is secure. I would say a human brain is more secure than an AI. But wherever we're going in the near future, I hope AI doesn't get that smart, because right now, without being self-taught, it's probably not going to know much.

@B4B4XQX from Ontario answered… 2wks

Defense against what? For that matter, how could an unthinking blender that we all know is incapable of parsing and rephrasing information, let alone synthesizing it, do anything but hurt people? This is a nonsense question.

@B49LKNY from Alberta answered… 2wks

For missile guidance systems and radar systems I could understand it, but for attack planning and other things we should use humans, because we are the ones fighting.

@B42T76Z from Manitoba answered… 3wks

AI may benefit us in government defence applications, but using it may also bring more problems to the surface.

@B3WLGVH from Alberta answered… 4wks

AI is very helpful in some aspects, but I think overall there are more negative results of prolonged use than positive ones. That doesn't mean it shouldn't be used at all, though.

@B3VDD9H from British Columbia answered… 4wks

I think it's a good idea to save more lives in the military, but dangerous if the system gets hacked.

@B3RXXXK from British Columbia answered… 1mo

While AI can enhance security, reduce risks, and modernize the military, it also poses ethical and security challenges. Investing in AI for defense (not just offense), with proper oversight and global cooperation, might be the best approach.

@B3QNMWT from Ontario answered… 1mo

I feel we should, as other countries are doing it. However, I do not like this practice regardless of which country is doing it.

@B3QLCP3 from Ontario answered… 1mo

AI technology is still relatively new and should be developed further before being applied to defense applications.

@B3MY9V3 from Ontario answered… 1mo

Again, keep the system the way it is. I'd rather leave my house unlocked than use something that's likely to fail at the worst time. The same goes for any piece of technology: even if they claim it's fail-proof, it's likely to fail one way or another.

@B3HJK9R from Manitoba answered… 1mo

Using machine learning technology in the context of cyber defence/crime makes sense, provided that the program generates 'alerts' to be reviewed by a human, and the program is not given any abilities to make 'decisions' autonomously.

Special consideration should be given to monitoring for false positives, and proper controls should be put in place to mitigate AI 'hallucinations'. Before anything is developed or put in place, legislation around data protection and ethical guardrails for AI use needs to be passed and implemented.

As part of that legislation, any program should undergo rigorous testing and risk assessments prior to deployment and should NEVER be used for actual warfare (e.g. AI-assisted drone strikes).

@B3DKGLK from Ontario answered… 1mo

I think no one should rely on AI fully, but I believe it is a good assistant tool for tasks like this.

@B39SLYS from Ontario answered… 2mos

Something similar was done, and they tricked the machine simply with a box, by walking and rolling.

@B3766BD from Ontario answered… 2mos

Yes, but with emphasis on invest. Governments should not be risking the safety of their citizens with current garbage AI, so investment into national and international standards would be wise.

@B36R5TY from Alberta answered… 2mos

To a certain extent, artificial intelligence can be very useful, but it should not be the factor that determines the security of an entire nation.

@B36LMM7 from Ontario answered… 2mos

I think the wrong development of AI in defense can lead to disasters, so I think it should either not be legal or should be treated as a war crime.

@B36JLNR from Ontario answered… 2mos

Transparency, robust regulations for safety and ethics, and security to prevent misuse by malicious people.

@B368DQW from Ontario answered… 2mos

AI should not be used in general, whether for military, governmental, economic, or generic use. It causes harm when produced in large numbers.

@B35QQ6J Liberal from Ontario answered… 2mos

AI can be hacked and compromised, and it would be unsafe to have an AI brain work on something, because there may be human errors in its code.

@B34VPRS from Ontario answered… 2mos

Yes, but not without human oversight and not for active combat. We need to have human emotions while in combat or we will become the monsters.

@B2ZC6HQ from Saskatchewan answered… 2mos

Depends. AI could make mistakes with defence, but it would make things quicker and more easily managed.

@B2ZBM3F from Ontario answered… 2mos

AI is an emerging technology that has limitations and implications we do not quite understand. In conjunction with strong policy and good decision-making, AI could be a tool used with extreme caution for defense applications.

@B2VNWSK from Ontario answered… 2mos

Under supervision from a committee of experts, with ethics, implications, biases, and the safety of the people in mind.

@B2V74TC from Alberta answered… 2mos

Taking away the face-to-face component of fights leads to diminished value for human life. Using AI to bring supplies and go on suicide missions, or to do surveillance, would not be a bad idea, however.

@B2TS86R from Alberta answered… 2mos

Yes and no. The military shouldn't rely on AI and should be able to find solutions themselves, but if they use AI to help when they are stuck on a problem, then yes.

@B2SWQFZ from Alberta answered… 2mos

If there are ways to make the AI extremely safe to use, with a low to zero chance of being hacked by foreign parties.

@B2ST4SY from Ontario answered… 2mos

I do not believe that the government should invest in artificial intelligence, as it is not the be-all and end-all. With AI you do not have a human being behind it; it is only a cold, hard computer, so when it comes to a life-or-death situation it could very well go for the most logical route. For example, a terrorist is hiding in a group of 100 people: the choice is to wait until that person can be safely removed from the group of innocents, or to end the 100 lives to stop the terrorist and save millions later on. These would be considered casualties. Where a human may try to find another route and minimise the casualties, the AI may choose to end all those lives for the sake of millions of others living in peace.

@B2SL784 from Alberta answered… 2mos

They should do it only for specific instances, but always have a human around to make sure no mistakes are made.

@B2S3FP5 from Alberta answered… 2mos

Yes, but only for predicting whether things may be coming. AI should not have control over any weapons.

@B2RD6TT from Alberta answered… 2mos

Yes, but its access should be restricted to suggestions. AI should not have the ability to send nuclear missiles in the case of a misidentification.

@B2PT6BZ from Ontario answered… 2mos

No, AI cannot be trusted with the biases inherent in its programming. This would be giving power to a discriminatory program instead of relying on human controls.

@B2DQ5HF from Quebec answered… 3mos

No, my life has been altered because I've been lied about. They call it AI, but you should really study how these people live and act.

@B2CMT6Q from Montana answered… 3mos

No, artificial intelligence shouldn't be used to make important and complex military/security decisions.

@B2BPTZW from Ontario answered… 3mos

It can be useful. However, technology used for strictly confidential purposes can be tampered with.

@B2BN37H from Newfoundland answered… 3mos

Not at the moment because, while AI is advanced and is advancing as time goes by, I feel like it's not advanced enough yet.

@B2B2C59 from Alberta answered… 3mos

It depends. It would really be helpful for AI to take care of defence applications so people can work on other problems, but at the same time it might be better if it were applied to small defence applications at first for testing.

@B286ZK4 New Democratic from Alberta answered… 3mos

Yes, but only if humans have the ability to override these systems in cases of technological corruption (damaged files, hacking, etc.)

@9W6MXFY from British Columbia answered… 6mos

No. At the end of the day, AI can have false positives and needs to be constantly maintained by humans, so we might as well just use humans.

@9W5TSDN from Ontario answered… 6mos

Some support investing in AI for defense to enhance national security and efficiency, while others express concerns about ethical implications, potential misuse, and the risks of autonomous weaponry.

@9W45G3Y from British Columbia answered… 6mos

It depends, but overall we should be using this as people, not for defensive purposes; AI should only be used for learning.

@9W2W4QN answered… 6mos

On one hand, Canada will have to stay relevant with other countries; however, this could lead to an abuse of power. I am undecided at this time, as I do not understand enough about AI.

@9VWVZQB from British Columbia answered… 6mos

Yes, but it should not be replacing any jobs. It should be a research tool as opposed to a replacement for human input. It's not powerful enough yet in my eyes.

@9VSPPZL from Saskatchewan answered… 6mos

I believe that if the government were to invest in AI, they should add restrictions, be extremely cautious, and at some point take hypothetical scenarios into account. But overall AI should be used for everything in terms of research and methods on how to possibly do things such as agriculture and environmental safety.

@9VMWZRK from Manitoba answered… 6mos

Artificial intelligence should be abolished, as it is dangerous and can be easily manipulated; human intelligence is superior.

@9VMJ457 from Quebec answered… 6mos

Only if not doing so poses a threat to national security, and not for any other purpose besides national security.

@9VCJBMD from British Columbia answered… 6mos

Yes, but only so we don't get far behind in military technology. And as long as it can be overridden easily.

@9V9JVX3 from Ontario answered… 6mos

I believe it should be utilized and maintained by the proper people in power, and be used for the good of humanity. For example, in wars it could be used against certain gases, and more.

@9V9BQ4C from Alberta answered… 6mos

There are good and bad qualities to artificial intelligence, so it all depends on the applications of the AI and where the spending goes.

@9TZKMBH from Alberta answered… 7mos

Other countries may use it, but it is certainly a scary idea, and I wouldn't want to be without it if they attack us with it, so yes and no.

@9TZHZWZ from Alberta answered… 7mos

Artificial intelligence is good to invest in, unless it is used to protect us. I trust real people with morals rather than a robot with my safety and security.

@9TY279W from New Brunswick answered… 7mos

Yes, as long as the systems are tested on a regular basis to lessen mechanical error as much as possible. Also, don't set up the most deadly mechanisms, for example, nuclear missiles.

@9TWK4RL Conservative from Ontario answered… 7mos

Defense applications in the sense of the Canadian military being able to use it to identify threats from foreign countries, yes.

@9TV56S7 Liberal from Alberta answered… 7mos

Yes, but they need to understand that AI cannot be used for every single defense. We still need to make our own solutions, but I can see how AI can assist in improving defense plans.

@9TPFV66 from Ontario answered… 7mos

It would advance our technology and we can use it for defense, but I feel like it is gaining way too much power.

@9TP8MJS from Ontario answered… 7mos

Yes and no, because if this investment is opened to the public then AI would be everywhere we go, and a lot of people will use it for unnecessary purposes.

@B2KVJ7X from Ontario answered… 3mos

Yes, but it should be studied and highly secured to ensure public safety and reduce the risk of losing control.

@B2JKHRW from Pennsylvania answered… 3mos

AI use requires the presence of subject matter experts. If the government is planning on implementing AI, it needs more SMEs first.

@B2HKS57 New Democratic from Quebec answered… 3mos

The potential is there; little by little they could incorporate the use of AI, but definitely nothing dramatic.

@B2H27FG from Alberta answered… 3mos

Only at the border. Track where crossings are happening outside of patrolled facilities and set up a covert group that can stop it right then and there.

@B2F3YBD from British Columbia answered… 3mos

No. AI is bad for the climate and should be used as little as possible. The future of AI should be in the hands of the public, for the public to decide.

@9ZDCX9T from Washington answered… 5mos

Yes, but only if the AI is used for information and detecting enemy attacks; it cannot initiate attacks or counter-attacks.

@9ZD7HCG from New York answered… 5mos

Yes, but not to the extent of the private market bubble, and only when the technology has been heavily tested and found to be reliable should it be put to use.

@9YKVWYH New Democratic from British Columbia answered… 5mos

AI is difficult because it's a new thing. It can be used for good and bad, so it's a case-by-case basis.

@9YDGG9X from Alberta answered… 5mos

I don't believe artificial intelligence is a safe creation; it will eventually take over all jobs, and society as a whole.

@9WQ5HHK from Ontario answered… 6mos

It depends. If AI is being used to check whether someone is who they say they are, then no; however, if it is being used to keep servers protected, then yes.

@9W9M54X from British Columbia answered… 6mos

Yes, but until AI becomes more advanced it should not be the biggest priority, nor should we rely on it too heavily if it does reach that level of advancement.

@9VMC949 Liberal from Ontario answered… 6mos

AI is not that competent in that field as of now, so it might not be the best choice yet, but they could start somewhere.

@9VKMP3G from Ontario answered… 6mos

At the current stage of AI development that would lead to more harm than good; however, in the future, once the technology is more developed, it could be beneficial.

@9VJLT3Z from Alberta answered… 6mos

It is inevitable that they will do this, but they should set up an AI ethics commission for general use of AI and be very careful in their usage of AI. It is a Pandora's box.

@9VDRGX4 from British Columbia answered… 6mos

No, the government should work on developing the human brain instead of creating AI for defense applications.

@9TMG6DR from Ontario answered… 7mos

I don't have anything against AI, but AI could lead to problems like identity fraud and other major problems.

@9TKHGCV from British Columbia answered… 7mos

Hoping we have all watched Terminator: we need to implement immense security and strict control measures if we do invest in AI defense. So yes, we should invest in AI defense, but keep a tight fist clenched over it.

@9TJX597 from British Columbia answered… 7mos

Yes, provided it is under constant review by a bipartisan third-party government agency overseeing this.

@9T9Y95Z from Ontario answered… 7mos

Yes, but it should be limited, and the AI should have a kill switch along with being overseen by a human.

@9NGY3VK from Alberta answered… 10mos

I'm extremely iffy on it, since I've seen and read too many horror stories about AI defenses. Try reading "I Have No Mouth, and I Must Scream" and answering yes with confidence, I dare you.

@9NC8GVS from Alberta answered… 10mos

Yes, but only if we can be sure that it isn't going to be used against the people.

@9MV4GBF from Ontario answered… 11mos

I think they should invest in it for certain uses like enemy detection software, but if it can shoot on its own without a human there, I think it should be banned.

@9MNPFD4 Liberal from Ontario answered… 11mos

Depends on how serious the situation is, and whether the AI has been carefully tested and works properly.
