a lot of people don't like AI, and that leads them to claim that it can't possibly work. that's silly: they have no good reason to believe it, and we know for a fact that human-level intelligence is possible because we've seen humans do it.
technically we don't know that superhuman intelligence is possible, as we've never seen it before (although we have seen it in specialised domains, like chess, go, and general recall), but I have a hunch that there are machines that can think better than humans can: they aren't subject to the same design constraints, can be built from alternative materials, don't need to eat, their brains don't need to fit through a human pelvis, and so on.
however, even if we can only make a machine as smart as Einstein, that would still be pretty cool. Einstein never came to terms with quantum mechanics, but it would be neat to have an Einstein available on demand to tutor you at school, or handle your customer service requests, or whatever it is you needed.
people who don't like AI also claim that it will destroy the environment, which is unlikely, not least because AI doesn't need to consume more resources than people do, and probably a lot less: a human brain runs on roughly 20 watts, so you should be able to run a couple of Einsteins on your laptop, a machine you're already using for sillier things.
another claim is that the companies currently pushing AI will lose money. that's more plausible, as companies lose money on big projects all the time, but isn't it a good outcome for everyone else? let overly optimistic investors fund the research and development of AI while the rest of us get the benefit. that's great!
of course, the ultimate fear is that AI works too well and the people who own it now end up owning everything else too, the smug bastards. but wealth disparity is a problem independent of AI, and one that we should already be trying to fix right now.
it's important not to base your political activism on false claims, as they can discredit your platform; ideally, your best reason for doing something should also be true.
we have had ample warning that human-level machine intelligence is coming -- it was inevitable as soon as electronic switches were developed, and Turing's famous paper on the subject turns 75 this year -- yet people have resisted the idea, in the same way that they resist the implication that humans are assemblages of molecules that can be analysed mechanistically. that resistance compromises both their comprehension of the world and their ability to shape it.