Atlantis
10-26-2023, 03:43 AM
Here are two arguments for why AI research should be stopped.
First, I will claim that AI research is ultimately equivalent, in terms of potential consequences, to genetically modifying an animal that is currently less intelligent than ourselves so that it becomes vastly more intelligent.
One might argue that these scenarios are different because the latter is like "playing God," while AI research isn't. But just because there's a taboo on genetically modifying animals doesn't mean the consequences of the two aren't equally bad. If there's another animal that is not just a little smarter than human beings, but way smarter, that's an immediate threat. Why would the presence of a silicon-based intelligence paired with a robotic body be any less of a threat? In the end, it doesn't matter WHAT the entity that is way smarter than us is, or how it's made. All that matters is the fact that it IS smarter, and that in and of itself is a threat. If you can't see this, then look at the way we treat other animals. We are sometimes nice to them, but they are still at our mercy. That's how we'd be in the hands of a greatly superior intelligence.
Secondly, I claim that AI research is eerily similar to a scenario that sci-fi enthusiasts have debated endlessly: if an alien civilization vastly more advanced than ourselves came to Earth (and any civilization that could make it here from another star system almost certainly would be), would it be friendly or hostile? Safe to say, many great thinkers have believed they'd be hostile, including Stephen Hawking.
What I'm trying to get at is that by creating a superintelligent AI, we are basically turning this hypothetical scenario into a reality. We'd literally be creating the functional equivalent of these aliens right here on Earth!! Maybe the AGI would be nice, but what if it isn't? Bad idea, to say the least.
Also, what is the worst-case scenario for AI? A really horrible dystopian one.
In fact, the worst case is so bad that the only way to justify developing a superintelligent AI is to give convincing reasons that it is necessary for human survival, or that we need it to "save us" from our own short-sightedness.
But there's no reason to believe that a superintelligent AI will save us, and even if there were, we'd just end up having to live with the decisions of a force above our own, and human freedom would be severely curtailed if not eliminated. And there are good reasons to believe it will destroy us.
That's why I believe AI research should be stopped.
Even in the non-worst case, it'll just be used to eliminate jobs and by the military to kill people more efficiently.
Why should we create the very aliens that very well may destroy us? Why not wait for the actual aliens to find us first, and at least make it harder for them?