In the early 20th century, people (notably in the United States) could conceivably have stopped the proliferation of automobiles by focusing on improving public transit, thereby saving enormous amounts of energy, avoiding billions of tons of greenhouse gas emissions, and preventing the loss of more than 40,000 lives in car accidents each year in the U.S. alone. But we didn’t do that.

In the mid-century, we might have been able to stave off the development of the atomic bomb and averted the apocalyptic dangers we now find ourselves in. We missed that opportunity, too. (New nukes are still being designed and built.)

In the late 20th century, regulations guided by the precautionary principle could have prevented the spread of toxic chemicals that now poison the entire planet. We failed in that instance as well.

Now we have one more chance.

With AI, humanity is outsourcing executive control of nearly every key sector (finance, warfare, medicine, and agriculture) to algorithms with no moral capacity.

If you are wondering what could go wrong, the answer is plenty.

The window of opportunity for stopping AI, if it still exists, will soon close. AI is being commercialized faster than any other major technology before it. Indeed, speed is its essence: systems are iterated through machine learning at a pace that far outstrips Moore's Law.

And because AI is being used to accelerate all the activities that have major impacts on the planet (manufacturing, transport, communication, and resource extraction), it is an uber-threat not only to the survival of humanity but to all life on Earth.

  • Vlarb@lemmy.ml · 9 months ago

    I agree that we need to slow down AI development and gauge the long-term impacts on society of each major advancement. This will never happen, because no one trusts competitors to stop. There is no good faith or trust in this field, just pure competition. So they are racing headlong into development, driven by greed and by fear of not being first.