Are we wise enough for AI?

Confronting Our Limitations in the Age of Rapid Technological Advancement

In partnership with AE Studio

"Technology is neither good nor bad; nor is it neutral."

Melvin Kranzberg

Nuclear technology can generate clean energy and create devastating weapons. The internet facilitates global connectivity but also enables disinformation. And, as we are rapidly learning today, AI has the potential for both tremendous good and unimaginable harm.

In the year since the release of OpenAI’s ChatGPT, AI technologies have evolved rapidly, influencing sectors from healthcare to education. We've seen AI systems generate art, write code, and even aid in scientific discoveries. At the same time, fears about the downsides have grown, with risks such as widespread job displacement and data privacy becoming increasingly prominent in public discourse. These risks will only become more pronounced, because we, the creators and users of these technologies, are simply not wise enough to fully grasp what we are unleashing onto the world.

Consider nuclear energy. More than 70 years after its discovery, and despite coming dangerously close to catastrophe during the Cold War, we are still grappling with its implications. Unlike nuclear technology, however, which requires specific materials and expertise, AI is far more accessible. Imagine a disgruntled teenager with access to nuclear capabilities; the potential for disaster would be unthinkable. Yet today we are unleashing AI with little regard for the very real and potentially catastrophic risks it poses.

Yes, AI is a tool with significant potential upside, but that upside is contingent on the tool being used responsibly. The problem is that we neither fully understand nor agree on what "responsibly" means. As a result, when it comes to AI, we are the greatest risk, because we are not yet wise enough to manage this technology safely and ethically.

The Readiness Issue

As I see it, the risks of AI today fall into two broad categories: technological readiness and human readiness.

Technological Readiness

AI systems today face several key issues: hallucinations, bias, and lack of explainability. There is also the question of unknown unknowns—the risks and consequences we don’t yet know about. However, these are solvable problems. Solutions can be engineered by improving training data, employing better techniques, and encouraging transparency in models and documentation practices. Given the amount of effort, attention, and capital being invested, it’s safe to say that it’s a question of when, not if, these problems will be solved. Can we say the same about human readiness?

Human Readiness

Think about the following issues: disinformation, job displacement, social inequality, environmental impact, state overreach, safety, and security. The common factor is that at the root of each of these problems is a human making decisions. Humans, as we know, are imperfect, especially in decision-making: we are biased, often short-sighted, subject to emotional states, self-centered, and prone to disagreeing with one another. These very human problems have plagued us throughout our existence, and unlike technological readiness, we have yet to engineer a solution for them. In the context of AI, they manifest as follows:

We Lack a Comprehensive Understanding of AI’s Implications

As Kranzberg observed, technology is not neutral: it reflects the biases, assumptions, and agendas of its creators. How do we identify what those are, and what will the implications for society at large be once these ideas are amplified by the power of AI? Today, we are already grappling with the very real threats of disinformation, job displacement, data privacy violations, and copyright infringement. Each of these issues alone raises massive questions we don’t have good answers for. How many more will tomorrow bring, and what if they cascade and compound over time?

We Prioritize Profit and Power Over Doing Good

Throughout history, the adoption of new technologies has often been driven by motives like defense, pornography, or financial gain. The internet, for instance, saw significant investment in military applications and commercial exploitation long before broader societal benefits were pursued. The pattern reflects our focus on immediate, self-serving uses rather than ethical progress, and if anything, we are seeing more of the same in the frenetic pace of AI development today.

We Don’t Always Agree on What Good or Responsible Means

We talk about responsible AI, but who is defining what responsible means and how? Different stakeholders have varying perspectives on what constitutes responsible AI. A government might prioritize national security and public safety, leading to increased surveillance, but privacy advocates might see these uses as invasive and unethical. Corporations might focus on mitigating risks that could harm their reputation, yet their definition might still allow for practices that prioritize profit over broader societal impacts (e.g., targeted advertising that exploits user data). Furthermore, cultural and regional differences mean that what might be considered responsible AI in one country might not align with the standards of another.

Playing catch-up… and losing

"The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like DeepMind, you have no idea how fast—it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year timeframe. Ten years at most"​

Elon Musk

Achieving AI alignment is an incredibly complex task, and the questions it raises force us to confront the essence of our humanity. This technology very likely represents a moment of reckoning for us. We have to figure out and agree on answers to hard questions: what do we mean by ‘good’ and ‘fair’, and what should we learn to value and protect as a species? The rapid pace of AI development makes this an impossible task, because it means we are perpetually playing catch-up, and we seem neither willing nor able to give ourselves a moment’s pause to check whether we are headed in the right direction or into the abyss.

The optimists among us love to cite the Industrial Revolution to support their view that things won’t be as bad as we imagine. However, they never seem to account for the scale and speed of AI’s impact on society. The exponential growth of AI is happening almost overnight, and the interconnectedness of today's world will amplify its impact, producing second- and third-order effects that we can't yet foresee. This will not be a gentle wave sweeping over the shore. It’ll be a tsunami.

So what can we do?

Regulation would seem to be the obvious answer. Heck, even Sam Altman has called for it. However, it won’t work, for two reasons. First, we cannot expect AI developers to self-regulate, for the reasons detailed above. Second, as tempting as it may be to impose regulation pre-emptively, via measures such as strict liability for AI developers, doing so would still require time we don’t have to overcome regulatory coordination challenges we already struggle with.

The prognosis is not good. The truth is that, as a society, we must grow in maturity and wisdom. Unfortunately, we only seem able to do so the hard way: by living through painful lessons and learning from them.

"We have paleolithic emotions, medieval institutions, and god-like technology. And it's this mismatch which is dangerous and creates the potential for human disaster."

E.O. Wilson

Thanks for reading,

Hardesh.

Have an AI Idea and need help building it?

When you know AI should be part of your business but aren’t sure how to implement your concept, talk to AE Studio.

Elite software creators collaborate with you to turn any AI/ML idea into a reality, from NLP and custom chatbots to automated reports and beyond.

AE Studio has worked with early-stage startups and Fortune 500 companies, and we’re ready to partner with your team. Computer vision, blockchain, e2e product development: you name it, we want to hear about it.