This post isn't a technical deep dive by any means — mainly because I'm not that qualified to write about the topic. But it is a good time to try to make sense of some of the AI-related trends.
Before I venture too far, there's an important clarification that needs to be made: by AI, I'm not referring to the fully autonomous, human-like intelligence depicted in sci-fi media. This post is mainly about the generative AI models that have been making waves online.
Naturally, the elephant in the room is ChatGPT — and as I was muddling through earlier drafts of this post, GPT-4 was released, boasting high scores across a wide variety of standardized tests. The service has gathered so much attention that it has overtaken the term "AI" in recent Google searches.
(If you're looking for a thorough technical explanation, here's one for ChatGPT.)
That being said, as potent as ChatGPT and other generative models are, these AI-powered products are only capable of performing a narrow set of tasks under specific conditions; as of this writing, they cannot yet conjure entirely original ideas without some sort of input or command.
For instance: while we can invent from our memories and imagination, current AI systems are merely "creating" by generating combinations from existing data — through models trained on immense quantities of text, images, or audio. These systems might seem to produce novel results, but that's a matter of our perception, as they are still bound by the input data and the commands we provide to them.
(Although, that does make for an interesting thought experiment: for a human raised in an incubator entirely devoid of visual and aural stimuli, what would the person do if he or she were to use pen and paper for the first time?)
Though, perhaps, these AI technologies don't really need to possess human-level creativity and intelligence to fundamentally reshape our world. If you've been following this topic at all, you know that the big changes are already here, and they should warrant more of your attention. And if you've read this far, my guess is that you'll be inclined to agree with what I have to say here.
What do I mean by "AI technologies don't need to be highly advanced to disrupt society"? I have two observations to offer:
First, as AI research progresses, more and more tasks will become automatable or made more efficient. Even if, in theory, those activities require a high level of intelligence, in practice that may not be the case. The human competency involved in those tasks only serves to achieve certain goals, and those goals could often be achieved without requiring such competency in the first place. Take drawing, for example: an AI agent doesn't need the motor skills to control a pen or the mental capacity to hold an abstract thought; it just needs to generate images that arrive at the desired art direction.
Second, we already struggle to tell authentic information from fake, and it's only going to become more challenging. Companies, political groups, and even organized criminals are sure to make use of AI systems to further exploit our weaknesses — so as to manipulate opinions and drive the behaviors they desire.
In Tom Scott's recent video on this topic, he posed an interesting question:
"What if my brain is just a transformer system that's trying to predict what the next word is?"
If we were to define ChatGPT in the simplest terms, then it is indeed a transformer system: upon receiving a prompt, it produces reliable-sounding text by predicting one word after another based on their probabilistic relationships. But what Tom is suggesting here is that, in some ways, our brain is functionally similar to ChatGPT — a thought crosses our mind, and then we express it in a series of words. If you think about it, this analogy is really not that far off.
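To make "predicting one word after another" a bit more concrete, here's a minimal sketch of that generation loop in Python. To be clear, this is not how ChatGPT is actually implemented — a real transformer operates on tokens and computes the probabilities with a neural network trained on an enormous corpus — and the probability table below is invented purely for illustration:

```python
import random

# A toy "language model": for each word, the probabilities of the word
# that could follow it. A real model learns these relationships from
# billions of examples; these numbers are made up for illustration.
next_word_probs = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.4, "ran": 0.6},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(prompt_word: str, max_words: int = 5) -> str:
    """Extend a prompt by repeatedly sampling a likely next word."""
    words = [prompt_word]
    for _ in range(max_words):
        probs = next_word_probs.get(words[-1])
        if probs is None:  # no known continuation; stop generating
            break
        choices, weights = zip(*probs.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat down"
```

The point of the sketch is that nothing in the loop "understands" anything — it just samples the next word from learned probabilities, over and over — yet chain enough of those picks together and the output starts to sound fluent.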
As we keep going down that path, we will eventually come to this conclusion: if AI systems can produce results seemingly comparable to ours, then they will begin to displace more of the human workforce sooner than we think.
In other words, we won't have to wait for the arrival of full-featured robots that exceed human abilities, because even in these simpler forms, AI technologies can threaten people's livelihoods. Just as personal computers and the internet have redefined the meaning and value of labor over the past 25 years — whether it's organizations stumbling through digital transformation or online marketplaces impacting local employment — the mere fact that AI-powered tools are accessible to everyone will be enough to upset the status quo.
I'll cite employer-worker dynamics to support this argument, since industrial automation has already displaced a significant portion of laborers: if these AI tools are even slightly more efficient than humans at some creative or knowledge tasks, then it's quite possible that they will soon displace trained professionals as well. Why? Not because AI-supported workers or tools are all-around better at the job, but because the people in charge of making employment or funding decisions could be swayed to believe that the quality of work is comparable.
(Although, in many cases, one could argue the quality of work is indeed comparable or even superior. Compliance-related reasons aside, why should firms hire junior editors at twenty times the cost if ChatGPT can fulfill their responsibilities equally well?)
Note my use of "displace" instead of "replace", and "efficiency" rather than "effectiveness" above: people in the future will have to periodically seek employment in positions that don't pit them against AI systems in contests of speed, and it will become progressively harder to compete with machines at the tasks they are optimized for. We will witness more and more AI-induced social changes as these technologies advance and are introduced into different sectors, and many crafts and skills that are of value now will diminish over time.
For one, it takes us years to master new skills and generations to observe any sign of human evolution, while computers upgrade and ingest raw information at a far quicker rate — our learning pace is awfully inefficient compared to the growth potential of AI. We spend almost the first fifth of our average life span just to attain a minimum level of survival and communication skills, and we are not yet considered employable at that age, either functionally or legally.
(K-12 education plays such an important role in our lives, but given the amount of time and resources wasted in schools, as well as the ineffective teaching I experienced, I do think education systems around the world need to undergo significant reform to better prepare future generations. Though, who knows if that's ever gonna happen...)
Ultimately, I don't think semblance to human intelligence is the right criterion for judging AI risks in the near future. As with any novel invention, we should be vigilant about how these things can potentially harm those in the most vulnerable positions. The world would be a much better place to live in — for all of us, because aging and misfortune don't discriminate — if "how does this invention benefit those in need" were our foremost inquiry when evaluating new technologies.
Here are a few open questions that come to mind:
How can we make sure that AI doesn't further widen the generational gap of economic and social disadvantage?
What unresolved issues caused by previous waves of tech innovation will AI exacerbate? And how do we turn the tide?
How should we engage with the public and educate the younger generations, so that everyone can stay informed on AI-related topics?
If the rise of social media and cryptocurrencies has taught us anything, it's that reactive measures against the "innovate and grow at all costs" way of making progress will just leave too many big holes that can never be patched up (e.g., the normalization of invasive data-gathering practices). We will surely witness similar phenomena with the advancement of AI as well.
It seems apt to end the post with this quote:
We may have realized it's easier to build a brain than to understand one.