Why I'm Not Afraid of ChatGPT
A somewhat rambling post about AI, what it means to be human, and where our power lies.
One of my favorite yoga teachers says that we should "…use technology, but not let it use us." I believe that this is one of the biggest challenges we (humanity) face in the coming years.
But as challenging as it is – and as a parent of teens, I don’t discount that challenge one bit – there is a very positive and encouraging side to this story that doesn’t get told much. Of course I recognize the dangers inherent in AI, especially while we live in societies founded on monopoly-state powers. But I’m not afraid at an existential level, the way many seem to be.
I'm also not afraid of the "alarming" predicted job losses due to automation. Why not?
This "alarm" has been sounded too many times throughout history, from the Industrial Revolution to the arrival of the PC. And what the folks promoting it miss is that when the existing landscape changes, a new one emerges. Let's say all manufacturing jobs were taken over by machines. It doesn't then follow that the people who lose their jobs to machines are left with nothing to do to support themselves. (The story of Flint, MI, is a good case study in this.) The landscape has shifted, and there are now new opportunities - my own opinion is that, as physical production has now become cheaper, those new opportunities will be more in the creative and entrepreneurial realm. But my point is simply that we've seen this kind of fearmongering before, and it hasn't panned out. When one thing changes in the world, so do a lot of other things - in ways we cannot now predict.
This last point is actually really really important. Because the people who want to control every aspect of our lives use this uncertainty to do it. They will tell us that, because we can't say with certainty exactly HOW things will work out on their own, they should therefore step in and control things - never mind that their plans for centralized control ALWAYS fail, and with devastating results. This idea that we have to be able to spell out, with certainty, how things will work out absent some all-powerful overlords managing everything, is pernicious and deadly.
Here's the way I see AI's influence on our world unfolding:
I think that in the near future (next generation or so), societies will become more and more divided into two groups - "classes", if you like: Those who are capable of thinking and acting rationally and independently, and those who can only follow orders and regurgitate what they have been told. We're divided in this way already, to some extent, as we've all seen these past three years.
There's been this idea for a long time in science fiction that the rise of intelligent machines, and the capabilities of genetic engineering, would create a class division between those who have access to these technologies and those who do not. That those who can afford "designer babies" or who can afford the best AI tools, etc. will easily out-compete the rest of us. And I guess there's some truth to that.
But here's what that kind of thinking misses:
What makes human beings great - and what has made us so successful as a species - is NOT just our ability to make and use tools. It is NOT just our capacity for (endless?) improvements in efficiency, productivity, and quality of life. What makes us who we are are the things that can't be generated by machines, but can only be imitated by them. Things like: Our imagination, our passions, our emotions, our capacity to think independently, our loves and our hates. Really, it's our connection to God, however you experience or interpret that.
We've spent the past three years being shocked by the number of intelligent, well-educated, capable adults who were easily swept along into the biggest – and to so many of us, most transparent – psyop in our history. We've been baffled by how many of these people, many of them smarter and/or better educated than ourselves, were so easily fooled.
Many of us have remarked that it's not a question of intelligence. That some very very smart people got fooled by the Covid mind games. It's certainly not a question of education, as some of the most outspoken opponents of the lockdowns and forced medical interventions do not have advanced degrees. (I don't even have a bachelor's degree.)
So there's something else at work. There's some other quality that made us able to see through all the BS, to evaluate information as it came in, even to change our minds when new information came in. Whatever that quality is, it is something very special, and something that gives those of us who have it a tremendous comparative advantage over those who don't. Even if those who don't constitute the vast majority of humanity.
So when I see people getting worried about AI taking their jobs, or taking over the world... I don't dismiss those concerns, but those concerns are only one half of the picture. The other half is that we have something very precious, and I'm going to also say very valuable. Something that those who become dependent on AI to do their thinking for them will lose.
The way I see it, the most important distinction between people in the coming years and decades will be the extent to which they can think and feel for themselves. The extent to which they do not allow themselves or their children to be turned into living robots. Those who retain this ability will help to shape how the world is for generations to come – for good or ill, depending on who is involved – and the rest will be a second class of followers. I hope I'm being overly dramatic here. But however dramatic the distinction ends up being, it will be significant. IT ALREADY IS.
Those of us on this side of the divide need to change up our thinking. Instead of being fearful of what's coming, or dismayed at how many around us are willing, unthinking, victims of it, we need to recognize the profound advantage we have. We need to cultivate it, and help others to cultivate it. We can do this by building and supporting alternative education – which is experiencing a renaissance right now, thanks to the sheer insanity of schools' responses to Covid-19 – and by working to create parallel services that are NOT tied in to the IOT/IOB.
We are in a conflict between the mechanistic paradigm of what it means to be human, and the human paradigm of what it means to be human. Not only are we on the morally right side of this conflict, we are also on the winning side. Why? Because the mechanistic view is not sustainable. And people will come to realize that. They will come to find that they are hungry – starved – for something else.
We need to be ready to give it to them.
The technocrats always fail. They can cause incredible destruction on the way, but they always fail. There is a reason the archetypal villain is the mastermind and not the hero.
Having worked in IT for 40 years, I can tell you that ever since "remote control" technology was created, the people who do the technical work have been let go from well-paying, satisfying jobs and left to scramble, working sweatshop-level hours for some giant behemoth instead. (Working at Twitter is not the norm.)
EVERY time there's been a way to use technology to reduce labor costs and/or swallow up a competitor, it has been done.
Finally, I can assure you that today's level of technological competence, barring the "claims" of ChatGPT (all I hear are claims... that it makes business plans AND writes the week's worth of code, and the code "works" - I'm calling BS), is JUNK.
Unreliable junk.
PS: How about the "quality" of today's alleged "Customer Support" from a company? We've gone from actual humans who answer the phone to an uncooperative "press 1 for this, 2 for that" (no zero for operator), and then you're on the phone with someone in another country, or not... but they're all reading from scripts. Big companies, the IRS, you name it. The "auto answer" phone system was the worst thing ever.
Meanwhile, people in Midwestern states resorted to meth labs for lack of work... a book was written on this.