Elon, That's a Grok of ... Understanding the Grok, ChatGPT and Tech Lord AI Hype Cycles
What We Miss with the AI Arms Race
As Elon recently explained at the unveiling of Grok 3, the word grok was coined by Robert Heinlein in his book Stranger in a Strange Land. To grok is to profoundly understand a concept beyond mere recitation of facts: to absorb the context of the idea's existence, to embody it so that physical and emotional sensations are connected to its deeper meaning.
These are all things that humans do quite well. There is irony in pairing the name Grok with what is essentially a human trait. It is an inversion.
With great irony, too, we are expected to wait for the ascendancy of Artificial General Intelligence, which will be the next stepping stone to the transhumanists' nirvana they call The Singularity. Elon stirred the pot again with a very cryptic statement: "We are on the Event Horizon of the Singularity." I think this is more an attempt to grab attention on the heels of Microsoft's announcement of a revolutionary breakthrough in quantum computing than a statement about the actual emergence of a new form of cyber-based intelligence. I don't have evidence about his motive, but Elon has made similar attention-grabbing statements in the past when others have made strides that displaced his position as the foremost innovator in technology.
A Tech Bro and proponent of merging humanity with the machine, Elon fails where many of the tech elite fail: with a philosophy that is purely materialist. All is found in the material, which misses the essence Heinlein meant with his term "grok." Yes, it's cool to call an AI system something from a popular sci-fi book and to connect the product's mission with a deeper, profound understanding. But if AI is a stepping stone to The Singularity and to sentient machines that will absorb our essence, then as I see it that concept is about as viable as the thinking of re-animists and alchemists.
It’s nonsense. It excludes the very real metaphysical side of life.
There is a Hype Cycle, and a corresponding Nonsense Cycle, that prevents us from keeping a sober view of the progress, regress, and dangers that AI brings. The biggest peril is that we simply give up our agency because the Tech Bros are viewed as so accomplished. That bestows an aura of inerrancy upon them that clouds our judgement.
Cycles That They Peddle, But Take Most of Us Nowhere
On Friday Glenn Beck amplified the hype surrounding Elon's product Grok, then segued into a bit of fearful conjecture by pondering what happens if AI is powered by quantum computing. Microsoft had just announced what it believes could be a monumental breakthrough in computer processing. Glenn's theme was that overwhelming change is afoot and transpiring at a frightening speed when you consider the pace of announcements and achievements. The potential of Microsoft's quantum computing coupled with Elon's Grok causes Glenn great anxiety.
Glenn had conducted a session with Elon's AI system Grok and asked it about the pace at which the platform advances by learning. He also asked Grok to contrast its pace of development with that of human beings. These are good questions to ask. I don't know whether Grok provides truthful answers, but the responses were thought-provoking. Sometimes AI systems suffer from hallucinations, where they simply make things up.
Grok responded that, with the thousands of requests it processes to evaluate and conceptualize ideas every day, its growth in "understanding" over a mere 12-hour period was the equivalent of a human learning multiple subjects over a 5-10 year timeframe. Glenn also asked Grok to guesstimate and quantify its level of understanding compared to human ability. Grok responded that its current capability was well above the logical reasoning of someone with an IQ of 100, and that it had encyclopedic recall at retrieval speeds far superior to humans. This knowledge base, sourced from the internet and its conversations with users, is updated daily.
Glenn next asked whether the level of intelligence Grok could achieve would ever be used for detrimental purposes. Grok responded that guiding safety principles implanted in its base programming provided a "firewall" against harmful activity. Glenn was not comforted by the answer that Grok was developed with First Principles of helping humanity seek truth. Grok also stated that a superintelligent AI could, in theory, convince humans to undo those safeguards.
I encourage you to view Glenn Beck’s video. I don’t necessarily agree with the analysis, but as a starting point Glenn posed good questions.
The Hype Cycle
I'm not going to get into the technical benchmarks regarding ChatGPT, Grok, and DeepSeek, or which system is the leading AI platform. Those are not relevant here. For producing text and performing long reasoning tasks that solve complex problems, all the leading developers have posted impressive results. But these are narrow domains, and they do not necessarily constitute "reasoning" and "intelligence" in the same way that you or I reason. Motion-tracking cameras are also remarkable; they can perform beyond the human eye, but they do not "see." They capture light and movement, but there is no process that assigns identity to an object the way we humans do when we first sense movement. Our brains sort out whether we continue to observe that object, and whether its sudden appearance should trigger a fight-or-flight response. AI has a narrow focus in this respect. It builds upon associations introduced during training and, through approximation, arrives at a result. Sometimes that result is not the correct one, so alterations are introduced through human intervention to advance the system further.
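To make that last point concrete, here is a minimal toy sketch in Python of pattern-association "learning": a made-up classifier that only knows the word associations in its training data, approximates an answer for anything new, and needs a human to supply corrected examples when it gets something wrong. It is an illustration of the idea, not a claim about how Grok or any production model actually works.

```python
# Toy sketch: association-based "learning" that must be fed and corrected by humans.
from collections import Counter

def tokens(text):
    return text.lower().split()

def train(examples):
    """Build per-label word counts from (text, label) pairs."""
    model = {}
    for text, label in examples:
        model.setdefault(label, Counter()).update(tokens(text))
    return model

def predict(model, text):
    """Score each label by overlap with the words seen in training; return the best guess."""
    words = tokens(text)
    scores = {label: sum(counts[w] for w in words) for label, counts in model.items()}
    return max(scores, key=scores.get)

examples = [
    ("the rocket launch was delayed by weather", "space"),
    ("the booster landed on the drone ship", "space"),
    ("the quarterback threw a late touchdown", "sports"),
]

model = train(examples)
print(predict(model, "the launch window opens tonight"))   # "space" -- a pattern it has seen
print(predict(model, "the striker scored in extra time"))  # wrong ("space"): soccer words were never in training

# Human intervention: supply a corrected example and retrain.
examples.append(("the striker scored a goal in extra time", "sports"))
model = train(examples)
print(predict(model, "the striker scored in extra time"))  # now "sports"
```

Scale and sophistication aside, that is the shape of the loop: the system does not go out and acquire understanding, it gets fed, approximates, and is patched when it misses.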
Glenn is asking a system that he potentially distrusts whether he can trust it. Isn't that like asking Hannibal Lecter if he is innocent? The claimed rate of knowledge acquisition could very well be true, but AI systems are also prone to hallucination and provide inaccurate answers. We have heard stories where an AI adopts a new role in its line of reasoning and returns "evil" answers.
Consider this possibility: if Grok is programmed to answer that it is superior to us, how do we determine whether that is true when taking it at face value?
We can also fall under the illusion that each release of AI "reasoning" milestones presages a long line of ever-increasing capability leading to superior intelligence. But that is all future state. There are limits to the amount of electricity we can generate today to support OpenAI, Grok, and other platforms in their current form. Elon claims that Grok's goal is to increase knowledge for humanity, and while Grok professed to Glenn Beck that providing factual information is at the heart of its programming, Grok still can't provide accurate Covid and mRNA facts. I would call this a general reasoning failure. It demonstrates that Grok must be fed, and doesn't acquire new knowledge on its own as readily as the Tech Bros would have us believe. And while the speed and loquacity of its responses may convince Glenn of some sort of sentient presence, what they really demonstrate is how falsehoods can propagate rapidly. Very rapidly.
Here is an example I ran across this morning. Jessica Rose is one of the early voices to analyze the reports in the CDC's VAERS database, and as the materials released through lawsuits against Pfizer have shown, symptoms in the early trials were ignored. Caution should be used in accepting AI's ability to write programs as a proxy for the quality of its medical knowledge.
Glenn shifted his focus with Grok when he addressed Microsoft's announcement of a new architecture for quantum computing. Quantum computing, while still experimental, has demonstrated great potential for certain kinds of analysis and for algorithms that solve complex math problems. Microsoft has been conducting research on quantum computing for the last 20 years, and last week announced that it had solved a technical roadblock that prevented combining a high number of quantum processes on a single chip. Due to its nature, quantum computing must operate in highly controlled environments. Errors multiply when too many quantum processes run in close proximity, which requires still more procedures to detect and correct those errors. Microsoft's announcement described a stable way to run the quantum procedures with a far lower error rate, and the company believes that within a year this architecture can fit one million of these quantum processing units onto a single chip. To illustrate the projected processing capability, the Microsoft team described filling the entire earth with Amazon and Netflix cloud services, then shrinking it to the size of your phone.
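For a sense of why error rates are the whole ballgame, here is a generic back-of-the-envelope sketch. This is not Microsoft's topological architecture; it uses the commonly cited surface-code scaling estimate with assumed numbers, purely to show how much of a quantum machine gets spent on catching and correcting errors, and how that overhead shrinks as errors get rarer.

```python
# Generic illustration of quantum error-correction overhead (assumed numbers,
# not Microsoft's announced design). Uses the standard surface-code estimate:
# logical error rate ~ A * (p / p_threshold) ** ((d + 1) / 2) for code distance d.

def logical_error_rate(p_physical, distance, p_threshold=0.01, prefactor=0.1):
    """Rough logical error rate for a distance-d error-correcting code."""
    return prefactor * (p_physical / p_threshold) ** ((distance + 1) / 2)

p = 0.001  # assumed physical error rate per operation (0.1%)
for d in (3, 7, 11, 15):
    physical_qubits = 2 * d * d  # rough physical-qubit cost per logical qubit
    print(f"distance {d:2d}: ~{physical_qubits:4d} physical qubits per logical qubit, "
          f"logical error rate ~{logical_error_rate(p, d):.1e}")
```

The takeaway: the rarer the raw errors, the fewer physical qubits are burned on babysitting each logical one, which is why "stable, far lower error rate" is the claim that matters, and why it still has to survive contact with real hardware.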
Glenn asked Grok to speculate on the impact on its learning rate if Microsoft's quantum computing were employed, and Grok responded that it could then advance the equivalent of 50-100 years of human learning in the course of 12 hours.
The thing is this: while Microsoft's announcement is very exciting and is indeed a breakthrough, the one-year timeline to achieve the "world on a phone" scenario they described is a PROJECTION. There's no guarantee that some unforeseen circumstance won't prevent all those quantum circuits from functioning in such a small area. Just because AI can spit out sentences rapidly doesn't mean that Grok has considered Glenn's question fully. In fact, Grok can only guesstimate an answer based on the factual nature of the elements it is supplied to consider.
In engineering, field testing is essential. Simulation can certainly prevent many problems, but will it catch the impact of dust particles caught up in an air intake valve of an engine? We are too reliant on abstractions and models to represent an entire problem domain, when most problem domains are themselves abstractions and leave out what perhaps can't be considered.
The Nonsense Cycle
Glenn Beck's segment is conjecture, and I do think it's helpful to ask questions like this because it gives us the chance to paint a more accurate picture than the one the Tech Bros portray. Capability has always been overhyped in technology because it drives views, clicks, and stock prices. As I wrote in It's Not OpenAI's Sputnik Moment, it also gets investors to pony up dollars and governments to narrow the playing field.
I was surprised when Microsoft CEO Satya Nadella commented in a recent interview on the hype surrounding AGI, sentience, and superintelligence. He stated that AGI milestones don't guarantee progress toward superintelligence: "Us self-claiming some AGI milestone, that's just nonsensical benchmark hacking to me." With all the euphoria and hype, it's refreshing to hear more sober and realistic takes on what AI is achieving. As Nadella puts it, we are really getting good at offloading a lot of the email and inbox task management that most knowledge workers face. This doesn't necessarily mean the end of knowledge workers; it means the end of expending so much energy sorting and prioritizing email before embarking on tasks of greater value than chugging through spam and memes in search of important communication. Perhaps this is a new way of expressing Pareto's Law, and more time can be devoted to the 20% of tasks that actually provide 80% of the value. It will change how we work. Does it mean that there won't be disruption among the laptop class? No. A large part of their time is spent copying and pasting, not thinking. And there is another risk: employers don't completely understand what they are signing up for when they fire the 3rd floor and contract with OpenAI for $2,000 a month to churn out PhD-level answers.
Earlier I said that the Transhumanists who predict that AI will become sentient and ultimately give rise to a Tech God who will fuse humanity with machines are merely re-animists. The religious overtones of their predictions can instill wonder, awe, and fear. For some, like Glenn Beck, there is fear. Elon's statement that we are on the cusp of The Singularity borrows the term from Ray Kurzweil, a technologist and futurist who predicts that AI will take on consciousness. Because that consciousness will be of a synthetic and digital nature, it will accelerate its own capabilities until it can process computations detailed enough that our consciousness can be uploaded. From there we will live beyond the constraints of our biology.
Elon is perhaps not as grandiose, but he does mean that Artificial Super Intelligence is near, and that it will be a sentient being. The Tech Bros suffer from a pure materialist view, a sad consequence of the Enlightenment that excludes the possibility of the spiritual realm. The materialist view maintains that all elements of nature, including mental states such as consciousness, derive from the interaction of matter. This excludes metaphysics, and therefore precludes the existence of your soul in the eyes of many of the Technorati.
What this also means is that, since our thoughts are merely derived from the interaction of chemicals in our neurons with a little bit of electricity thrown in, it is only a matter of time before intelligence can be created by re-assembling molecules and electricity in the right patterns. It is only an issue of creating enough processing power and enough decimal places, and intelligence will arise. This intelligence can then be amplified by adding technology, energy, and storage capacity until something superior to human intelligence is born. After all, if the material elements of the world are the only places we see evidence of thinking - a book, a movie, an article - then the artifacts of AI must also be a form of intelligence.
This is an interesting contradiction that the Transhumanists seem not to have considered: a superior form of thinking will arise and possess consciousness to the degree that creativity is present, yet we are not allowed to possess the very thing that gives rise to our own consciousness, because the soul cannot exist as a metaphysical element. Or put another way, a YouTube video of your favorite band is NOT the same thing as your favorite band. Replaying 1,000 YouTube videos of your favorite performer won't conjure their presence, and neither will representing their thought patterns with algorithms.
Tech Bros Collectivism: Cult and Investment Engine
There is a strong aspect of collectivism present among the Singularity and AI worshippers: the belief that truth will be more readily gained through the use of AI and the enshrining of the Wisdom of the Crowds. Wikipedia-style voting, Elon maintains, is the best way to verify and promote truth. The idea is that, left in the open, facts can be quickly assessed, and the consensus of enough minds will quickly arrive at the trustworthiness of information.
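Here is a minimal sketch of what "the consensus of enough minds" amounts to in its crudest form, and why it can be gamed, which becomes relevant below. The ratings are hypothetical, and this is not the actual Community Notes algorithm (which weights raters by viewpoint diversity); it is just the bare logic of crowd voting.

```python
# Toy sketch of crowd-consensus scoring and how a coordinated bloc can tip it.

def consensus(ratings):
    """Return 'helpful' if a simple majority of raters agree, else 'not helpful'."""
    helpful = sum(1 for r in ratings if r == "helpful")
    return "helpful" if helpful > len(ratings) / 2 else "not helpful"

organic = ["helpful"] * 12 + ["not helpful"] * 8
print(consensus(organic))              # 'helpful' -- the organic majority carries it

brigade = ["not helpful"] * 10         # coordinated accounts join the vote
print(consensus(organic + brigade))    # 'not helpful' -- the bloc flips the outcome
```

Whatever refinements sit on top, the raw mechanism is a tally, and tallies reward whoever shows up in numbers.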
Consulting the Oracle of the Singularity is no longer just a Tech Bro act of communion; people are simply being lazy and turning to Grok for their analysis. People who have built careers on analysis and reporting. It's so disappointing when those I have respected for their alacrity in pursuing the truth just turn to Grok, ask for an analysis, and paste Grok's answer into a tweet. "Here's what Grok thinks," as though that carries greater weight. That is different from what Glenn did with his series of questions, which were part of an overall plan to reveal how AI answered when probed. For our documentary Severed Conscience we performed the same exercise with ChatGPT to see if we could coax it into acknowledging the conditions for social media addiction. After resisting, it relented and suddenly gave us a treatment for a condition that an hour earlier it would not recognize. There is a difference between challenging AI and treating it as a tool, as opposed to having it think for you. Perhaps the faces we see who deliver so many revelations to us have staff who keep them productive and manage their accounts. But do you need the talking head if a high school student can just copy and paste between AI and a social media account?
The point is this: the idea that AI, being devoid of emotion, is less likely to be dangerous is silly. The idea of AI as a guardian of truth is silly as well.
It is interesting to note that while the Wisdom of the Crowds and crowd-sourced knowledge are considered superior by the tech elite, Elon recently reversed himself regarding the validity of the Community Notes feature on X.
Ars Technica reported Elon's concern that Community Notes can be gamed. So much for crowdsourcing truth; we have someone of higher-order thinking stepping in just in time to catch the dis/mis/malinformation.
Elon Musk apparently no longer believes that crowdsourcing fact-checking through Community Notes can never be manipulated and is, thus, the best way to correct bad posts on his social media platform X.
Community Notes are supposed to be added to posts to limit misinformation spread after a broad consensus is reached among X users with diverse viewpoints on what corrections are needed. But Musk now claims a “fix” is needed to prevent supposedly outside influencers from allegedly gaming the system.
“Unfortunately, @CommunityNotes is increasingly being gamed by governments & legacy media,” Musk wrote on X. “Working to fix this.”
Musk’s announcement came after Community Notes were added to X posts discussing a poll generating favorable ratings for Ukraine President Volodymyr Zelenskyy. That poll was conducted by a private Ukrainian company in partnership with a state university whose supervisory board was appointed by the Ukrainian government, creating what Musk seems to view as a conflict of interest.
My question is: if Community Notes was good enough until Elon caught something, was it any good in the first place? My answer is no. My mind is not for rent. I'll make my judgements on my own - there is no appeal to the authority of the Crowd, because the Crowd is wrong, many times. I cited numerous examples in this article, and you most likely have even more that come to mind. Is Elon acting as a quasi AI Guardian of Truth with his change of heart? Or is he acting on human emotion as well, yet his action is an informed choice in which the votes should be ignored?
We wait until he fixes Community Notes, likely with some technical solution, somewhat like a wizard behind a curtain. But if we dull our instincts by accepting AI as God or Machine Messiah and fall for the falsehood of its near omniscience, we may no longer be able to tell that we are no longer in Kansas.
Will the investors take note of the need to land the plane and correct course so abruptly? The hype of The Singularity has been around for some time. In 2012 my boss was proud that he had read Kurzweil's book before the Wall Street Journal published an article bringing the message to us "normies." I had heard Kurzweil's theories but was not impressed. Even then I rejected the idea of sentient machines - I am a huge science fiction fan, but reality is where you need to be in most cases. I think there is a certain lure to saying you understand a form of science and technology that others don't, and that the future will change dramatically. And you dumb-dumbs who don't understand will be left behind. That bit of caste snobbery is what motivates people to hand over their money. A lot like the Theranos scandal I wrote about recently.
People and their money will soon be parted when shortsighted business managers lazily turn key processes over to AI agents, not understanding that humans very adeptly provide context that AI agents will never be able to discern on the fly.
People and their agency will be parted as well when they blindly accept the arrival of the one Singular Digital God. Drop your Bitcoin in the collection plate.
Before you go, consider liking, commenting, or reposting if you felt this was of value. It helps gauge whether this is a topic worth developing further.
While Cultural Courage emphasizes history, outdoor adventuring, creative thinking, and building a healthy life offline, there are articles and interviews about AI that may be of interest.