Viva Aviato!
Is There a Slippery Slope of AI Idiocracy at Our Bureaucracies? What We Could Learn From Mike Judge’s Silicon Valley and Erlich Bachman
The protests in support of Hamas, which have included the occupation and destruction of Columbia University property and threats of violence against Jewish students, have escalated with the occupation of Trump Tower. This was in response to ICE detaining Mahmoud Khalil and a more aggressive posture from the State Department.
Last week Axios reported on the State Department's announcement that it would use AI to review the social media accounts of foreigners to detect potential terrorist activity, in conjunction with the protests we see spreading from our campuses nationwide.
We are going to use the black humor and irony of Mike Judge's series Silicon Valley to illustrate how easily we are yielding our agency. No worries if you haven't seen it; I'll introduce you quickly. And to illustrate the hype around AI's rockstar status, you are going to be a member of my project team that takes on this assignment, but with a few twists. It's going to be a fun thought experiment, and you're going to learn how to put the AI hype into perspective, and hopefully understand some of the dangers of overinvesting trust in purely technical solutions.
A few things struck me about that announcement. For one, there seems to be a collective sigh of relief that AI is now the solution, because humans just can't handle all the complexity that life has to offer. Another element that snagged my attention was that AI is being offered partly to boost confidence in the efforts of our government, and partly as an attempt to glean some coolness off the DOGE efforts. And as a technologist, my brain went into overdrive with solutions based on existing techniques and technologies that could accomplish many of these tasks without hooking Grok up to our government data. We think of AI as an alien intelligence with capabilities that have never existed before, when it is really just more efficient at a few things, not omnipotent.
Why do I piss on the parade? Because when we are this enamored, we can be lied to; we allow ourselves to think that technology will absolve people from the responsibility of simply doing their jobs. Yet if they did their jobs, we wouldn't need AI to save the day.
Silicon Valley’s Visionary: Erlich Bachman
Do we really need AI at the State Department, or do we just believe in the AI Idiocracy worship of Silicon Valley Evangelists? I put it in those terms because, shockingly, you can learn a lot about our very recent rush to embrace all things AI and become Singularity groupies by watching Mike Judge's fantastic HBO comedy Silicon Valley.
In fact, a main character of the series, Erlich Bachman, had a product called Aviato that performed many of the functions the State Department's AI solution would entail, as implied by the Axios article. The irony is that Bachman's portrayal of his product was pretentious and highly exaggerated; Aviato didn't do all that well financially, yet Bachman thought it put him on par with Steve Jobs. It was brilliant in Bachman's mind, but as we'll see, his self-image was akin to cobbling together garbage cans and thinking you had constructed the next Iron Man suit. Powered by solar energy.
This is Erlich Bachman. Think of him as a 6' 2", gelatinous version of Danny Bonaduce, or a ginger-haired Jonah Hill with an ego that expands ever beyond the limits of the room he is in. His hyperbole is mixed with faux Shakespearean quotes, he's a lecher, and he's an insufferable poser who is a visionary while staring into the depths of his bong. He wears a T-shirt that proudly proclaims "I Know H.T.M.L. (How To Meet Ladies)". A handsomely roguish figure of the tech world is how he describes himself, while wearing an overcoat that barely covers his paunch and grey socks with sandals.
Bachman understands bullshit because he slings bullshit. To Bachman, pair ego, bullshit, and out-of-the-box thinking with technology and luck, and you have the formula for success. And in particular, it needs HIS flair for the dramatic. But that means someone else has to do the work, because Bachman has to meditate before he romances more investors at a mixer and then, after failing to romance the ladies, he needs to fire up his bong by his pool.
Bachman's self-professed genius was not his technical craft; it was his ability to mentor others. That is why he runs a Silicon Valley incubator. Out of his home. Where he charges rent. You get a desk in his dining room. But it's great, because you can soak up all the energy from the fellow technologists who share the fridge with you. But don't touch Bachman's quinoa.
Madam, you do not call a man a fool on the transom of his own home. A home that happens to be the world headquarters of a company keeping streaming video of a man who's about to drink his own urine online for tens of thousands of Filipinos. Does that sound like foolishness to you? So you can tell your clients, respectfully, that they can go fuck themselves.
Okay, which one of you MONSTERS put my artisanal butter in the FREEZER?! Mother FUCK!
I told them I’m pesca-pescatarian. That means that I only eat fish that eat other fish.
I am the founder of Aviato. And I own a very small percentage of Grindr. It's a men to men dating site where you can find other men within 10 miles of you interested in having sexual intercourse in a public restroom.
Aviato, or as Bachman pronounced it, "Ahh-vee-ahh-tō", with pseudo pan-European flair as though he were selling a wine or a cologne, was a stupendous software platform that monitored social media for people commenting on Frontier Airlines and would then assemble flight information for that specific area. Bachman claimed to have sold the concept for a "low seven figures". All of this was murky, but that doesn't matter, as Bachman constantly parlays his "mystique" into further leverage when introducing venture capital firms to his gang of genius rebel technologists who live and thrive in his bungalow - sorry, I meant to say his "incubator".
Bachman and the other characters embodied the absurdity that many of us encounter in business, where egos remain unchecked, where people get desperate for success, and where common sense takes a back seat to visionaries who work the Steve Jobs Effect of weaving tales of a fantastic future. The fruition of that vision is just around the corner, if only you invest more money so the demo product can be scaled. Success is only a matter of time. But the window of opportunity is slamming shut, so act now.
Go on YouTube today and you can barely distinguish the hype you hear from Bachman's comical elevator pitches. I want you to compare three statements and see if you can guess who made each one: Erlich Bachman, Elon Musk, or Julie McCoy, an actual evangelist for AI.
Quote 1
The age of artificial scarcity is ending.
The age of engineered abundance is here.
And you're not too late.
You're right on time.
Quote 2
Since the dawn of time…Mankind hath sought to make things smaller.
Quote 3
We are on the event horizon of the singularity.
Quote 2 is Bachman's, but it's not too far afield from the others. There are a lot of breathless pronouncements that cloud our thinking. Engineered abundance? Is that different from the abundance produced by the engineering of the Industrial Revolution? Is the Singularity vested with such power that it will pull all things toward it, metaphorically?
So does the mere fact that Elon or Julie says something make that concept not bullshit? Silicon Valley illustrates with great humor the absurdity that arises when people believe their aura of brilliance lights up a room at all times. Bachman happens to be deluded in a different manner than the other characters; in many cases it's sheer bad luck that prevents him from obtaining greater wealth. On the show, those with great wealth also exhibit crazy behavior, and they get away with it not because they are geniuses destined to change the world, but because they merely possess money.
Here is how Bachman himself laments his luck.
One of the best scenes summing up the insane lingo and nonsense the tech industry is mired in comes when Bachman approaches a more traditional, business-minded CEO, Jack Barker, in hopes of influencing the outcome of an investment. Jack's radar goes off immediately at Bachman's pretentious intro, and he rightly interjects:
I've heard all the engineering team's complaints, so before you waste time with some freeform jazz odyssey of masturbatory bullshit, just tell me what concrete information you have.
There is the operative word: concrete. As Satya Nadella recently said, we have self-hacked benchmarks that don't really tell us much about AI, and yet we have these claims of sentience, consciousness, and super-intelligence.
Beyond Bad Metaphors
It is the fanfare, the salesmanship, and the slick visuals that blind us. The metaphor of "a new type of intelligence", as though AI is a kind of alien prescience we are about to encounter, does us a disservice.
There are tech industry veterans of AI who are very concerned.
Jaron Lanier is a pioneer in the field of virtual reality. His company VPL Research was the first to sell VR goggles and wired sensory gloves, in 1990. He held the position of visiting scholar at Silicon Graphics, a company which focused on high-performance computer graphics. He is currently a research fellow at Microsoft, and has been vocal about the dangers of AI algorithms used in social media and the risk of yielding our agency to technology so readily. He is not a Luddite by any stretch. Jaron has an even-handed view of the technological achievements of AI, but he is a staunch proponent of the greater public gaining an understanding of the risks of AI. Those risks are not The Singularity or Skynet. Humans are the risk to other humans.
I share the belief of my cybernetic totalist colleagues that there will be huge and sudden changes in the near future brought about by technology. The difference is that I believe that whatever happens will be the responsibility of individual people who do specific things. I think that treating technology as if it were autonomous is the ultimate self-fulfilling prophecy. There is no difference between machine autonomy and the abdication of human responsibility.
https://reasonandmeaning.com/2014/04/02/jaron-lanier-on-transhumanism/
With regards to the current “behavior” of AI, Jaron is also clear:
There have been a number of very famous instances of chatbots getting really weird with people. But the form of explanation should be to say, “Actually, the bot was, at that point, parodying something from a soap opera, or from some fanfiction.” That’s what’s going on. And in my opinion, there should be an economy in the future where, if there’s really valuable output from an AI, the people whose contributions were particularly important should actually get paid.
https://unherd.com/2023/05/how-humanity-can-defeat-ai/
Many who make their living creating, selling, and sharing their work on the internet are rightfully concerned that a huge company like OpenAI will just rifle through web pages, or be fed digitized versions of their works, and they will have trained ChatGPT for free. OpenAI will then earn money from all the derivative work based on human endeavors. It's important to remember that ChatGPT has been trained on humanity's knowledge - it's been fed all sorts of subject matter via text. The Large Language Model (LLM) that processes the questions ChatGPT's users submit uses a predictive model and pattern-matching techniques to create the resulting output. Surprising combinations arise that give the semblance of creativity, but in the end this is a combinatorial process. LLMs are very good at spotting patterns, and at applying their training in concert with newly acquired patterns to construct text that aligns with the question they have been supplied.
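To make "predictive model and pattern matching" concrete, here is a deliberately crude sketch: a bigram model that "predicts" each next word from the one before it. Real LLMs are vastly more sophisticated, but the point the sketch illustrates stands - the output is a recombination of patterns found in the training text, not understanding. The corpus and function names here are my own invention for illustration.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record which words follow which word: the crudest possible predictive model."""
    model = defaultdict(list)
    words = text.split()
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model, start, length=8, seed=0):
    """Emit text by repeatedly 'predicting' a plausible next word.
    No understanding is involved: only recombination of observed patterns."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        candidates = model.get(out[-1])
        if not candidates:
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Every word the toy model emits came from its training data; only the combinations are new. That is the combinatorial process, scaled down to a dozen lines.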
But note what Jaron is saying. AI is the result of the collective efforts of humans. It should be viewed that way, which puts us in a different mindset than treating this emergent technology as an inevitable convergence of powers resulting in god-like capabilities.
And I think treating AI as this new, alien intelligence reduces our choices and has a way of paralysing us. An alternate take is to see it as a new form of social collaboration, where it’s just made of us. It’s a giant mash-up of human expression, opens up channels for addressing issues and makes us more sane and makes us more…
Another pioneer in the field of Artificial Intelligence who has concerns regarding populist views of an emergent digital god is Michael Wooldridge. Wooldridge is the Ashall Professor of the Foundations of Artificial Intelligence in the Department of Computer Science at the University of Oxford, and a Senior Research Fellow at Hertford College. He has published over 400 research articles on AI and is the author of a book titled "The Road to Conscious Machines".
I’ll let the title of one of his chapters relay his views on AI developing into a conscious super intelligence: The Singularity is Bullshit.
For Wooldridge, the responsibility for morals lies with us, not with rules embedded in AI. Not only is there the inevitable question of whose morals would be enshrined, but there is a severe risk that we could come to perceive a faultily implemented moral foundation in AI as the final arbiter of morality. It's automated, so what the machine decides must be the moral choice. And machines will not be held to account. Can you imagine if the only price an AI paid for instructing people to drink bleach were being shut off? Wooldridge also goes against the grain and states that despite the very impressive performance of LLMs, they are, by nature, isolated from the real world and therefore cannot perform reasoning beyond pattern matching. He further states that our current crop of AI applications has been bolstered by the availability of data and processing power, and this can overshadow the need to further evolve fundamental AI architectures.
His interview with Jonathan Bi is on YouTube. It's well worth your time.
Welcome to the Team
Ok, we are going to do a little thought experiment, and congratulations: you're on the data forensics team that the State Department has enlisted to scour social media posts and identify potential threats like Mahmoud Khalil. Don't worry if you're not a techie; you'll be paired up with the best. Here's our challenge - we don't get to use AI. More specifically, we are going to use old-fashioned technology such as document indexing, pattern matching, search engine technology, and maybe some boring SQL database technology like Postgres or SQL Server that has been around for decades. Perhaps some old Google technology called Bigtable, from the good old days of Big Data.
Why are we not availing ourselves of the latest and best? Because we don't need it. To counteract the freeform jazz odyssey bullshit of Erlich Bachman, we're going to do things old school, like Sean Connery did in the movie The Rock. We're going to rely on our cunning and common sense.
I'm exaggerating, as you can tell. But I set out this thought experiment to demonstrate that the State Department shouldn't have to wait for AI to do its job. Nor the DoJ. With existing, proven tech we could accomplish a lot. And this is what competent people with good instincts should be doing already.
The Axios article stated that AI would be used to read social media accounts. That's nothing new. In fact, many of us fell prey to pattern matching back in 2015 and 2016 during the Presidential campaign, flagged for using hashtags and repeating campaign phrases without any human reporting us. At the time people swore it was bots or AI, yet it was merely advanced pattern matching - what Google had been doing for years. You see, your tweets are just text. A machine reads them very easily and categorizes them much the same way Google or Bing does. I used search and indexing technology back in 2009-10 at my company, when we were under electronic discovery orders due to lawsuits. The 2008 collapse put many of our tenants in financial distress, and they lawyered up and filed lawsuits to end their leases.
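To give a feel for how un-magical that 2015-16 flagging likely was, here is a minimal sketch of matching posts against a watch list with plain regular expressions. The hashtags and phrases below are hypothetical stand-ins; the point is that this is pattern matching, not intelligence.

```python
import re

# Hypothetical watch list; in 2015-16 the real triggers were campaign
# hashtags and slogans repeated across many accounts.
WATCH_PATTERNS = [
    re.compile(r"#\w*maga\w*", re.IGNORECASE),
    re.compile(r"drain\s+the\s+swamp", re.IGNORECASE),
]

def flag_post(text):
    """Return the watch-list patterns a post matches.
    Plain pattern matching, easily mistaken for 'AI'."""
    return [p.pattern for p in WATCH_PATTERNS if p.search(text)]

posts = [
    "Great rally today #MAGA2016",
    "Lovely weather for gardening",
    "Time to drain the swamp, folks",
]
flagged = [(post, hits) for post in posts if (hits := flag_post(post))]
```

Scale the watch list to a few thousand patterns and run it over a firehose of tweets, and you have something that looks eerily like a bot-detection AI to the people it flags.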
This is the same challenge the State Department faces: find patterns by matching text. In our case, our relationships with our tenants were governed by leases running 70-100 pages. Our negotiations with the tenants prior to their signing were also under consideration, so every single email, report, and spreadsheet had to be reviewed by our legal department in order to meet the legal requirement of submitting all material relevant to the lawsuit.
The first lawsuit took us 6 weeks: tracking down all mobile devices, preserving copies, and securing hold orders on emails, documents, and our accounting systems. I enlisted my team of 3 dudes to whittle this timeframe down, and we proposed using our enterprise search system to index all items. We got it down to 3-4 days, depending on the availability of executives from upper management and the C-level. We simply sat down with the Legal Department, reviewed their list of terms, and turned them loose to validate the results once we were done.
The beauty was that they could perform the queries to find relevant documents and emails themselves. With Google style search. They didn’t need us other than to set things up.
This was in late 2009 and 2010.
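Today you could reproduce that e-discovery setup in miniature with SQLite's built-in full-text index, FTS5 (assuming your Python's bundled SQLite was compiled with FTS5 support, which standard builds are). The file paths, document text, and search terms below are invented for illustration:

```python
import sqlite3

# An in-memory full-text index standing in for the enterprise search system.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE docs USING fts5(path, body)")

# Hypothetical documents swept up under the hold order.
conn.executemany("INSERT INTO docs VALUES (?, ?)", [
    ("mail/2009-03-12.eml",
     "Tenant requests early termination of lease citing financial distress"),
    ("reports/q1.xlsx",
     "Quarterly occupancy figures and rent rolls"),
    ("mail/2009-04-02.eml",
     "Counsel advises the lease termination clause does not apply"),
])

# Legal's term list becomes a Google-style query anyone can run themselves.
rows = conn.execute(
    "SELECT path FROM docs WHERE docs MATCH ? ORDER BY rank",
    ("lease AND termination",),
).fetchall()
paths = [r[0] for r in rows]
```

That is the whole trick: index once, then non-technical staff type terms and get ranked results, exactly as our legal department did.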
This was not AI. But it is very similar to what the State Department has to do in order to find clues of potential illegal activity, or potential witnesses. Let's think this through. There are pain points, but they are not resolved with AI; they are solved with access to the underlying databases. And before starting this type of search, you attempt to narrow down the field considerably. Again, this isn't technology doing the thinking; it's common sense.
First, the State Department, if they have done their job, should have profiled those who have entered the country and for what reason. You know the terminals at the airports where you go through customs? That is some form of database, or linked databases. And we're not looking at millions of people; we are going to profile people based on their country of origin. Yeah, I know that's not politically correct, but that's how I roll. Attributes are attributes. If we have a database, we should be able to get a subset of people who entered the country within the past year from the regional countries we think will cast a big enough net. Once we eliminate candidates, we can go back and expand our search.
Now, if there are, let's say, 20,000 to 25,000 foreign students at Columbia, the number of records we have to compare is small. Let's break it down further, because we are looking for grad students only. Ok, our State Department database may not have that element, but we can reasonably assume an age of 21 and older, then narrow down who is potentially attending Columbia by matching names and age ranges.
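Those narrowing steps are nothing more than WHERE clauses. Here is a sketch against a hypothetical entry-records table - the schema, names, countries, and visa codes are entirely made up, since we obviously don't know the real database layout:

```python
import sqlite3
from datetime import date

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE entries (
        name TEXT, birth_year INTEGER, country TEXT,
        entry_date TEXT, visa_type TEXT
    )
""")
# Invented records; a real entry database would hold millions of rows.
conn.executemany("INSERT INTO entries VALUES (?, ?, ?, ?, ?)", [
    ("A. Example", 1998, "Jordan",  "2024-08-15", "F-1"),
    ("B. Example", 1979, "Jordan",  "2014-01-10", "B-2"),
    ("C. Example", 2001, "France",  "2024-09-01", "F-1"),
    ("D. Example", 1997, "Lebanon", "2024-07-20", "F-1"),
])

# Narrow by region, visa class, age, and recency: plain filters, no AI.
cutoff_year = date.today().year - 21
candidates = conn.execute("""
    SELECT name FROM entries
    WHERE country IN ('Jordan', 'Lebanon')
      AND visa_type = 'F-1'
      AND birth_year <= ?
      AND entry_date >= '2024-01-01'
""", (cutoff_year,)).fetchall()
```

Four filters, and millions of rows collapse to a working list. Loosen or tighten the WHERE clauses to expand or shrink the net, exactly as described above.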
So far we haven't needed AI - we're just narrowing the number of potential foreign students based on databases maintained by ICE and the universities themselves. Yes, there are complications if you don't have direct access and have to import data, but these are easy things to accomplish. At worst you will get lists where you are matching by names, and that can be further complicated by someone altering records at a university, but again, we are not talking about hundreds of thousands of names.
Do you see the picture that is emerging? You could generate a subset of government data quickly, get data from Columbia, and start collating easily. No AI. You could do it very quickly in the cloud for just a few hundred dollars with Google's cloud suite and Bigtable, as I said. So you wouldn't even need to purchase computers.
It’s not AI. It’s pattern matching and indexing. Old tech. And we have a rough idea of the number of people we need when we start the social media segment.
Reading social media posts is easy enough. The biggest challenge is determining who the owners of the accounts are. If someone is foolish enough to use their real identity on social media and engages in illegal activity, they will be found quickly. However, you can still maintain a relatively anonymous account, so there is guesswork involved here. In the end, a human has to make the final decision about whether a social media account belongs to an activist. Social media posts can be indexed and searched - Twitter has done this with technology called Lucene, layered over its MySQL databases, to present relevant results. The system is built to locate content.
What about photos? Photos contain geolocation information if the user hasn't disabled that feature on their phone. Getting to that information is not a matter of AI; it's again indexing technology that collates photos to a region. Media files from accounts can be indexed, and the geolocation could potentially put me at the scene of a protest. That information can be indexed and queried much like a database, after some preparation.
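Assuming the latitude/longitude pairs have already been pulled from the photos' EXIF data (ordinary libraries do that; no AI required), collating photos to a region is simple bucketing. This sketch snaps coordinates to a coarse grid; the photo IDs and coordinates are hypothetical:

```python
import math
from collections import defaultdict

def grid_key(lat, lon, cell_degrees=0.01):
    """Snap a coordinate to a grid cell roughly a kilometre across."""
    return (math.floor(lat / cell_degrees), math.floor(lon / cell_degrees))

def collate(photos, cell_degrees=0.01):
    """Group (photo_id, lat, lon) records by grid cell: an index, not AI."""
    buckets = defaultdict(list)
    for photo_id, lat, lon in photos:
        buckets[grid_key(lat, lon, cell_degrees)].append(photo_id)
    return buckets

# Hypothetical EXIF coordinates; the first two fall near Columbia's campus.
photos = [
    ("img_001.jpg", 40.8075, -73.9626),
    ("img_002.jpg", 40.8079, -73.9630),
    ("img_003.jpg", 40.7128, -74.0060),  # downtown, a different cell
]
buckets = collate(photos)
```

Query the cell covering the protest site, and every photo taken there falls out of the index.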
Again, this is not AI. We have had this data indexing technology for many years. The point is that these practices do not require such a narrow set of skills to perform. And the greater point is that without AI you could already be at work, and with human intelligence - contacts, interviews, and yes, infiltration - you could build profiles and further refine the search to determine which foreign students should be of concern. You can determine who has interacted with whom without AI, just by analyzing the tweets or posts. The social media platforms know who wrote the posts, the date and time, who has responded to them, and who has viewed them. The posts are what you would call structured data. This is all database and indexing type work.
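The who-interacted-with-whom analysis is just counting over that structured data. A sketch, with hypothetical account names and reply metadata:

```python
from collections import Counter

# Hypothetical platform metadata: (author, account_replied_to) per post.
replies = [
    ("user_a", "user_b"),
    ("user_a", "user_b"),
    ("user_c", "user_b"),
    ("user_a", "user_d"),
]

def interaction_counts(replies):
    """Count directed interactions between accounts: bookkeeping, not AI."""
    return Counter(replies)

def most_engaged_with(replies):
    """Rank accounts by how many replies they received."""
    received = Counter(target for _, target in replies)
    return received.most_common()

counts = interaction_counts(replies)
ranking = most_engaged_with(replies)
```

Two Counters, and you can already see which accounts sit at the center of a conversation. That is the "social graph" stripped of its mystique.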
So far we don’t need AI, we don’t need pre-crime predictive capabilities, we just need data collection and people to perform their duties.
So where could AI help? Certainly with sentiment analysis by reading social media. AI is great at understanding text and could help identify who has responded with violent language. And a form of AI distinct from the LLMs we have described could more effectively analyze photos, narrowing results for facial matching.
But in the end a human has to decide whether to take action. And my point here is that we have enough data that action can be taken without invoking AI. Show up at the protests, arrest people, fingerprint them, and get their identities.
Old school style.
Agency
There is a lot of foolishness in hoping AI is going to do our jobs for us. We have had so much unrest with these demonstrations that, in a sense, the local authorities should already know who is involved. The protestors are there in masks, yet the moment they broke trespassing laws they could have been rounded up, and they would now be under scrutiny, as opposed to us hoping a Minority Report-style data surveillance system can do the job for us. It is against the law to wear a mask at a protest. You don't even need 20-year-old database technology for the first steps.
In fact, in 2023 Gov. Hochul announced that AI would be used to monitor social media accounts for a rise in antisemitism. Has that worked so far, or is it being used for other means that haven't been disclosed? It certainly didn't lead to any activity that quelled protests and damage in 2024, did it?
As a technologist, I am astounded that AI is looked upon as critical to resolving this situation, because it allows authorities to remain disengaged from where the activity is occurring. It also confers a sense of precision that allays our fears for the short term. "Good, they will use DOGE-style tactics to find people who are being violent." But why aren't we asking why we can't use traditional law enforcement to determine the identity of those involved? These incidents are not flash-mob occurrences; they are longer-term, concentrated gatherings. You see them: show up. We see the law being broken: arrest them. That will gain more than sifting through social media accounts.
But that is the danger of believing the freeform jazz odyssey of bullshit. We still have to remember that we live in the real world.
I think Jaron also nailed it in one of his other quotes:
“The danger isn’t that a new alien entity [AI] will speak through our technology and take over and destroy us. To me the danger is that we’ll use our technology to become mutually unintelligible, or to become insane, if you like, in a way that we aren’t acting with enough understanding and self-interest to survive.”