AI in the Valley
Silicon Valley, on the one hand, is the birthplace of incredible innovation. On the other, it might be teetering on the precipice of Skynet becoming sentient. Crawling out of lo-fi hacker holes and escaping from gleaming high-def tech farms, like a resurrected Terminator, is artificial intelligence.
“The world is going through a transformation,” said Peter Leyden, entrepreneur and tech futurist, at a 2022 conference at San Francisco’s Moscone Center. “Something that will be remembered for a long, long time to come; 50 years, 100 years, centuries ultimately.” Leyden went on to explain that over the next 30 years in particular, humanity will see fundamental systemic changes and the emergence of new technologies, like AI.
Silicon Valley is home to 1,400 AI startups worth more than $40 billion collectively, according to Crunchbase. And it doesn’t look like it’s going to slow down any time soon.
It’s a heady time in the Valley, with angel investors circling any startup that even whispers AI. From new ways to produce apps, music, nuclear bomb triggers, drugs (recreational and medicinal), screenplays and animated dreams to AI travel agents, the technology seems to hold the key to our future. But there is plenty of social anxiety around AI, and it’s worth pausing to wonder whether, in this moment, happening in real time, we are about to open Pandora’s box.
“AI is like a superpower,” said Leyden. “It will essentially be able to take what humans can do and supercharge it. An example of this is the coming of perfect simultaneous language translation.
“Robots are going to be doing a lot of stuff and we’re going to welcome them, we’re not going to be scared of them,” continued Leyden. “They’re going to save our asses, frankly, rather than take our jobs.”
Institutions like the Silicon Valley Artificial Intelligence Research Institute (SVAIRI) have been established over the past decade to help regulate companies in the AI realm. “We focus on how automation can create jobs and improve the human experience,” states the institute’s website, svairi.com.
The lifeblood of the internet is content, and AI can produce unlimited essays, every manner of art, deepfakes and software code, and is quickly approaching a passing grade on the Turing Test. It’s no wonder that part of the writers’ strike in Hollywood concerns being replaced by AI. If AI can knock out a 30-minute rom-com in seconds, what can a group of 20-somethings in a writers room contribute?
A Sit-down Q&A with Bay Area Local, Former College Professor and Futurist Chris Hables Gray
Chris Hables Gray, Ph.D., has lived in the Bay Area for almost 50 years and, until a year ago, was a continuing lecturer at Crown College at UC Santa Cruz. Now he travels the world lecturing about AI and machine learning. Gray believes that until humans learn to make good choices, machines don’t stand a chance. In other words, AI is not the problem (much like guns); it’s the humans who use it who are causing the world-ending problems. What would happen if we turned our technology toward a more just society instead of destabilizing economies? What does a just future even look like? Gray is a vagabond, a futurist and somebody who has very strong opinions on where this country is heading: revolution.
What do you make of the Google engineer, Blake Lemoine, who claimed the chatbot he was working on was sentient?
The claim by the philosopher Benjamin Bratton and a computer scientist that these systems might be intelligent in a new way we haven’t seen before is remarkably uninformed. As the work of Peter Godfrey-Smith on octopus and other animal intelligence shows, good thinking about the origins of consciousness is exactly about such limited types of information processing. There is a great deal of research by scientists and theories from informed philosophers in this area. Simple animal cognitive systems are sometimes quite similar to what these chatbots do (one discrete calculation/action after another), except they involve physically interacting with the real world in increasingly complex ways, not exchanging texts with people who desperately want to believe. See Godfrey-Smith’s Metazoa: Animal Life and the Birth of the Mind, for example, which I am reading now.
Does AI want my job?
No. AI doesn’t want anything. The current wave of AI panic is being driven by what some call Stupid AI, what I call Viral AI and what everyone calls machine learning. These programs are superficially competent, but there is no depth to them. They have no way of thinking; they repeat patterns. Because of this, they are incapable of generating anything truly new, in the sense of something that complex judgment would evaluate as justifiably new. For years there have been programs that could report the stats on baseball games and even make it sound like a real reporter, but they couldn’t, and still can’t, tell you about the real dynamics of the game, let alone the players. The chat programs seem to know a lot, but what they “know” is just output from their massive data sets, which are mainly the internet. They fail miserably even at tasks you’d think would be easy to organize logically, like business reports. Eventually they will get that down, but for many years, if not always, these programs will not be able to differentiate good info from bad. Most people can’t, after all. Garbage in; garbage out. People will die because they take these systems’ medical advice. How do we know? People already die because they take internet medical advice. This just repackages it.
Why is the zeitgeist so focused on AI being alive?
People are often convinced their cars are sentient, and I mean old cars like I can fix. If it quacks like a duck it isn’t necessarily a duck. It needs to walk like a duck, eat like a duck, shit like a duck and fuck like a duck (baby ducks) and die like a duck. There was a robot duck made hundreds of years ago that actually could do most of those to some degree. Not a duck. This is one of many fun stories in my book Cyborg Citizen.
Are you worried?
I am not worried one little bit about these pure digital systems becoming conscious. The rapture of the nerds, the Singularity, is based on the stupidest reasoning possible, and the current wave of chatty bots is even less likely to rule over us than the mechanical duck that shat all over the King of France’s palace. How could they? We don’t know a great deal about consciousness, but we do know it has to be embodied. It involves continual feedback sensing with the outside world and the inside world of the body. These systems don’t even pretend to do this; they just chat.
So, AI isn’t dangerous?
They are profoundly dangerous. Their illusions of competence have already led to AI programs being given decision-making powers over criminal sentencing, police responses to domestic violence (I’m researching this with colleagues in Spain) and other robo-processes.
How does this play out?
One of the bigger dangers of this machine-learning AI, only alluded to in passing, is that people will want to give over command and control of real weapons to it, including nuclear weapons. There is some evidence that some drones have been programmed (perhaps with a tank profile) and set out to kill autonomously. I don’t know for sure, but it is inevitable. Of course, when this happens, many big plywood tanks will be blown up. Most AI in battle now is about sorting information, and the results are mixed and easily confounded by an enemy who knows what you are looking for and spoofs it. But to give these systems real decision-making power is a massive mistake, as it was back in the 1980s when President Ronald Reagan wanted to put Star Wars under AI command. Computer scientists mobilized against this and, no doubt, we’ll mobilize again. My first book, Postmodern War, has a whole history of this.
What about the people that are allowing this to happen?
Bureaucrats are cowards and collectively they are stupid. The desire to dodge responsibility and give it over to systems is hard for them to resist.
Are we living in an age of disinformation?
Every new wave of communication tech produces dangers and opportunities. Some are worse than others, in that their affordances (what they tend to make possible) lean one way or another. Graeber’s history of money, Debt: The First 5,000 Years, shows how money was invented by centralized authority to create debt and enslave people. Money is a form of communication tech, after all. And writing started out as a way to keep track of everyone’s debts for the 1%, but people found other uses for it. More recently, massive print tech helped drive the Enlightenment-branded collapse of the aristocracy in waves of revolution. It spread information all over the place, including hate-driven irrational conspiracies. Radio helped farmers learn better practices, was integral to Hitler’s rise and also made FDR seem like a nice guy in your living room. TV? Well, you’ve seen TV. Now we have the interweb. At first it was so out of control, in a good way, that the US military washed their hands of it, and it spread revolutions across Eastern Europe and the Arab world. Then the right found ways to use it, and now everyone acts like it is the end of the world. As if hateful lies aren’t always spread through society. The problem is that FB and Google and such make bank on spreading the worst fearful shit.
What’s the light at the end of the tunnel? Is there one?
So again, it comes down to politics and society. If we keep this system where these increasingly powerful techs are mainly mobilized to make a few very annoying assholes richer than God was ever thought to be – I’m looking at you, Musk, Bezos, Zuckerberg – then it is game over. If the tech is mainly focused on war and profits, then we will see not just more lies but climate collapse. The big carbon companies are polluting (and profiting) more than ever. China is planning two new coal plants a week because the current regime wants the country to keep getting richer in the short term, while destroying the planet (at least in terms of human habitation) in the long run.
What’s the bottom line on this?
So, bottom line, exponential increases in science and tech turn old political and social problems into the incredible civilization-ending near future that is looming right ahead. Don’t blame the stupid AIs, blame the stupid humans.
How do you put one step in front of the other when you get out of bed in the morning?
I’ve been an anarchist-feminist revolutionary for 50 years. It is my avocation. I’ve been arrested and beaten up dozens of times, but what gives me hope is that I have helped catalyze massive successful protests and am part of a wonderful ongoing movement (or movement of movements) around the world. In my life I have seen the second-greatest empire in the history of the world collapse, although Putin is trying to put it back together again, the stupid, evil Humpty Dumpty that he is.
Is revolution inevitable in America?
There have been revolutions in many countries I have visited: Portugal, the Czech Republic, Egypt and so on. Often nonviolent, as in only the good guys get killed. When I first went to Spain, Franco was alive and it was a crime for three people to speak Catalan together on the street. Now the Catalans are halfway to independence. Culturally we have made tremendous strides as well. Dope is legal! As a kid, I listened on my little Japanese Sony radio, hidden in my bed, to black people and their white allies being murdered in the South for trying to vote. Now most people think you should at least pretend not to be racist! Gays are out in the streets and can even get married! Most people know the climate is being destroyed but just can’t get up the gumption to overthrow the people profiting from it. We’ll have to eventually, just to survive.
So, you’re hopeful?
When I was 19, Nixon won the election with two-thirds of the vote. Trump couldn’t even get half. And if you look at specific issues, most people in America are as liberal as the Europeans, who are so civilized. Not perfect, of course, but reasonable about health care and the environment and so much more. In the US, being an evil empire makes things harder – the military-industrial complex is very much in charge, as good ol’ Ike warned us – and the weird cultural right wing, declining but vicious and desperate in its decline, is still strong.
Do you see technology as being a source of our undoing?
If tech wasn’t driving problems so much faster than change happened in the past, humanity would have a good shot at growing up and changing the world enough to not just survive but thrive. As it is, I predict it will be a close call. We just have to speed up our maturity.
How can people buy your books?
The best way for people to buy my books is to get the two that are free on the web – as are many of my articles – and buy the others used. I think all but the latest are at UCSC. Sadly, I get no money from any of that. If you have a bit of disposable income and want a nice clean copy, you can go to the publisher’s website, Routledge, and buy straight from them. I’ll get a dollar or so.
Government-funded Robotics Education & Research Labs in the Bay Area
Cabrillo College: Dept of Engineering
Carnegie Mellon University Silicon Valley
City College of SF
De Anza College: Dept of Engineering
Electrical Engineering & Computer Sciences – Cory Hall
Foothill College: Dept of Engineering
Lawrence Livermore National Laboratory
Monterey Bay Aquarium Research Institute
NASA Ames Research Center
Naval Postgraduate School: Controls & Robotics Lab, Spacecraft Robotics Lab
Open Source Robotics Foundation
Saint Mary’s College of California
Sandia National Laboratories
San Jose State: Charles W. Davidson School of Engineering – Aerospace Engineering
Santa Clara University Robotics Systems Lab
SF State School of Engineering
SRI International: Headquarters
Stanford University: Aerospace Robotics Lab, BDML, BioDesign Lab, Biomechatronics Lab, CHARM Lab, Lentink Lab, Multi Robot Systems Lab, Robotics Exploration Lab, SAIL, SHAPE Lab, VAIL and CARS
UC Berkeley: AI Lab, Automation Lab, Center for Automation and Learning for Medical Robots, Citrus & Banatao Institute, High Performance Robotics lab, Human-Assistive Robotic Technologies Lab, Robotic Learning Lab
UC Davis: Robotics, Center for Human/Robotics/Vehicle Integration and Performance, Dynamics, Controls, Vehicles and Robotics Dept., Human Systems Engineering Dept, Integration Engineering Lab
UC Merced: Robotics, Computer Graphics Lab
UCSC Jack Baskin School of Engineering, Computer Science and Engineering Dept
UCSF Technology Services, School of Medicine
University of the Pacific
Commercial Robotics Education & Research Labs in the Bay Area
Google X Lab
HAX San Francisco
Omron Adept Technologies Inc.
Robert Bosch LLC
Toyota Research Institute