Chris Hyams on responsible AI and leading with humanity

October 14, 2025

On Sept. 30, Dr. Lindsay Whorton interviewed Chris Hyams, former CEO of Indeed and advocate for responsible AI, for The Holdsworth Center’s Place the Ladder leadership speaker series. Below are excerpts edited for length and clarity.

Background and path to Indeed

At 20 years old, Chris Hyams was arrested on felony drug charges and faced up to 20 years in prison. He entered a pretrial intervention program, spent one night in jail, and got sober within a year. He’s been sober for over 37 years. That experience gave him clarity: “I got this gift of being able to live out the last several decades of my life the way that I did, and I have to do something meaningful with that.” 

He began his career working at an adolescent psychiatric hospital on the chemical dependency unit—the same kind of program that had helped him. He fell in love with the work. When his wife moved to rural Vermont, he followed, couldn’t find a job in recovery, and started substitute teaching. That turned into a full-time special education role. “I was like, I’m just going to teach forever.” 

Then life shifted again. His wife went to grad school in Los Angeles, and Chris gave himself two years to try becoming a rock star.  That dream did not pan out. They moved to Houston, where he took free computer science classes at Rice University. “In 1993, I thought, let’s try it. And it turned out it made sense to me.” 

He didn’t plan to go into tech leadership. “Managers are people who can’t code,” he once told his boss. But when faced with the alternative—working for someone else—he stepped up. Within a year, he went from coding full-time to managing a 300-person engineering team. “I didn’t want to do it, but I figured out that if I could make 10 people better at their jobs, we could get more stuff done.” 

That mindset—of using his experience to benefit others—carried him all the way to becoming CEO of Indeed. And even then, he said, “I never had a five-year plan. I just did the next right thing.” 

He recently stepped down from Indeed to pursue full-time work on responsible AI and is teaching a class on the subject at Huston-Tillotson University in Austin, where he sits on the board of directors.

The dignity of work

I was taught at a very young age that work is a sacred thing. It’s a paycheck, but it’s also where we find meaning and purpose in our lives. It’s a source of pride and dignity. My grandfather said you should treat every single job like it’s the most important job in the world. Whether you’re a NASA astronaut or you’re sweeping the floor at NASA, it is the most important job in the world.

The average American spends 90,000 hours of their life at work. A really amazing study from a few years ago found that your direct manager has as much impact on your mental health as your spouse, and more than your therapist. It makes a big difference to have someone who cares for you and is looking out for you.

Leading with humanity at the core

Everyone says that whatever business they’re in, it’s a people business. Sadly, what I saw in the tech world in 1996 was that not a whole lot of people in positions of leadership had a deep core of humanity as the most important thing in their life. But it had become, for me, the center of mine.

(The Holdsworth Center) exists because in the education world, we don’t have things like this. Most places don’t invest in helping people figure out how to care for people and to help them. I believe that if people are happy – the term we use is psychologically safe – they do great work. If they feel connected to the mission and purpose and the people they’re helping, they do great work.

There’s a whole bunch of people who believe that’s a load of crap and that you should be looking over your shoulder at all times and that people get more stuff done when they’re afraid.  Unfortunately, a lot of people work in an environment like that.

Responsible AI is the single civil rights and human rights issue of my lifetime.

– Chris Hyams, former CEO of Indeed

The promise and challenges of AI

Responsible AI is the single civil rights and human rights issue of my lifetime. I realized if it’s really that big and I know something about it, that’s what I should be spending all my time and energy on. I was able to do that at Indeed at a pretty big scale for a long time, but at some point it became clear there was so much else – housing, education, healthcare, the criminal justice system – all these areas where this stuff is going to be playing out.

If you’re not losing sleep over it, you’re underestimating the impact – on the environment, on jobs, on worker exploitation, on creative theft. It is negatively impacting lots of things. We urgently need to work on this stuff now.

But there’s really incredible stuff coming out of it too. The analogy I keep using is the internet. If you think back 30 years, everything has changed between then and now. We’re still living and going to school and teaching and doing the same things, but we do everything either a little bit or a lot differently. The thing with AI is that we’re going to see 30 years of change crammed into three or four years, and that’s too fast. So my biggest concern is actually just that we’re not ready for it.

This is one of the most powerful creations ever. It is going to be extraordinary in terms of the opportunities for education and learning, and for making certain things that used to be incredibly difficult so much easier. An individual running a business can do something that used to take 30 or 40 people.

And I think if people don’t grasp the fact that it also has its challenges, it’s going to be a really difficult period of time.

I’m hopeful this is a big enough issue that people will feel the urgency to get involved. If people mobilize, I think we can get a lot more of the benefits and a lot fewer of the downsides.

AI’s impact on the workforce

If you look at that 30 years of change because of the internet, a handful of jobs went away, but way more jobs were created. If you worked in the answering machine industry in 1990, you lost your job. But if you look at all the people employed today making smartphones, selling cell phone plans, and developing every single app and service in the world, it dwarfs the number of people (who lost their jobs). So, I believe in the long arc of this.

What we should be teaching kids

I’m a huge believer in the core tenets of a liberal arts education. The most important thing I got was critical thinking: the ability to look at something and ask, “Why is it the way it is?” – and then to figure out how to get my own answers, and to be adaptable. Most people do not end up working in a field directly related to what they studied as an undergraduate. My fear is that we’ve become hyper-specialized. Kids who are good at STEM are told they don’t need to read that Toni Morrison novel or learn that history. Likewise, kids who say, “I hate math” are told that’s okay, they don’t need to worry about that stuff. I think we need to worry about all of it.

Teaching is training for leadership

Working as a special education teacher is probably the thing that was most helpful in managing engineers. Any group of 10 engineers will all be very, very different. Coming in with the expectation that everyone is not like me – they don’t learn like me, they don’t think like me, and they have other things that keep them up at night – I got more out of that in my two years of teaching than anywhere else.

AI reading list

Want to dive deeper into AI? Chris Hyams offered the recommended reading list below.

  • Dr. Fei-Fei Li is a Caltech PhD, a professor at Stanford, and considered the “Godmother of computer vision.” Her memoir “The Worlds I See” offers an inside view into the recent history of neural networks, the technology underlying today’s generative AI large language models (LLMs). It is also a deeply personal story of a young immigrant to the US: The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI
  • Stephen Wolfram is a physicist, mathematician, computer scientist, entrepreneur, and prolific writer and speaker who has dedicated the past 40+ years of his life to understanding, and helping others understand, nature and intelligence. He is the creator of Mathematica, and also of Wolfram Alpha, the knowledge engine that powers Siri. Anything of his is worth reading, but for those who want to understand how LLMs like ChatGPT actually work, this article is exceptional. It’s also available as a book on Amazon, but the entire text is free on his website: What Is ChatGPT Doing … and Why Does It Work?
  • Anthropic is one of the major “foundation model” AI companies that compete directly with OpenAI, Google, and x.ai. The founders were early researchers at OpenAI who left because they believed OpenAI’s commitment to responsible AI was taking a backseat to profits. They are committed to open research on AI safety, and regularly publish reports that are unflattering to themselves and others in the name of transparency. While all of these companies have problems, Anthropic is the most dedicated to responsibility. Their latest big piece of research on AI safety is quite chilling and worth reading: Agentic Misalignment: How LLMs could be insider threats. All of their research is a great resource: Research at Anthropic