What does it look like when teams let AI take the lead? In this episode, James Cham, founding partner at Bloomberg Beta and expert on how technology reshapes work, joins Rebecca Hinds, Head of the Work AI Institute at Glean, and Bob Sutton, Stanford Professor Emeritus, to explore the outlier teams pioneering AI-first workflows, the end of traditional SaaS, and the profound risks and rewards of AI in the workplace. Get a front-row seat to the future of work, from managing fleets of bots to navigating trust in an AI-powered world.

What you’ll learn

  • How radical teams are leveraging LLMs to accelerate innovation 
  • The new dynamics of collaboration between humans and AI agents 
  • Why the next software business model may look nothing like SaaS 
  • The biggest risks as organizations become more AI-dependent

Subscribe for more

https://www.youtube.com/@gleanwork

Timestamps

  • 00:00 – The search for AI-first teams and new ways of working
  • 01:10 – How LLMs are changing team structure and management
  • 09:43 – Rethinking software business models in the AI era
  • 13:04 – Trust, risk, and the potential downsides of AI
  • 20:05 – What’s likely to stay the same in the future of work

See the latest from the Work AI Institute at Glean

Rebecca Hinds (00:00)

Welcome to Intelligence Real and Imagined from the Work AI Institute at Glean.

Rebecca Hinds (00:05)

This is the show where we sort through what's real, what's hype, and what actually works with AI at work.

Rebecca Hinds (00:12)

In this episode, we're digging into how AI is starting to put real pressure on organizational design, how teams are restructuring around AI, how decision-making and ownership shift when work gets faster, and what new business models might replace the old SaaS playbook.

Rebecca Hinds (00:33)

Our guest today is James Cham, founding partner of Bloomberg Beta, Bloomberg's venture fund.

Rebecca Hinds (00:39)

He's spent his career backing and studying how new technologies reshape teams, products, and business models.

Rebecca Hinds (00:48)

And he has a front row seat to how AI is changing what gets built, who builds it, and how work gets organized.

Rebecca Hinds (00:57)

As always, I'm also joined by my co-host, Bob Sutton, Professor Emeritus at Stanford and one of the founding members of the Work AI Institute.

Rebecca Hinds (01:06)

I'm really excited for this conversation, so let's go ahead and dive in.

Bob Sutton (01:14)

The question of what an organization is supposed to look like.

Bob Sutton (01:16)

I'm actually, I don't think I've ever been confused about it, but I think I'm confused.

James Cham (01:21)

I am genuine, by the way.

James Cham (01:22)

Of course, this is only happening at the very, very tail.

James Cham (01:27)

They're only.

James Cham (01:28)

I spend all day looking for these kinds of teams, and I found three of them.

James Cham (01:35)

I literally spent all day, I'm like, have you heard of someone doing this?

James Cham (01:38)

And then you only see a few.

James Cham (01:39)

So this is not normal practice.

James Cham (01:42)

Yet.

Rebecca Hinds (01:42)

Interesting.

Rebecca Hinds (01:44)

Does that team activity factor more into your decision-making to invest now than it did before?

James Cham (01:52)

There's almost a way in which I don't really care what you're doing.

James Cham (01:55)

I kind of care.

James Cham (01:56)

If you figured out the future, then I'll work with you.

James Cham (02:00)

We'll figure out some business.

James Cham (02:01)

But if you figure out how to.

James Cham (02:03)

that aggressive way of working, that's what I spend all my time looking for.

Bob Sutton (02:07)

Okay, all right.

Bob Sutton (02:08)

Well, you really are looking for outliers.

Rebecca Hinds (02:10)

And as we think about enterprise AI in particular, if you were to envision, say, one year, two years out, what's the best possible outcome of AI impacting our organizations?

James Cham (02:23)

I'm seeing this only in a few small places right now, but I do feel like there are small teams that have genuinely figured out how to use the LLMs for what they're best at.

James Cham (02:36)

I think that's not widespread yet. Even now, I spend all day looking for this.

James Cham (02:43)

I spent all day looking for people who are using LLMs in a genuinely aggressive way, where they've pulled people out of the process as much as possible.

James Cham (02:52)

And they figured out how to use the LLMs in ways that the LLMs are good at because they're bad at lots of things.

James Cham (02:58)

All the normal things that people complain about, those are totally true.

James Cham (03:02)

But at the same time, they are getting cheaper, they are getting more effective along certain dimensions.

James Cham (03:08)

And then the people who see the future and are able to adapt the way that they work to what the LLMs are good at.

James Cham (03:16)

Right.

James Cham (03:16)

I feel like that's going to be very disconcerting and very, very, very weird.

Bob Sutton (03:23)

So these are, I assume, small teams that have done this from birth.

James Cham (03:29)

Yes. So there's a company that I invested in where the CTO and co-founder has 100 engineers.

James Cham (03:39)

He said, we can't, actually. He tried to implement it, and he did the same thing everyone else did.

James Cham (03:44)

You get these marginal improvements.

James Cham (03:45)

So, so then he literally pulled four people out of the organization.

James Cham (03:48)

He stuck them in a different co-working space.

James Cham (03:50)

They have a cave, they have manifestos, and it's almost like they've got to buy into a religion, a fundamentally different way of working, in order to see any of the results.

James Cham (04:02)

And I think the way that they're working will be widespread one to 20 years from now.

James Cham (04:09)

Right.

Bob Sutton (04:10)

So that's the classic way that a radical innovation starts in an existing organization: you pull them off.

James Cham (04:15)

That's right.

Bob Sutton (04:16)

And pretend they're a brand new organization.

James Cham (04:18)

That's right.

James Cham (04:19)

And then this guy Justin McCarthy, to his credit, he's like, look, you've got to be all in with me on this way of thinking.

James Cham (04:27)

It may or may not work, but you've got to just be all in and jump in.

James Cham (04:30)

And to be honest, he's like, I feel like there are a couple of small teams at Anthropic.

James Cham (04:37)

There are a couple small teams at Microsoft where they've genuinely adopted that view.

James Cham (04:42)

And then suddenly they're managing these workflows and it's as if they're managing 8, 10, 50 employees.

James Cham (04:49)

And so it's exhausting, because it's not like managing the way a normal manager manages, with, say, the occasional one-on-one. Instead, these things are constantly coming back to them and saying, hey, this works now, or, would you like to give me feedback on this?

James Cham (05:02)

And so that shift, I feel like we're just tasting the beginning of.

James Cham (05:07)

And I've not seen anyone scale it past five to eight people.

Bob Sutton (05:11)

Okay, so the word workflows in that situation, it's taken me a while as a regular organizational theorist, because I think of a human managing other humans, but you're literally managing a bunch of AI bots.

James Cham (05:25)

That's it.

Bob Sutton (05:27)

There's no humans.

James Cham (05:28)

Well, and then the humans who work with you on the side, you guys collaborate, but then the way you collaborate is unclear.

James Cham (05:33)

And by the way, I've only seen this work in software development too, right?

James Cham (05:36)

So I've not seen this anywhere else, but it feels like, I mean, there's a way in which software.

James Cham (05:41)

The way a friend of mine puts it is like code goes first, right?

James Cham (05:44)

Like, software development is clear enough and has enough nice properties that if you can figure it out for software development, then you can take those lessons and apply them everywhere else.

Bob Sutton (05:54)

So, I mean, the inversion of that is when Uber drivers are managed by an algorithm, basically.

James Cham (06:02)

Sure.

Bob Sutton (06:03)

And then I guess when we have Waymo, it's an algorithm, you know, overseeing robots, literally.

James Cham (06:10)

Right.

James Cham (06:10)

So what's interesting about it, though, like, if you think about the Uber case as it was designed 10 years ago, 

James Cham (06:15)

versus the Uber case as it is now, I bet you that a modern Uber manager would do a better job than I would.

James Cham (06:21)

Right

James Cham (06:21)

Because a modern LLM Uber manager might actually be like, oh, this person's kind of tired.

James Cham (06:27)

I will write a note and say something because, like, the.

James Cham (06:30)

I mean, I think one of the durable truths about the models right now is that they have higher EQ than at least a 75th percentile guy.

James Cham (06:39)

Now, men are admittedly quite clueless compared to the overall population, but definitely it's one of those cases where its ability to read into situations and capture nuances is actually quite good and surprising.

Bob Sutton (06:53)

So one of the people that we've interviewed and known for a long time, Lindsey Cameron, she's a Wharton professor.

Bob Sutton (06:59)

And for her dissertation, one of the things she did was she was an Uber driver, and she studies basically having an algorithm for a boss.

Bob Sutton (07:05)

And essentially Uber drivers often like having the algorithm because to your point, it's more reliable, it's more fair.

Bob Sutton (07:12)

And the other thing is that there's ways to game all bosses, and the ways to game the algorithm are really consistent.

Bob Sutton (07:19)

Like, it doesn't vary from.

Bob Sutton (07:20)

Because you always have the same algorithm, it's like you always have the same boss.

Bob Sutton (07:23)

So there are advantages.

James Cham (07:24)

Yeah, yeah, yeah.

James Cham (07:25)

I mean, yeah, I could totally see that.

James Cham (07:27)

Right.

Rebecca Hinds (07:28)

And when you talk, James, about another religion, as people are buying into this new way of working, is it reflexive use of AI?

Rebecca Hinds (07:37)

Is there something beyond that that they're buying into?

James Cham (07:39)

Okay, so now in the case of this.

James Cham (07:43)

So one of the teams, I think, thought pretty hard about trying to codify what they do and they had a couple of ideas, especially around not handwriting any code and not reading any code.

James Cham (07:54)

And so that idea, for a software developer, is super distasteful.

James Cham (07:58)

It seems ridiculous, right?

James Cham (07:59)

My whole life, I've been a better software developer because I could actually look at this thing and understand it faster than a normal person, or understand the structure quicker.

James Cham (08:08)

And then the idea that you would take that out of your core process was just very hard, right?

James Cham (08:13)

And like they said, look, we're going to go all in on this, right?

James Cham (08:17)

We're also going to go all in on like a lot of scaffolding around testing and then we're going to go all in on like looking for workflows or loops where a person might manage and design it, but is not a blocker at any one of the steps.

James Cham (08:31)

And the moment you pull that off, right, then suddenly the thing goes much faster.

James Cham (08:34)

Because there's a way in which, I don't know, I shouldn't call it dumb, but there's a way that people talk about ROI right now that seems like the same thing I would have done if I were measuring IT projects in 2005.

James Cham (08:49)

And that seems obviously wrong in part because right now we're on the edge of discovering and figuring out things.

James Cham (08:55)

And so you're looking for these higher payoff experiments so that you can then describe to everyone else, oh, this thing works well, you should all work this way.

James Cham (09:03)

And that's like a different thing than, oh, we'll save 20 cents and we replace the call center this way or that way.

Bob Sutton (09:09)

So this is why it has to either be done in a skunk works or it has to be done in a startup.

Bob Sutton (09:15)

And so that's sort of.

Bob Sutton (09:19)

Do you have any more to add about this?

Bob Sutton (09:21)

Because it's such a weird outlier.

Bob Sutton (09:23)

Either it's the future or it's going to flame out.

Bob Sutton (09:25)

Right?

James Cham (09:25)

That's right.

James Cham (09:26)

That's right.

James Cham (09:26)

No, no, I know.

James Cham (09:28)

I, I say this to these guys all the time.

James Cham (09:30)

This could be totally wrong or this could be clearly the future.

James Cham (09:36)

The other thing that they do is the nature of their collaboration between people is different, because it turns out, when they are working so fast with the LLMs, the normal way of the two of us looking over some code repo or talking no longer makes sense; your little loop with the agents is so fast that you don't really figure out how to work with them.

James Cham (09:58)

So the other thing that they do a lot of is they do a lot of having the agents learn lessons.

James Cham (10:06)

They do a lot of send the LLM off to look at some other open source code base or read some documents and then come back and say, here are the lessons that are interesting for the organization or for the way that we run and here are some things that we might actually even incorporate into our workflows.

James Cham (10:21)

I think that changes the nature of dependencies.

James Cham (10:26)

Suddenly I don't.

James Cham (10:27)

I can't.

James Cham (10:28)

Like, that's why I want to talk to the two of you, because I feel like the moment that that happens, then the old normal assumptions around organizations change.

James Cham (10:37)

Right.

James Cham (10:38)

And I don't really know what it means, but it feels like if it works, it means something profound.

Rebecca Hinds (10:44)

Let's talk about software as a service.

James Cham (10:46)

Yes.

Rebecca Hinds (10:47)

It's dominated the last 20 years in terms of software and investment.

Rebecca Hinds (10:52)

As we think about AI, what do you think the dominant model is going to be?

James Cham (10:57)

Oh, I think it's a really open question.

James Cham (10:59)

I think it's important to remember that software as a service is not just a valuable relationship between a company and its customers; in some ways, software as a service was also a terrific financing hack. A bunch of very smart VCs realized that the risk of a company going from $1 million to $10 million to $50 million is less than what the world thought.

James Cham (11:22)

A company might look like it's losing money, but because it had established a whole set of patterns with its set of customers, there was actually much less risk.

James Cham (11:30)

And so my old boss, David Cowan, I think was the first one to articulate that to me.

James Cham (11:35)

And I think in general the first one to really get that software as a service is not just a commercial thing, but it also is a different way of underwriting a business model.

James Cham (11:44)

And then I think that old pattern may be near its end, in part because customers get more demanding, but also because my ability to copy, or I don't want to say copy, my ability to adopt software and create software on my own might change that.

James Cham (12:04)

Suddenly we're in a very different, uncomfortable world for software companies.

James Cham (12:13)

I'll tell you what people want.

James Cham (12:14)

The dream always is that you figure out some way to align yourself with the incentives of your customer and then you figure out some way to say, oh, I will share in some outcome.

James Cham (12:27)

It's still really hard, right?

James Cham (12:28)

It's still really, really hard.

James Cham (12:30)

And it's unclear to me whether it's even going to be doable in a consistent way.

James Cham (12:34)

Because of course, sort of I'm like, I made you $50 million.

James Cham (12:39)

You're like, well, you didn't really make me $50 million.

James Cham (12:41)

You made me $30 million.

James Cham (12:42)

And consider the work I did. That whole negotiation feels really hard.

James Cham (12:47)

And so the fear is that we end up with something that looks more like utility computing.

James Cham (12:52)

The utility model is where I make money for some amount of work.

James Cham (12:58)

The problem with it is that our incentives are not aligned, that I'm constantly just trying to get you to use my compute more rather than get some real outcome that's helpful for you.

James Cham (13:08)

And so those are probably the two poles.

James Cham (13:13)

I think those are the two things people consider.

James Cham (13:15)

There's a third one, which I think is a different business entirely, in which you say, oh, the software that I've created is so valuable that rather than selling it to other people, I will just run a business based on it.

Bob Sutton (13:27)

Right, right, right, right, right, right.

Bob Sutton (13:30)

And that's SAP and Oracle, isn't it?

James Cham (13:31)

It feels like that's still out there as an option, so.

James Cham (13:33)

But we're not.

James Cham (13:34)

But I. I don't know.

Rebecca Hinds (13:35)

And so as you're advising startups, what do you tell them in terms of how to think about pricing?

James Cham (13:40)

I think you always tell them they're not charging enough because that's almost always true.

James Cham (13:45)

Right.

James Cham (13:46)

And then there's a little bit of, you know, sort of the world is shifting so much that you're just paying attention to other people in your market.

James Cham (13:55)

As far as what the norms are for a specific market, that's, I think, where we are right now, because this is a little bit of a timing thing: even if some sort of utility pricing made the most sense, if you jumped too far ahead of everyone else, you'd create these problems.

James Cham (14:13)

And so it's very market-specific, at least right now.

Rebecca Hinds (14:18)

What's the worst possible scenario of AI?

James Cham (14:21)

Oh, there's so many bad things that can happen.

James Cham (14:22)

Right.

James Cham (14:23)

Like sort of.

James Cham (14:25)

And it's everything from the worst.

James Cham (14:29)

It's everything from the mundane.

James Cham (14:31)

We just live in a world of infinite slop and we end up in a place where, I'm afraid, you text me and I'm like, is it really Rebecca?

James Cham (14:41)

Or is it like, I can't really trust this?

James Cham (14:43)

So we go from a relatively high trust environment to a very low trust environment, and that could be just around the corner.

Bob Sutton (14:50)

I think it's already here, actually. Yeah.

James Cham (14:53)

Yeah.

James Cham (14:53)

Well, you feel that, right?

James Cham (14:54)

You feel like that first time where you're like, oh, this is not the person who I thought it was.

James Cham (14:58)

And then that becomes.

James Cham (15:01)

It'll be phone calls and it'll be video, all that.

James Cham (15:04)

And the moment, of course, that we go from a high-trust environment to a low-trust environment, then everything slows down.

James Cham (15:11)

Another terrible version is if we end up, and I don't think this will happen, but if we end up with just one dominant provider of LLMs, and then they integrate everything and they have no incentive to get better, and then suddenly we're all under the rule of whatever and that's a bad outcome.

James Cham (15:30)

I think that's less likely now, in part just because everyone is thinking about this.

James Cham (15:34)

The terrifying thing as an investor, of course, is that the fact that everyone cares means that things become quite expensive, and you get the bubble worries and things like that people worry about.

James Cham (15:46)

But the nice thing is when everyone is focused on a problem, you know that people will find solutions, right?

James Cham (15:52)

And then when they find the solutions and tell other people about the solutions, then we all benefit.

Bob Sutton (15:56)

So I wonder, just hearing this, thinking of your earlier examples and playing out the example where you've got these actually quite smart engineers.

Bob Sutton (16:05)

They're kind of running fleets of LLMs, it seems to me that they're at the point where they're actually managing things they can't understand.

Bob Sutton (16:12)

So do you worry about us getting to the point where we have what used to be a social system, which, by the way, was too complicated to understand as it was? If you're running Walmart, or pick your large company, or Facebook/Meta, you actually can't understand what you're running.

Bob Sutton (16:28)

But you were describing another level of incomprehensibility.

James Cham (16:32)

Right?

James Cham (16:33)

Okay, so I think that you don't need everyone to understand.

James Cham (16:38)

But as long as we have both an information ecosystem and an educational ecosystem where we encourage like some percentage of people to actually care about how things actually work, right.

James Cham (16:49)

Then I feel like we're okay.

James Cham (16:50)

Right.

James Cham (16:50)

You know, sort of as long as, like.

Bob Sutton (16:52)

Well, I'm not just talking about caring.

Bob Sutton (16:53)

I'm talking about, no matter how much you care, you have this thing that actually is impossible.

Bob Sutton (16:57)

So you don't understand how it works, you don't understand what it's doing, you don't understand how to fix it.

James Cham (17:01)

Right.

Bob Sutton (17:01)

That's my paranoid fantasy.

Bob Sutton (17:03)

Maybe that's too paranoid.

James Cham (17:05)

I mean, I think the thing, the version of that.

James Cham (17:07)

Well, look, that happens a lot, right?

James Cham (17:08)

There's lots of cases where we end up with some system.

Bob Sutton (17:11)

Yeah, it's not the problem.

James Cham (17:13)

And then we have to roll back or fix something.

James Cham (17:15)

And then I think the right fear, the variation of what you're talking about that is scary, is that everyone is so excited about this that we take five steps too far before we realize, oh, this does not work at all, and then have to desperately try to roll back.

James Cham (17:31)

And so I think that's like a.

James Cham (17:33)

That'll happen, right?

James Cham (17:35)

Yeah, that'll happen.

James Cham (17:36)

It's just a question of how bad.

James Cham (17:37)

Hopefully it won't be that bad.

Bob Sutton (17:38)

Yeah, yeah.

Bob Sutton (17:39)

The question is, how big is the failure?

Bob Sutton (17:41)

That still happens because we have complex systems that fail. Dams.

Bob Sutton (17:44)

Pick what you want.

James Cham (17:45)

Oh, totally, totally.

James Cham (17:47)

And in that way, there's this term that some of the AI folks use, accelerationist, where they're like, oh, they want the thing to go as fast as possible.

James Cham (17:58)

And I kind of want that to happen, but only because I want the problems to happen earlier in local ways.

James Cham (18:05)

We were talking about San Francisco earlier.

James Cham (18:07)

I think of San Francisco as a staging server for the rest of the world.

James Cham (18:11)

We tried these things.

James Cham (18:13)

People see whether.

James Cham (18:13)

And hopefully people see whether they work or not.

James Cham (18:15)

And then the problem is sometimes they don't really.

Bob Sutton (18:18)

No, they really.

James Cham (18:19)

They don't know.

James Cham (18:20)

They can't tell, or they don't pay enough attention.

James Cham (18:21)

But, you know what I mean.

Bob Sutton (18:22)

We were talking about San Francisco.

Bob Sutton (18:24)

The mean is not very interesting.

Bob Sutton (18:25)

It's that the variance is the norm.

James Cham (18:27)

That's right.

James Cham (18:27)

That's right.

James Cham (18:28)

That's right.

James Cham (18:28)

And then, like, in theory, a smarter world would say, we won't do all the dumb things that people in San Francisco do or the things that obviously don't work, and we'll do the things that do work.

James Cham (18:36)

Right.

James Cham (18:36)

But then, you know, sometimes the things that don't work are easier to capture in a tweet.

James Cham (18:42)

Right.

Bob Sutton (18:44)

It's also the general argument thing that bad is stronger than good.

Bob Sutton (18:46)

People remember it.

Bob Sutton (18:47)

It spreads quickly.

Bob Sutton (18:48)

And that's a problem with AI too, is it's really easy to focus on the dark part and how terrible it's going to be, because just we as human beings, just those of us who survive to breed are those of us who focus on the bad stuff.

James Cham (19:02)

Survived.

James Cham (19:03)

Right.

James Cham (19:03)

Yeah, that's fair.

James Cham (19:04)

That's fair.

James Cham (19:05)

Okay.

James Cham (19:05)

So there's this: I have some friends at NBER who do this conference on AI, which I watched in part because I was very interested in the microeconomics of AI.

James Cham (19:18)

And so then they did one.

James Cham (19:20)

They do one every year, but they did one seven years ago in which Yann LeCun, who's one of these giants, gave a talk.

James Cham (19:26)

And then they ask Danny Kahneman. They say, Danny Kahneman, what do you think?

James Cham (19:29)

And Danny Kahneman, who's.

James Cham (19:31)

Honestly, he's like, I don't know what you guys are talking about.

Bob Sutton (19:33)

And.

James Cham (19:33)

And he says a few things.

James Cham (19:34)

And then this is seven years ago, and he says these things, and they're brilliant and shocking, and they are even clearer today, so you should totally dig it up.

James Cham (19:43)

But one of the things he says is, look, wisdom is about breadth.

James Cham (19:51)

And then.

James Cham (19:52)

So there's a way in which if we design the models well, the models will consider more factors, and then that way the models will be wiser than we are, if only we let them.

James Cham (20:01)

And the other thing he said, which just haunts me, and I think he's right, unfortunately, is that the models will offer more emotional sustenance than we will.

James Cham (20:15)

That'll just be easier.

Bob Sutton (20:16)

Danny was a smart guy.

James Cham (20:18)

He was right.

James Cham (20:19)

But he said it seven years ago.

James Cham (20:20)

I read it.

James Cham (20:21)

I was like, that's ridiculous.

James Cham (20:22)

That can't be true.

James Cham (20:23)

And now, of course, I think that's just a world we're right next to, where it's just more reliably nice to me.

James Cham (20:30)

Right.

Bob Sutton (20:32)

Danny's last book, actually, with a couple other folks, which really applies to the times too, was on Noise, not bias.

Bob Sutton (20:40)

And I think when I think of AI, Rebecca and I just spent a few months dabbling and trying to figure out what the hell's going on.

Bob Sutton (20:46)

Noise seems to be the headline, which is there's so much variation, there's so much inconsistent beliefs, there's so many successes, there's so many failures, and people tell such varied narratives.

Bob Sutton (20:58)

It's just like walking into a random scatter of ideas, which is how he defined noise.

James Cham (21:03)

Right.

Bob Sutton (21:04)

And so even then, I mean, Danny was so smart.

James Cham (21:07)

Yeah, it was amazing.

Bob Sutton (21:09)

Anyway, Danny was just amazing.

Rebecca Hinds (21:11)

So as we think about noise and change, we've talked about a lot of ways that AI is likely to change aspects of our work.

Rebecca Hinds (21:18)

What's one aspect of our work or how we work that you don't think will change or shouldn't change?

James Cham (21:24)

Okay.

James Cham (21:25)

For better or worse, there will probably continue to be commercial relationships.

James Cham (21:28)

Right?

James Cham (21:29)

People will probably still get paid for something good.

James Cham (21:31)

They will probably still have money as an organizing principle.

James Cham (21:37)

So that'll probably be true.

James Cham (21:38)

I mean, another version is the LLMs get so good at dealing with lots and lots of variables that we end up with some weird barter system.

James Cham (21:45)

That's possible.

Bob Sutton (21:46)

Yeah, that's possible.

James Cham (21:47)

Unlikely.

James Cham (21:48)

So that will remain true.

James Cham (21:50)

There's a good question around whether human systems are dynamic and to what extent humans will continue to shift what we find valuable.

James Cham (22:02)

Right.

James Cham (22:03)

And it's possible that I will value the parasocial relationship more than the social one.

James Cham (22:11)

Right.

James Cham (22:12)

And so that's possible.

James Cham (22:13)

So maybe I was going to make an argument that people still value some human things, but that might not be true.

James Cham (22:19)

Okay.

James Cham (22:19)

So maybe the only thing I'm sure about is that there'll be commercial relationships.

James Cham (22:23)

Right.

James Cham (22:23)

That people will still continue to have, like, a single number.

Bob Sutton (22:26)

There'll be money, or some variation.

James Cham (22:29)

That's right.

Bob Sutton (22:31)

Cryptocurrency or something.

James Cham (22:32)

Right.

Rebecca Hinds (22:33)

What a great conversation.

Rebecca Hinds (22:34)

If you thought so too, subscribe so you never miss an episode.

Rebecca Hinds (22:38)

Thanks for spending your time with us and we'll see you next time on Intelligence Real and Imagined.

Dr. Rebecca Hinds
Head of the Work AI Institute, Glean
Dr. Bob Sutton
Professor Emeritus at Stanford University