Sandra Matz: The Personal Data Privacy Crisis

AI transcript
0:00:02 (upbeat music)
0:00:10 – Hi, I’m Guy Kawasaki.
0:00:12 This is the Remarkable People Podcast.
0:00:14 And as you well know,
0:00:16 we’re on a mission to make you remarkable.
0:00:20 And the way we do that is we bring you remarkable guests
0:00:23 who explain why they’re remarkable
0:00:25 and how they’re remarkable and their remarkable work.
0:00:28 And today’s special guest is Sandra Matz.
0:00:31 She’s a professor at the Columbia Business School
0:00:35 and she’s gonna talk to us about psychological targeting
0:00:37 and changing mindset.
0:00:39 Congratulations.
0:00:43 Shipping a book is a big, big accomplishment.
0:00:45 Trust me, I know this firsthand, so.
0:00:48 – I know, it feels good when it’s out.
0:00:50 Even though I had a great time writing it.
0:00:52 So I think I probably enjoyed it a lot more
0:00:55 than what I was told by other authors.
0:00:57 So I already enjoyed the process.
0:00:59 – I have written 17 books
0:01:02 and I have told people 16 times
0:01:04 that I am not writing another book.
0:01:06 – Good luck with that one.
0:01:07 – Yeah, exactly.
0:01:08 – I’m just waiting for it.
0:01:11 So when would you be placing the order for number 18,
0:01:12 if that’s the case?
0:01:19 – Alrighty, so first of all, if you don’t mind,
0:01:21 let me just tell you something kind of off the wall
0:01:25 that your story about how you met your husband
0:01:27 at that speaking event,
0:01:29 that was the closest thing to porn
0:01:32 in a business book that I have ever read.
0:01:35 – And I spared you the details.
0:01:37 There’s actually a lot more to the story.
0:01:39 It’s a good one.
0:01:40 – I was reading that.
0:01:44 I said like, man, where is this going?
0:01:47 Like, is she gonna have this great lesson about how to,
0:01:50 you know, tell men to stick it and get out of my face?
0:01:52 And then I keep reading and it says,
0:01:56 oh, and the night went very, very well.
0:01:57 – What?
0:02:02 – It’s such a fun anecdote in my life,
0:02:02 how I met him.
0:02:05 So it was just like a conference.
0:02:07 He was late, for the people who have read the book,
0:02:08 and I was like, what a jerk.
0:02:10 And I kind of had written him off.
0:02:12 And then as the night progressed
0:02:15 and I learned more about him by spying on him,
0:02:18 in a way, in his place,
0:02:20 I was like, interesting guy.
0:02:22 I think I’m gonna give him a second chance.
0:02:23 And we’re married.
0:02:25 We have a kid now who’s one year old.
0:02:27 So it all worked out.
0:02:30 – And is he still meticulously neat?
0:02:32 Or, you know, was that just a demo?
0:02:34 And this is the real thing now.
0:02:35 – No, no, no.
0:02:36 So yeah, as part of the story,
0:02:39 it’s like, one of the first things I learned about him
0:02:41 is that I think he’s borderline OCD
0:02:43 ’cause he just sorts everything.
0:02:47 It’s like the person who sorts his socks by color.
0:02:48 We just moved apartments,
0:02:50 which is with a one year old,
0:02:52 not the most fun thing to do in the world.
0:02:54 And there were like boxes everywhere.
0:02:56 You could barely walk around the apartment
0:02:58 and I just opened one of the drawers
0:03:01 and he had arranged the cutlery, like, to perfection.
0:03:05 I’m like, there’s a hundred thousand boxes in this place.
0:03:08 I can barely find anything for the baby,
0:03:11 but I’m really glad that you spent at least an hour
0:03:14 perfecting the organization of the cutlery.
0:03:16 So it’s, he’s still like that.
0:03:18 – I hope your new place has a dishwasher
0:03:22 so he can load the utensils in the tray.
0:03:23 – Tell me about it.
0:03:24 That’s exactly what happens.
0:03:27 Yeah, I’m not allowed to touch the dishwasher anymore
0:03:28 ’cause I don’t do it perfectly.
0:03:30 So you’re spot on, yeah.
0:03:33 (laughing)
0:03:35 – So you listeners out there, basically,
0:03:39 we have an expert in psychological targeting
0:03:43 and now she’s explaining how she had absolutely no targeting
0:03:46 in meeting her future husband, right?
0:03:48 – I think I nailed it from the beginning,
0:03:51 and at his place, I looked at his stuff,
0:03:54 him being put together, and it gave me a pretty good
0:03:56 understanding, I think, of who he was.
0:03:58 I feel like I know what I signed up for.
0:04:02 – Okay, so this is proof that her theories work.
0:04:06 So I’ve already, you know, said this word,
0:04:08 psychological targeting twice.
0:04:10 So I would really like,
0:04:13 this is an easy question to start you off,
0:04:16 now that we got past the porn part of this podcast,
0:04:21 which is from a psychological targeting perspective.
0:04:25 What’s your analysis of the 2024 election?
0:04:26 – It’s a, I mean, interesting one.
0:04:30 So psychological targeting typically looks at individuals.
0:04:32 So it’s trying to see what can we learn
0:04:34 about the psychological makeup of people,
0:04:36 not by asking them questions,
0:04:38 but really observing what they do, right?
0:04:40 You can imagine in the analog world,
0:04:43 I might look at how someone treats other people,
0:04:46 whether they’re organized as my husband is.
0:04:48 And I think you can learn a lot by making these observations.
0:04:49 That’s true in the offline world.
0:04:51 That’s also true in the online world.
0:04:54 And I think if you just look at the
0:04:57 presidential candidates, the way that they talk,
0:05:00 if Trump writes in all caps all the time
0:05:04 and doesn’t necessarily give it a second thought
0:05:06 before something comes out on Twitter,
0:05:08 I think that is an interesting glimpse
0:05:11 into what might be going on behind the scenes.
0:05:14 – And do you think that his campaign
0:05:18 did like really great psychological targeting
0:05:20 of the undecided in the middle?
0:05:24 Or, you know, like from an academic perspective,
0:05:29 as a case study, how would you say his campaign was run?
0:05:31 – I talk a lot about psychological targeting
0:05:33 ’cause for me, it’s interesting to understand
0:05:35 how the data translates into something
0:05:37 that we can make sense of as humans, right?
0:05:41 So if I get access to all of your social media data,
0:05:43 an algorithm might be very good at understanding
0:05:45 your preferences and motivations
0:05:46 and then play into these preferences.
0:05:48 But I, as a user, as a human,
0:05:51 can’t really make sense of a million data points
0:05:52 at the same time.
0:05:54 If I translate it into something
0:05:55 that tells me whether you’re more impulsive
0:05:58 or more neurotic or more open-minded,
0:06:00 that just kind of goes a long way in saying,
0:06:03 okay, now I know which topics you might be interested in
0:06:04 in the context of an election
0:06:08 or how I might talk to you in a way that most resonates.
0:06:11 Now, politics is an interesting case, right?
0:06:15 ‘Cause ideally a politician would go knock on every door,
0:06:16 have a conversation with you
0:06:18 about the stuff that you care about,
0:06:19 and obviously they don’t have the time.
0:06:23 So there’s, I think, a lot of potential
0:06:26 of using some of these tools to make politics better,
0:06:29 but obviously, I think the way that some of these tools
0:06:32 were introduced in the context of the 2016 election,
0:06:35 which really shows that the more dark side,
0:06:37 and I don’t know if they’re using
0:06:39 any of these tools on the campaign trail.
0:06:41 I think there are many ways in which you can use data
0:06:46 to drive engagement that’s not necessarily based
0:06:49 on predictions of your psychology at the individual level,
0:06:52 but certainly this idea that the more we know about people
0:06:54 and their motivations, their preferences,
0:06:57 dreams, fears, hopes, aspirations, you name it,
0:07:00 the easier it is for us to manipulate them.
0:07:03 – Well, in politics, as well as marketing,
0:07:06 which you bring up in your book,
0:07:09 I kind of got the feeling that what you’re saying is that,
0:07:13 you know, you psychologically target people
0:07:18 with different messages, but you could have the same product.
0:07:21 So in a sense, you’re saying that, you know,
0:07:22 yes, with the same product,
0:07:26 whether it’s Donald Trump or an iPhone or, you know,
0:07:30 whatever, a Prius, you can change your messaging
0:07:34 to make diverse people buy the product.
0:07:36 So did I get that right?
0:07:39 Or am I like imagining something
0:07:42 that’s kind of nefarious, actually?
0:07:44 – I think it depends on how you think about this, right?
0:07:48 ‘Cause the fact that we talk to people
0:07:49 in different ways all the time.
0:07:52 So imagine a kid who wants the same thing.
0:07:54 The kid wants candy.
0:07:58 The kid knows exactly that they should talk to their mom
0:08:01 in one way and that they should talk to their dad
0:08:02 in a different way, right?
0:08:03 So the goal is exactly the same.
0:08:05 The goal is to get the candy,
0:08:08 but we’re so good as humans,
0:08:10 making sense of who’s on the other side,
0:08:12 understanding what makes them tick,
0:08:14 how do I best persuade them to buy something?
0:08:17 And the same is true, I think, in politics and marketing.
0:08:19 The more that we understand where someone is coming from
0:08:21 and where they want to be in the end,
0:08:23 the easier it is for us to sell a product, right?
0:08:28 So products have the benefit that it’s not just what you buy,
0:08:28 right?
0:08:30 A lot of the times we buy products
0:08:32 because they have like this meaning to us.
0:08:33 They help us express ourselves.
0:08:35 They serve a certain purpose.
0:08:38 And if we can figure out what’s the purpose of a camera
0:08:40 for a certain person, what’s the purpose of the iPhone
0:08:43 for a certain person, why do people care about immigration,
0:08:46 why do people care about climate change?
0:08:48 Is it because they’re concerned about their kids?
0:08:51 Is it because they’re concerned about their property?
0:08:54 Then I think we just have a much easier way
0:08:55 of tapping into some of these needs.
0:08:57 And whether that’s offline,
0:09:00 when we, again, talk to our three-year-old,
0:09:02 not in the same way that we talk to our boss and our spouse,
0:09:05 or whether that’s a marketer doing that at scale,
0:09:07 it’s really the more you understand about someone,
0:09:11 the more power you have over their behavior.
0:09:14 – So are you saying that at an extreme,
0:09:16 you could say to like a Republican person,
0:09:20 you know, the reason why we have to control the border
0:09:22 is because of physical security,
0:09:25 where to a liberal, you might say, you know,
0:09:27 there’s a different message,
0:09:32 but in both cases, you want to secure the border,
0:09:37 one for maybe job displacement, another for security.
0:09:38 I mean, it would be different,
0:09:41 but the same product in a sense.
0:09:42 – Yeah, or the same, yeah.
0:09:44 So a hundred percent, there’s all of this research,
0:09:46 and this actually is not my own.
0:09:48 It’s very similar to psychological targeting.
0:09:52 And in that space, it’s usually called moral reframing
0:09:53 or moral framing.
0:09:56 So the idea that once I understand
0:09:58 your set of moral values, right?
0:10:00 So there’s a framework that kind of describes
0:10:02 these five moral values.
0:10:04 The way that we think about what’s right or wrong
0:10:06 in the world, that’s how I think about it myself.
0:10:10 And those are loyalty, fairness, care, purity,
0:10:11 and authority.
0:10:14 And what we know is that across the political spectrum,
0:10:15 so from liberal to conservative,
0:10:18 people place different emphasis on some of these values.
0:10:20 So if you take a liberal,
0:10:22 typically they care about care and fairness.
0:10:24 So if you make an argument about immigration, again,
0:10:27 or about why climate change does matter,
0:10:29 that’s tapping into these values,
0:10:32 you’re more likely to convince someone who’s liberal.
0:10:34 Now, if you take something like loyalty,
0:10:35 authority, or purity, you’re more likely
0:10:37 to convince someone who’s more conservative.
0:10:39 And for me, the interesting part is that,
0:10:42 as humans, we’re so stuck with our own perspective, right?
0:10:46 If I as a liberal try to convince a conservative
0:10:48 that immigration might be a good thing,
0:10:51 I typically make that argument from my own perspective.
0:10:55 So I might be very much focused on fairness and care,
0:10:56 and it’s just not resonating with the other side,
0:10:58 ’cause it’s not where they’re coming from.
0:11:00 And algorithms, because they don’t have an incentive,
0:11:03 they don’t necessarily have their own perspective
0:11:05 on the world that’s driven by ideology.
0:11:07 It’s oftentimes much easier for them to say,
0:11:11 I try and figure out what makes you care about the world,
0:11:13 what makes you think about what’s right or wrong in the world.
0:11:16 And now I’m gonna craft that argument along those lines.
0:11:19 And what’s interesting for me is that,
0:11:20 depending on how you construe it,
0:11:22 it can either be seen as manipulation.
0:11:24 So I’m trying to convince you of something
0:11:25 that you might not otherwise believe,
0:11:27 but it could also be construed as,
0:11:29 I’m really trying to understand
0:11:30 how you think about the world.
0:11:32 But I’m really trying to understand and engage with you
0:11:35 in a way that doesn’t necessarily come from my point of view,
0:11:37 but is trying to take your point of view.
0:11:41 So it really has, for me, these two sides.
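To make the moral-reframing idea above concrete, here is a minimal Python sketch that picks a message frame based on which moral foundations a person weights most heavily. The foundation scores, the messages, and the pick_frame helper are illustrative assumptions, not anything the guest describes actually using.

```python
# Illustrative sketch of moral reframing: choose the argument frame that
# matches the moral foundations a person emphasizes most.
# All profiles and messages below are made up for demonstration.

FRAMES = {
    "care":      "Climate action protects children from harm.",
    "fairness":  "Everyone deserves an equal shot at clean air and water.",
    "loyalty":   "Protecting our land is protecting our community.",
    "authority": "Respected leaders and institutions back this plan.",
    "purity":    "Keep our environment clean and unspoiled.",
}

def pick_frame(profile: dict) -> str:
    """Return the message keyed to the person's highest-weighted foundation."""
    top_foundation = max(profile, key=profile.get)
    return FRAMES[top_foundation]

# A hypothetical liberal-leaning profile (weight on care/fairness)...
liberal = {"care": 0.9, "fairness": 0.8, "loyalty": 0.3, "authority": 0.2, "purity": 0.2}
# ...and a hypothetical conservative-leaning one (loyalty/authority/purity).
conservative = {"care": 0.4, "fairness": 0.4, "loyalty": 0.8, "authority": 0.7, "purity": 0.9}

print(pick_frame(liberal))        # care-framed message
print(pick_frame(conservative))   # purity-framed message
```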
0:11:46 – So I could say to a Republican, the reason why
0:11:51 you wanna support the H-1B visa program
0:11:53 is because those immigrants have a history
0:11:55 of creating large companies
0:11:58 which will create more jobs for all of us,
0:12:01 which is a very different pitch.
0:12:03 – Yeah, and so in addition to the fact
0:12:05 that we can just tap into people’s psychology,
0:12:07 there’s also this research that I love.
0:12:08 I think it’s mostly done,
0:12:10 I think in the context of climate change,
0:12:12 but it’s looking at what do people think
0:12:14 the solutions to problems are,
0:12:17 and how does that relate to what they believe in anyway?
0:12:20 If I tell you, well, solving climate change
0:12:23 means reducing government influence,
0:12:25 it means reducing taxes.
0:12:26 Then suddenly Republicans are like,
0:12:28 “Oh my God, climate change is a big problem
0:12:30 “because the solutions are very much aligned
0:12:32 “with what I believe in anyway.”
0:12:33 If you tell that to Democrats,
0:12:35 they’re like, “Actually, it’s not such a big deal
0:12:37 “’cause I don’t really believe in the solution.”
0:12:41 So the way I think that we play with people’s psychology
0:12:43 and how they think about the world
0:12:45 and show up in the world just means
0:12:47 that oftentimes it gives us a lot of power
0:12:50 over how they think, feel, and behave.
0:12:54 – Another point that I hope I interpreted correctly
0:12:57 is like, you know, I’ve been trained so long
0:12:58 to understand the difference
0:13:01 between correlation and causation, right?
0:13:05 So like, if you wear a black mock turtleneck,
0:13:06 so did Steve Jobs.
0:13:08 So you should wear a black mock turtleneck
0:13:10 because you’ll be the next Steve Jobs.
0:13:13 Well, didn’t quite work out that way for Elizabeth Holmes,
0:13:16 but I think you take a different direction.
0:13:17 I just want to verify this.
0:13:21 So you don’t really discuss correlation versus causation.
0:13:24 In a sense, what you’re saying is that
0:13:29 there doesn’t need to be a causative relationship
0:13:34 if there is a predictive relationship that you can harness.
0:13:37 So I don’t know, if for some reason people,
0:13:40 we noticed a lot of people with iPhones buy German cars,
0:13:41 well, that’s predictive.
0:13:44 I don’t have to understand why that’s true.
0:13:45 – Yeah, no, totally.
0:13:48 And I’ll give you an example that I think is interesting.
0:13:52 So one of the relationships that I still find fascinating
0:13:54 that we observe in the data that I don’t think
0:13:57 I would have intuited even as a psychologist
0:13:59 is the use of first person pronouns,
0:14:01 like people post on social media
0:14:02 about what’s going on in their life.
0:14:05 And I remember being at this conference,
0:14:07 it’s like a room full of psychologists
0:14:09 and this guy who was really like a leading figure,
0:14:12 Jamie Pennebaker, in the space of natural language processing,
0:14:14 he comes up and he just asks the audience,
0:14:17 what do you think about the use of first person pronouns?
0:14:21 So just using I, me, myself more often than other people,
0:14:23 what do you think this is related to?
0:14:25 And I remember all of us sitting at the table
0:14:27 and we’re like, oh, it’s gotta be narcissism.
0:14:30 If someone talks about themself constantly,
0:14:33 that’s probably a sign that someone is a bit more narcissistic
0:14:35 and self-centered than other people.
0:14:38 Turns out that it’s actually a sign of emotional distress.
0:14:40 So if you talk a lot about yourself,
0:14:43 that makes it more likely that you suffer
0:14:45 from something like depression, for example.
0:14:49 And now taking a step back, it actually makes sense, right?
0:14:50 If you think back to the last time
0:14:53 that you felt blue or sad or down,
0:14:54 you probably were not thinking about
0:14:56 how to fix the Southern border
0:14:58 or how to solve climate change.
0:14:59 What you were thinking about is like,
0:15:00 why am I feeling so bad?
0:15:02 Am I ever gonna get better?
0:15:03 What can I do to get better?
0:15:05 And this inner monologue that we have with ourselves
0:15:07 just creeps into the language that we use
0:15:10 as we express ourselves on these social platforms.
0:15:14 Now, the causal link is not entirely clear, right?
0:15:16 It could be that I’m just using
0:15:18 a lot more first person pronouns
0:15:19 because I have this inner monologue.
0:15:21 Another thing you see in the language of people
0:15:23 who are suffering from emotional distress
0:15:25 is all of these physical symptoms.
0:15:28 So just being sick, having body aches.
0:15:30 And again, it’s not entirely clear
0:15:32 if maybe you’re having a hard time mentally
0:15:33 because you’re physically sick,
0:15:35 but also maybe you’re physically sick
0:15:37 ’cause you’re having like a hard time
0:15:39 with the problems that you’re dealing with.
0:15:43 So on some level, I don’t even care that much, right?
0:15:45 If I’m just trying to understand and say,
0:15:47 is there someone who might be suffering
0:15:49 from something like depression
0:15:50 who’s currently having a hard time
0:15:52 regulating their emotions?
0:15:54 I don’t necessarily care if it’s going from
0:15:57 physical symptoms to mental health problems
0:15:58 or the other way.
0:16:01 What I care about is if I see these words popping up
0:16:03 or if I see some of these topics popping up,
0:16:06 that’s an increase in the likelihood
0:16:08 that someone is actually having a hard time right now.
0:16:11 Now, I think what is interesting is that
0:16:13 the more causal these explanations get
0:16:14 and these relationships get,
0:16:17 oftentimes they’re a lot more stable.
0:16:20 So it could be that if it’s like a causal mechanism,
0:16:22 and first of all, it allows us to understand
0:16:23 something about interventions,
0:16:26 like how do we actually then help people get better?
0:16:30 And they’re also oftentimes the ones that last for longer
0:16:32 because it’s not just some fluke in the data
0:16:34 that maybe goes this direction or the other,
0:16:36 but it’s something that is really driving it
0:16:37 on a more fundamental level.
0:16:39 So you’re absolutely right in that,
0:16:41 oftentimes when we think of prediction,
0:16:44 we don’t need to understand which direction it goes in.
0:16:49 It’s still helpful to know if you think of interventions.
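As a rough illustration of the pronoun signal discussed above, here is a small Python sketch that computes a first-person-singular pronoun rate from a piece of text. The word list is an illustrative assumption, and, as stressed in the conversation, this kind of feature is a noisy correlate at the individual level, not a diagnostic.

```python
import re

# Illustrative only: first-person-singular pronoun rate per 100 words.
# This is a noisy correlate of emotional distress, not a diagnosis;
# individual-level error is large.
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

def first_person_rate(text: str) -> float:
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(w in FIRST_PERSON for w in words)
    return 100.0 * hits / len(words)

print(first_person_rate("I keep wondering why I feel this way and what I can do."))
```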
0:16:51 – So at a very simplistic level,
0:16:54 could you make the case to a pharmaceutical company?
0:16:57 You know, look at a person’s social media
0:16:59 and if the person is saying “I” a lot,
0:17:04 sell them some lorazepam or some anti-anxiety drugs,
0:17:08 is it that simple?
0:17:10 – I would personally probably not go to the pharma companies
0:17:13 and make that proposition, but it is that simple.
0:17:15 And again, one of the points that I make in the book
0:17:17 that is super important to me
0:17:20 is that those are all predictions with a lot of error, right?
0:17:24 So it means that on average, if you use these words more,
0:17:28 you’re more likely to suffer from emotional distress.
0:17:30 That doesn’t mean that it’s deterministic.
0:17:32 There’s a lot of error at the individual level.
0:17:36 So if I’m a pharma company and I wanna sell these products,
0:17:38 yeah, on average, I might do better
0:17:39 by targeting these people,
0:17:42 but it still means that we’re not always going to get it right.
0:17:44 And then on the other side, what is interesting for me
0:17:46 is if you think about it,
0:17:48 not from the perspective of a pharma company,
0:17:50 but from the perspective of an individual,
0:17:52 I think there’s ways in which we can acknowledge
0:17:54 the fact that it’s not always perfect, right?
0:17:56 You could have this early warning system
0:17:58 for people who know, for example,
0:18:00 that they have a history of mental health conditions
0:18:02 and they know that it’s really difficult
0:18:05 once they’re at this valley of depression to get out.
0:18:07 So they could have something on their phone
0:18:08 that just tracks their GPS record,
0:18:11 sees that they’re not leaving the house as much anymore,
0:18:15 less physical activity, more use of first person pronouns.
0:18:17 And it almost has this early warning system.
0:18:20 It just puts a flag out and says, “It might be nothing.
0:18:22 It’s not a diagnostic tool. There’s a lot of error,
0:18:25 but we see that there’s some deviations from your baseline.
0:18:26 Why don’t you look into this?”
0:18:29 And for me, those are the interesting use cases
0:18:31 where we involve the individual,
0:18:33 acknowledging that there’s mistakes that we make
0:18:35 and the predictions,
0:18:36 but we’re using it to just help them
0:18:39 accomplish some of the goals that they have for themselves.
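A minimal sketch of the kind of personal early-warning check described above: compare this week’s behavior against a person’s own baseline and raise a gentle flag on large deviations. The feature names (outings inferred from GPS, step counts, first-person pronoun rate), the z-score threshold, and all numbers are hypothetical.

```python
from statistics import mean, stdev

# Hypothetical early-warning check: compare this week's behaviour with a
# person's own baseline and raise a gentle flag on large deviations.
# Feature names, thresholds, and data are illustrative, not a product spec.

def z_score(value: float, history: list) -> float:
    sd = stdev(history)
    return 0.0 if sd == 0 else (value - mean(history)) / sd

def weekly_flag(baseline: dict, this_week: dict, threshold: float = 2.0) -> list:
    """Return feature names whose deviation from the personal baseline is large."""
    return [name for name, value in this_week.items()
            if abs(z_score(value, baseline[name])) >= threshold]

baseline = {
    "outings_per_week":  [9, 11, 10, 12, 10, 11],     # e.g. inferred from GPS
    "steps_per_day":     [7000, 8200, 7600, 7900, 8100, 7400],
    "first_person_rate": [3.1, 2.8, 3.0, 3.3, 2.9, 3.2],
}
this_week = {"outings_per_week": 3, "steps_per_day": 2500, "first_person_rate": 6.0}

flags = weekly_flag(baseline, this_week)
if flags:
    print("It might be nothing, but these look different from your baseline:", flags)
```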
0:18:48 So speaking of interesting use cases,
0:18:50 would you do the audience a favor
0:18:53 and explain how you help the hotel chain
0:18:57 optimize their offering? ‘Cause I love that example.
0:19:00 – It was one of the first projects
0:19:01 and industry collaborations that we did
0:19:04 when I was still doing my PhD.
0:19:06 And there’s many reasons for why I actually liked the example.
0:19:09 But the idea was that we were approached by Hilton
0:19:11 and we worked with a PR company.
0:19:13 And the idea of Hilton was,
0:19:16 can we use something like psychological targeting?
0:19:19 So really tapping into people’s psychological motivations,
0:19:21 what makes them tick,
0:19:23 what makes them care about vacations and so on
0:19:25 to make their campaigns more engaging
0:19:28 and then also sell vacations
0:19:30 that really resonated with people.
0:19:33 And what I like about the example is that Hilton didn’t say,
0:19:36 well, we’re just gonna run a campaign on Facebook and Google
0:19:39 where we just passively predict people’s psychology
0:19:42 and then we try to sell them more stuff.
0:19:46 They turned it into this mutual two-way conversation
0:19:47 where they said, hey,
0:19:49 we wanna understand your traveler profile.
0:19:51 And for us to be able to do that,
0:19:53 if you connect with your social media profile,
0:19:55 we can run it through this algorithm
0:19:57 that actually we don’t control.
0:19:59 It’s the University of Cambridge is doing it.
0:20:01 We don’t even get to see the data.
0:20:05 But what we can do is we can spit out this traveler profile
0:20:07 and then make recommendations
0:20:09 that really tap into that profile.
0:20:11 So it was this campaign.
0:20:15 And you can imagine that doing that increased like engagement.
0:20:18 People were excited about sharing it with friends.
0:20:21 It was essentially good for the business bottom line.
0:20:24 But it also gave, I think, users the feeling
0:20:27 that it’s a genuine value proposition.
0:20:30 So there was a company that operated first of all
0:20:32 with consent ’cause it was all,
0:20:35 it’s up to you whether you wanna share the data or not.
0:20:37 Here’s, like, how this works behind the scenes.
0:20:39 Here’s what we give you in return.
0:20:42 So it was very transparent with the entire process.
0:20:44 And it was also transparent in terms of
0:20:46 here’s what we have to offer, right?
0:20:48 It’s by understanding your traveler profile.
0:20:50 We can just make your vacation a lot better.
0:20:53 So that’s one of the reasons why I like this example a lot.
0:20:59 – Now, just as a point of clarification,
0:21:01 you said the University of Cambridge, right?
0:21:02 – Yeah.
0:21:05 – Which has nothing to do with Facebook
0:21:08 and Cambridge associates, right?
0:21:10 – It has nothing to do with Cambridge Analytica at all.
0:21:13 It was funny ’cause I get mixed up with them all the time.
0:21:16 Not surprising ’cause I got my PhD there on the same topic.
0:21:20 And there was like, I mean, the idea originated there, right?
0:21:23 The idea that we could take someone’s social media profile,
0:21:25 predict things about their psychology,
0:21:29 originated at Cambridge and that’s where it was taken from.
0:21:31 But we were involved and for me,
0:21:33 it’s almost like a point of pride
0:21:36 and like a point that made me think about the ethics a lot
0:21:39 is we helped the journalists break the story.
0:21:41 So when the journalists in, first in Switzerland,
0:21:43 were working on trying to see what happened
0:21:45 behind the scenes of Cambridge Analytica,
0:21:47 we just helped them understand the science.
0:21:49 How can you get all of the data?
0:21:51 How do you translate it into profile?
0:21:55 So yeah, not related to Cambridge Analytica in any way,
0:21:57 other than trying to take them down.
0:21:59 – Okay, so I misspoke.
0:22:02 I said Cambridge Associates, not Analytica.
0:22:04 So if you work for Cambridge Associates,
0:22:07 if there’s such a thing out there, I correct myself.
0:22:08 (laughing)
0:22:09 – I’m not sure.
0:22:13 – So listen, in the United States,
0:22:15 this is a very broad question,
0:22:19 but in the United States, who owns my data?
0:22:21 Me or the companies?
0:22:23 – Well, as you might have imagined,
0:22:24 it’s typically not you.
0:22:27 So the US is an interesting case
0:22:29 ’cause it very much depends on the state that you live in.
0:22:32 So Europe, I would say has the strictest
0:22:34 data protection regulations.
0:22:37 So they very much try to operate on these principles
0:22:39 of transparency and control
0:22:42 and giving you at least the ability to request your own data
0:22:44 to delete it and so on and so forth.
0:22:46 In the US, California is the closest.
0:22:48 So California’s CCPA,
0:22:51 which is the Consumer Privacy Act,
0:22:53 I can’t remember the exact name,
0:22:57 but this is like very close to the European Union principles
0:23:00 where you, as a producer of data,
0:23:03 even though companies also can hold a copy,
0:23:04 at least get to request your own data.
0:23:05 In most parts of the US,
0:23:07 the data that you generate,
0:23:09 you don’t even have a shot at getting it
0:23:10 because it sits with the companies
0:23:12 and you don’t even have the right to request it.
0:23:16 So I think we’re a very long way from this idea
0:23:18 that you’re not just the owner of the data,
0:23:23 but you can also limit who else has access to it.
0:23:25 – So I live in California.
0:23:28 So you’re telling me there’s a way that I could go to Meta
0:23:31 or Apple or Google and say, I want my data
0:23:33 and I don’t want you selling it.
0:23:34 – That’s a great question.
0:23:36 So what you can do is you can request a copy of your data.
0:23:37 That’s one thing.
0:23:39 In many states, you can’t even do that.
0:23:42 You might generate a lot of medical data,
0:23:43 social media data and you,
0:23:45 even though you generated it,
0:23:47 you can’t even request a copy.
0:23:50 Now what you can do is you can go to Meta request a copy
0:23:53 and you can also request it to be deleted
0:23:55 or to be transferred somewhere else.
0:23:58 Now it’s still really hard to say,
0:24:00 I want to use the service and product without giving up my data.
0:24:02 And this is one of the things that I think makes it really
0:24:05 challenging for people to manage their data properly.
0:24:07 Because it’s a binary choice,
0:24:09 you can say, yeah, I want you to delete my data
0:24:11 and I’m not going to use the service anymore.
0:24:12 But then you also can’t be part of Facebook.
0:24:14 And yes, there are certain permissions
0:24:15 that you can play with.
0:24:16 What is public?
0:24:17 What is not public?
0:24:19 You can even play around with here’s some of the traces
0:24:22 that I don’t want you to use in marketing.
0:24:24 But typically, and this is true for,
0:24:26 I think still Meta and other companies,
0:24:28 it’s usually a binary choice.
0:24:30 Either you use our product with most of your data
0:24:33 being tracked and most of your data being commercialized
0:24:36 in a way that you might not always benefit from.
0:24:38 But you get to use the product for free
0:24:39 or you don’t use it at all.
0:24:41 And I think that’s the dichotomy
0:24:43 that’s really hard for the brain to deal with.
0:24:45 ‘Cause if the trade-off that we have to make as humans
0:24:48 is service, convenience,
0:24:51 the ability to connect with other people in an easy way,
0:24:53 that’s what we’re going to choose over privacy
0:24:55 and maybe a risk of data breaches in the future
0:24:58 and maybe a risk of us not being able
0:24:59 to make our own choices.
0:25:01 So I think there are now ways
0:25:03 in which you can somehow eliminate that trade-off.
0:25:06 ‘Cause I think if that’s what we’re dealing with,
0:25:08 it’s an uphill battle.
0:25:11 – I need to go dark for a little bit here.
0:25:14 I read in your book about the example of Nazis.
0:25:18 And I just want to know like today,
0:25:22 could the Nazis go to Facebook, Apple and Google
0:25:25 and get enough information from the breadcrumbs
0:25:29 that we leave to track down where all the Jewish people are?
0:25:31 Would that be easy today?
0:25:33 – I think it would be incredibly easy.
0:25:35 And it’s one of these examples in the book
0:25:37 that I think is hard to process
0:25:39 and that’s why it’s so powerful.
0:25:41 I teach this class on the ethics of data
0:25:43 and there’s always a couple of people who say,
0:25:44 “Well, I don’t care about my privacy
0:25:47 ’cause I have nothing to hide and the perks that I get,
0:25:50 they’re so great that I’m willing to give up my privacy.”
0:25:53 And what I’m trying to say is that it’s a risky gamble.
0:25:55 But first of all, it’s a very privileged position
0:25:57 ’cause just because you don’t have to worry
0:25:58 about your data being out there,
0:26:01 doesn’t mean that that doesn’t apply to other people.
0:26:02 So I think in the US,
0:26:05 even the Supreme Court decision overturning Roe versus Wade,
0:26:08 meddling with abortion rights,
0:26:12 I think overnight essentially made women across the US
0:26:14 realize, “Hey, my data being out there
0:26:15 in terms of the Google searches that I make,
0:26:18 my GPS records showing where I go,
0:26:20 me using some period tracking apps,
0:26:21 it’s incredibly intimate.”
0:26:24 And it could overnight be used totally against me.
0:26:28 So the example that you mentioned about Nazi Germany
0:26:30 is such a powerful one
0:26:33 ’cause it shows that leadership can change overnight.
0:26:34 And I care so much about it
0:26:36 ’cause I obviously grew up in Germany.
0:26:38 So it was a democracy in 1938.
0:26:40 And then the next year it wasn’t.
0:26:42 And what we know is that atrocities
0:26:44 within the Jewish community across Europe
0:26:47 totally dependent on whether religious affiliation
0:26:48 was part of the census.
0:26:50 So you can imagine if you have a country
0:26:52 where whether you’re Jewish or not
0:26:54 is written in the census,
0:26:57 all that Nazi Germany had to do was go to city hall
0:26:58 and get hold of that census data
0:27:00 to find the members of the Jewish community.
0:27:03 It made it incredibly easy to track them down.
0:27:06 But of course you don’t even need that census data anymore
0:27:09 ’cause you can now have all of this data that’s out there
0:27:10 that allows us to make these predictions
0:27:13 about anything from political ideology,
0:27:15 sexual orientation, religious affiliation,
0:27:17 just based on what you talk about on Facebook.
0:27:20 And even you could make the argument
0:27:22 that maybe it’s the leaders of those companies
0:27:25 handing over the data voluntarily.
0:27:28 And I think we’ve even seen in the last couple of days
0:27:31 how there are, like, these political shifts in leadership
0:27:33 when it comes to the big tech companies.
0:27:35 But even if they weren’t playing the game,
0:27:38 it would have been easy for a government to just replace
0:27:41 that C-suite executives with new ones
0:27:43 that are probably much more tolerant to
0:27:44 some of the requests that they have.
0:27:46 And I think it’s terrifying.
0:27:47 And I think it’s a good example
0:27:50 for why we should care about personal data.
0:27:55 – Okay, so what you’re saying is,
0:27:57 if I look at pictures of the inauguration
0:28:02 and I see Apple, Google, Meta, Amazon up on stage.
0:28:07 And so now the government can say,
0:28:10 you know, according to Apple,
0:28:13 you were in Austin and then you landed at SFO.
0:28:16 And then according to your visa statement,
0:28:17 you know, you purchased this.
0:28:19 And according to your phone’s GPS,
0:28:21 you went to a Planned Parenthood
0:28:23 in San Francisco, California.
0:28:26 So we suspect you of going out of state
0:28:27 to get an abortion.
0:28:30 So we’re opening up an investigation of you.
0:28:33 That’s all easy today.
0:28:34 – I think it’s very easy.
0:28:37 And again, I’m not saying that the leaders
0:28:40 of those big tech companies are sharing the data right now,
0:28:41 but it’s certainly possible.
0:28:44 And for me, there’s like this thing that I have in the book
0:28:46 is that data is permanent and leadership is not.
0:28:48 Right, so once your data is out there,
0:28:50 it’s almost impossible to get it back.
0:28:52 And you don’t know what’s gonna happen tomorrow.
0:28:56 Even if Zuckerberg is not willing to share the data,
0:28:59 there could be a completely new CEO tomorrow
0:29:01 who might be a lot more willing to do that.
0:29:05 So I think that the notion that we don’t have to worry
0:29:07 in the here and now about our data being out there
0:29:10 is just a very short-sighted notion.
0:29:12 And ideally we can find a system.
0:29:15 And I think there are ways now in which we can get
0:29:17 some of these perks and some of the benefits
0:29:20 and they come from using data without us necessarily having
0:29:23 to collect the data in a central server.
0:29:24 – Okay, so if I’m listening to this
0:29:27 and I’m scared stiff because, you know,
0:29:29 yes, you could look at what I do.
0:29:31 You could look at, I went to the synagogue
0:29:34 or I went to the, you know, temple or whatever.
0:29:36 So yeah, and you’re right.
0:29:39 Any of those people could replace and who knows.
0:29:41 So then what do I do?
0:29:46 – I do think that people should be to some extent scared.
0:29:48 So I’m really trying to not say that with technology
0:29:52 it’s all bad, like we’re all doomed because the data is out
0:29:53 there and technology can only be used against us.
0:29:55 But I think there’s like many good use cases,
0:29:58 but I do think we should be changing the system
0:30:00 in a way that protects us from these abuses.
0:30:03 And the one thing that I describe in the book,
0:30:06 which I think we’re actually seeing a lot more of,
0:30:08 but just not that many people know of,
0:30:11 are these technologies that allow us to benefit from data
0:30:13 without necessarily running the risk
0:30:15 of a company collecting it centrally.
0:30:17 So what I mean is, and there’s a technology
0:30:19 that’s called federated learning.
0:30:22 And you can imagine the example that I give
0:30:24 is take medical data.
0:30:27 So if we wanna better understand disease
0:30:30 and we wanna find treatments that work for all of us,
0:30:32 not just the majority of people who usually
0:30:34 the pharma companies collect data of,
0:30:37 but like we wanna know, given my medical history,
0:30:39 given my genetic data, here’s what I should be doing
0:30:42 to make sure that I don’t get sick in the first place
0:30:44 or I can treat a disease that’s either rare
0:30:46 or not as easily understood,
0:30:49 we would all benefit from pooling data
0:30:51 and better understanding disease.
0:30:52 Now there’s a way in which you can say,
0:30:56 instead of me sending all of this data to a central server,
0:30:59 where now this entity that collects all of the data
0:31:00 has to safeguard it.
0:31:03 Same way that Facebook is supposed to safeguard your data
0:31:05 against intrusion from the government.
0:31:08 Instead of having to sit in the central server,
0:31:10 what we can do is we can make use of the fact
0:31:12 that we all have supercomputers,
0:31:14 and that might be your smartphone.
0:31:16 Your smartphone is so much more powerful
0:31:18 than the computers that we used to launch rockets to space
0:31:19 a few decades ago.
0:31:22 So what this entity that’s trying to understand disease
0:31:25 could do is they could essentially send the intelligence
0:31:28 to my phone or ask questions from my data
0:31:30 and say, okay, here’s like how we’re tracking your symptoms.
0:31:32 Here’s what we know about your medical history,
0:31:35 but that data lives on my phone.
0:31:37 And all I’m doing is I’m sending intelligence
0:31:40 to the central entity to better understand the disease.
0:31:42 Apple Siri, for example, is trained that way.
0:31:44 So instead of Apple going in
0:31:46 and capturing all of your speech data
0:31:49 and collecting it centrally, in which case
0:31:51 Apple would be one of these companies
0:31:54 who has to protect it now and tomorrow,
0:31:56 they just send the model to your phone.
0:31:58 So they send Siri’s intelligence to your phone.
0:32:00 It listens to what you say.
0:32:03 It gets better at understanding, gets better at responding.
0:32:05 And instead of you sending the data,
0:32:07 it essentially just sends back a better model.
0:32:10 It learns, it updates, sends back the model to Siri.
0:32:13 And now everybody benefits ’cause we have a better speech model.
0:32:14 And that’s a totally different system
0:32:16 ’cause we don’t have to collect the data
0:32:18 in a central spot and then protect it.
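Here is a toy Python sketch of the federated-learning pattern described above: a small model is sent to each device, trained on data that never leaves that device, and only the updated weights travel back to be averaged. The linear model, learning rate, and synthetic data are illustrative; real deployments like the Siri example add many safeguards this sketch omits.

```python
import random

# Minimal sketch of federated learning: the model travels to each device,
# learns from data that stays local, and only the updated weights are sent
# back and averaged. Everything here is a toy illustration.

def local_update(weights, local_data, lr=0.1, epochs=5):
    """Train a tiny linear model y = w*x + b on one device's private data."""
    w, b = weights
    for _ in range(epochs):
        for x, y in local_data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b          # only weights leave the device, never local_data

def federated_round(global_weights, devices):
    """One round of federated averaging across participating devices."""
    updates = [local_update(global_weights, data) for data in devices]
    avg_w = sum(u[0] for u in updates) / len(updates)
    avg_b = sum(u[1] for u in updates) / len(updates)
    return avg_w, avg_b

# Each device holds its own noisy samples of a shared pattern (y ~ 2x + 1).
random.seed(0)
devices = [[(x, 2 * x + 1 + random.gauss(0, 0.1)) for x in range(5)] for _ in range(3)]

weights = (0.0, 0.0)
for _ in range(20):
    weights = federated_round(weights, devices)
print("learned weights:", weights)   # approaches (2, 1) without pooling raw data
```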
0:32:22 – But Sandra, I mean, the point that you just made
0:32:27 is that, yeah, Tim Cook may be saying that to us now.
0:32:29 We’re only sending you the model
0:32:31 and all your data is staying on your phone,
0:32:34 but tomorrow’s Apple CEO
0:32:36 could have a very different attitude, right?
0:32:38 So how do we know if they’re only still
0:32:41 sending the model right now?
0:32:42 – So I think it’s a great question.
0:32:44 And it’s funny that you mentioned Apple in that space
0:32:46 ’cause I think they’re thinking about it this way.
0:32:50 So again, I would much rather have Tim say,
0:32:53 we’re only gonna process locally on your phone,
0:32:55 even if they change it tomorrow.
0:32:57 What I’m mostly worried about
0:33:00 is that they collect my data today under Tim Cook
0:33:02 with the intention of making my experience better.
0:33:05 They collect it today and then tomorrow there’s a new CEO
0:33:08 ’cause now that CEO can just go back into the existing data
0:33:11 and make all of these inferences that we talked about
0:33:13 that are very intrusive and we don’t want to be out there.
0:33:15 At least even if Apple decides tomorrow
0:33:18 to shift from that model to a new one,
0:33:20 that’s gonna be publicly out there.
0:33:23 So if that happens, at least people can start from scratch
0:33:25 and decide whether they still want to use Apple products
0:33:26 or not.
0:33:29 My main concern is that all the data gets collected
0:33:31 and then leadership changes.
0:33:32 – Wow.
0:33:35 Okay, speaking of collected data,
0:33:39 you mentioned an example of a guy who applied to a store
0:33:41 and he took a personality test
0:33:46 and the personality test yielded let’s say undesirable traits.
0:33:50 And so he didn’t get that job
0:33:52 and that personality test stuck with him
0:33:56 and kind of hurt his employment in the future too.
0:33:58 So what’s the advice?
0:34:02 Don’t take the personality test or lie on the personality test.
0:34:04 What’s the guy supposed to do
0:34:07 if he’s required to take a personality test
0:34:08 to apply for a job?
0:34:11 – Yeah, and you’re really going to other dark places
0:34:13 but which I think is important,
0:34:15 ’cause for me, this example
0:34:18 and this one is not even using predictive technology, right?
0:34:20 So this one is a guy sitting down
0:34:22 and admitting that I think in his case,
0:34:24 he was like suffering from bipolar disorder.
0:34:27 So that kind of sends the score on neuroticism,
0:34:29 which is one of the personality traits
0:34:32 that says how you regulate emotions, through the roof.
0:34:34 And because he admitted to that,
0:34:38 he was essentially almost discarded from all of the jobs
0:34:40 that had like a customer facing interface
0:34:42 ’cause companies were worried that he wouldn’t deal
0:34:45 well with people who come and complain.
0:34:48 Now, the reason for why I think this example is important
0:34:52 is it just means who other people think we are
0:34:55 kind of closes some doors in our lives, right?
0:34:56 So sometimes it opens doors.
0:34:59 If someone thinks that you’re the most amazing person
0:35:01 and you absolutely deserve a loan,
0:35:04 maybe you have opportunities that other people don’t have
0:35:07 but oftentimes the danger comes in
0:35:10 when someone thinks that we have certain traits
0:35:13 that then would lead to behavior that we don’t wanna see.
0:35:16 And now in the context of self-reported personality tests,
0:35:19 at least you have like some say over what that image is.
0:35:23 If you take it to an automated prediction of an algorithm
0:35:24 and coming back to this notion
0:35:26 that those algorithms are pretty good
0:35:28 at understanding of psychology,
0:35:29 but they’re certainly not perfect.
0:35:31 So now you suddenly live in a world
0:35:33 where someone makes a prediction about you
0:35:35 based on the data that you generate,
0:35:37 you never even touched that prediction
0:35:39 ’cause you don’t even get to see it.
0:35:40 They predict that you’re neurotic
0:35:42 and maybe they even get it wrong.
0:35:44 Maybe you’re one of the people where the algorithm
0:35:46 makes a mistake and gets it wrong.
0:35:47 And now suddenly you’re excluded
0:35:50 from all of these opportunities for jobs, loans and so on.
0:35:53 And so I think for me, this notion that there’s someone
0:35:55 who passively tries to understand who you are
0:35:59 and then takes action that again, sometimes open doors,
0:36:01 sometimes it’s incredibly helpful
0:36:03 because maybe we connect you with mental health support
0:36:07 but at other times it might also close doors
0:36:09 in a way that you don’t even have insights to.
0:36:10 And for me, that’s the scary part
0:36:12 where I feel like we’re losing control
0:36:15 over essentially our lives.
0:36:18 – Wait, but are you saying that you should refuse
0:36:22 to take the personality test or you should lie?
0:36:26 – So in the case of the personality test,
0:36:27 first of all, it’s not a good practice.
0:36:29 So as a personality psychologist,
0:36:31 the way that we think of these personality tests
0:36:34 is that it shouldn’t be an exclusion criteria.
0:36:37 So I think that what they’re meant to do
0:36:40 is to say, here’s certain professions
0:36:43 that you might just be more suited for.
0:36:44 ‘Cause if you’re an introvert
0:36:46 who kind of hates dealing with other people
0:36:49 and you’re constantly at the forefront of like a sales pitch,
0:36:51 you’re probably not gonna enjoy it as much.
0:36:53 They were never really meant to say,
0:36:55 you got a low score on conscientiousness
0:36:56 and we’re gonna exclude you.
0:36:58 It’s also very short-sighted
0:37:01 because technically what makes a company successful
0:37:04 and what makes team successful is to have many people
0:37:05 who think about the world differently.
0:37:08 So I have this recent research that’s still very preliminary,
0:37:11 but it’s looking at startups
0:37:13 and it just looks at how quickly do they manage
0:37:15 to hire people with all of these different traits.
0:37:17 So you can come together and you can say, well,
0:37:19 but I think this way and then you think this way
0:37:21 and we all bring a different perspective to the table.
0:37:23 And they’re usually more successful.
0:37:25 So this notion that companies just say,
0:37:27 here’s a trait that we don’t wanna see.
0:37:29 It is very short-sighted.
0:37:30 What we do know, and this is,
0:37:33 I promise coming back to your question,
0:37:35 is that saying that you don’t wanna respond
0:37:38 to a questionnaire is typically seen as the worst sign.
0:37:41 So there was this study where they looked at things
0:37:42 that people don’t like to admit to, right?
0:37:44 I think it was like stuff about health,
0:37:46 stuff about people’s sexual preferences
0:37:50 and saying, I don’t wanna answer the question is worst
0:37:53 and hitting the worst option on the menu.
0:37:55 So I absolutely agree that in that case,
0:37:58 the guy essentially didn’t have a shot,
0:38:00 but the problem is once it’s recorded,
0:38:02 he didn’t even get to take the test again
0:38:04 because the results were just shared
0:38:06 from company to company.
0:38:09 – So what I hear you say is lie.
0:38:14 – In this case, frankly, if it had been me,
0:38:15 I probably would have lied.
0:38:17 If I had known that this is,
0:38:19 if the company is making the mistake
0:38:22 of using the test in that way,
0:38:26 what I would recommend to people taking the test is,
0:38:29 yeah, like think about what the company wants to hear.
0:38:30 – Okay.
0:38:33 – Which is harder to do with data, by the way.
0:38:36 It’s funny ’cause oftentimes when we think of predictions
0:38:39 of our psychology based on our digital lives,
0:38:40 we think of social media and it’s always,
0:38:43 but I can to some extent manipulate
0:38:45 how I portray myself on social media.
0:38:47 That’s true for some of these explicit identity claims
0:38:50 that we think about and have control over.
0:38:51 There’s so many other traces.
0:38:53 Take your phone again.
0:38:58 Like, my thing is that I’m not the most organized
0:38:59 person even though I’m German.
0:39:02 So I think I was expelled for a reason.
0:39:06 And I don’t organize my cutlery the way that my husband does.
0:39:09 And would I admit to this happily on a personality test
0:39:11 that like in the context of an assessment center,
0:39:12 probably not, right?
0:39:14 If someone gives me the question there
0:39:16 that says I make a mess of things,
0:39:18 would I be inclined to say I strongly agree?
0:39:20 Maybe not ’cause I understand
0:39:21 that’s probably not what they want to hear.
0:39:23 Now, if they tap into my data,
0:39:26 they see that my phone is constantly running out of battery,
0:39:28 which is like one of these strong predictors
0:39:30 of you not being super conscientious.
0:39:34 I constantly, I go to the deli on the corner five times a day
0:39:36 ’cause I can’t even plan ahead for the next meal.
0:39:37 And I constantly run to the bus.
0:39:40 So if someone was tapping into my data,
0:39:42 they would understand 100%
0:39:44 that I’m not the most organized person.
0:39:46 So there’s something about this data world
0:39:48 and all of these traces that we generate,
0:39:52 which are in a way much harder to manipulate
0:39:54 than a question on a questionnaire.
0:39:57 – Well, and now people listening to this podcast
0:40:02 are thinking, how many times did I use the pronoun I?
0:40:06 Oh my God, I’m telling people that I have, you know,
0:40:07 depression and stuff.
0:40:10 – And again, it’s not deterministic.
0:40:13 So you might be using a lot of I
0:40:15 because something happened that you want to share.
0:40:18 It’s just like on average, it increases your likelihood.
0:40:21 – Up next on Remarkable People.
0:40:24 – If I wanted to get a portfolio, a data portfolio,
0:40:25 on most of the people,
0:40:27 I would be able to get it really cheaply.
0:40:29 And that’s something that, again,
0:40:32 I think most of us or all of us should be worried about.
0:40:35 And you do see use cases where policymakers
0:40:37 are actually waking up to this reality.
0:40:40 There was this case of a judge actually across the bridge
0:40:41 from here in New Jersey,
0:40:44 whose son was murdered by someone
0:40:46 that she litigated against in the past.
0:40:48 They found her data online from data brokers,
0:40:51 tracked her down, and in this case, killed her son.
0:40:55 (gentle music)
0:40:59 – Thank you to all our regular podcast listeners.
0:41:02 It’s our pleasure and honor to make the show for you.
0:41:04 If you find our show valuable,
0:41:08 please do us a favor and subscribe, rate, and review it.
0:41:11 Even better, forward it to a friend,
0:41:13 a big mahalo to you for doing this.
0:41:18 – Welcome back to Remarkable People with Guy Kawasaki.
0:41:22 So you had a great section about how,
0:41:25 by looking at what people have searched Google for,
0:41:27 you can tell a lot about a person
0:41:30 or at least draw conclusions.
0:41:35 So do you think prompts will have the same effect?
0:41:37 Like, you know, what I ask ChatGPT
0:41:41 is a very good window into who I am.
0:41:42 – I think so, right?
0:41:44 And I don’t necessarily, I think it’s prompts.
0:41:46 I think it’s questions that we have.
0:41:47 And if you think about Google,
0:41:51 there’s questions that I type into the Google search bar
0:41:54 that I wouldn’t feel comfortable asking my friends
0:41:55 or even sharing with my spouse.
0:41:57 So it’s like this very intimate window
0:41:59 into what is top of mind for us
0:42:02 that we might not feel comfortable sharing with others.
0:42:03 Yeah, so I was actually,
0:42:04 which I thought was so interesting
0:42:06 ’cause I was part of this.
0:42:09 It was like a documentary, an artistic project,
0:42:11 and what they did is they invited a person.
0:42:13 So they found a person online.
0:42:15 They looked at all of her Google searches
0:42:19 and then they recreated her life all the way from,
0:42:20 here’s the job that she took,
0:42:22 kind of suffered from anxiety
0:42:24 and the feeling that she wasn’t good enough
0:42:26 in the space that she was working in,
0:42:29 all the way to her becoming pregnant
0:42:30 and then having a miscarriage.
0:42:33 And they kind of recreated her life with an actress.
0:42:35 And then at some point bring in the real person
0:42:37 and the person watches the movie
0:42:39 and you can see how just over time,
0:42:43 she realizes just how intimate those Google searches are
0:42:46 ’cause what the documentary team had created,
0:42:49 the life that they had recreated was so close
0:42:50 to her actual experience.
0:42:52 And again, just by looking at her data.
0:42:54 So for me, it was a nice way of showcasing
0:42:57 that it’s really not just this one data point
0:42:58 or a collection of data points,
0:43:02 but it’s a window into our lives and our psychology.
0:43:05 – And not to get too dark,
0:43:08 but the CEO of Google was on the stage, right?
0:43:12 So what happens when generative AI takes over
0:43:17 and the AI is drafting my email,
0:43:21 drafting my responses and to take an even further step,
0:43:25 what happens when it’s my agent answering for me?
0:43:28 Then is it still as predictive
0:43:31 or will the agent reflect who I really am
0:43:32 or it throws everything off
0:43:36 because it’s not guy answering anymore?
0:43:37 – So to me, that’s a super interesting question.
0:43:40 First of all, in a way like generative AI
0:43:42 democratized the entire process.
0:43:44 So when I started this research,
0:43:48 we had to get a data set that takes your digital traces.
0:43:50 Let’s say what you post on social media
0:43:52 and maybe a self-report of your personality.
0:43:55 And then we train a model that gets from social media
0:43:56 to your personality.
0:43:59 Now I can just ask ChatGPT and say,
0:44:01 hey, here are Guy’s Google searches.
0:44:02 Here’s what he bought on Amazon.
0:44:05 Here’s what we talked about on Facebook.
0:44:07 What do you think is his big five personality traits?
0:44:10 What do you think are his moral values?
0:44:11 What do you think is again,
0:44:12 like some of these very intimate traits
0:44:14 that we don’t want to share?
0:44:15 And it does a remarkable job.
0:44:17 It’s never been trained to do that,
0:44:18 but because it’s read the entire internet,
0:44:21 it has to understand so much about psychology.
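As an illustration of the kind of request described here, below is a small Python sketch that assembles digital traces into a prompt asking a chat model for Big Five estimates. The traces are invented, the wording is an assumption, and the actual call to whichever model you use is left out; as stressed earlier in the conversation, such inferences carry a lot of individual-level error.

```python
# Illustrative only: assembling the kind of prompt described above, where
# digital traces are handed to a general-purpose language model and it is
# asked to infer Big Five traits. The traces below are invented, and any
# such inference carries a lot of individual-level error.

def build_trait_prompt(searches, purchases, posts):
    traces = "\n".join([
        "Google searches: " + "; ".join(searches),
        "Amazon purchases: " + "; ".join(purchases),
        "Social media posts: " + "; ".join(posts),
    ])
    return (
        "Based on the following digital traces, estimate this person's "
        "Big Five personality traits (openness, conscientiousness, "
        "extraversion, agreeableness, neuroticism) and briefly explain "
        "your reasoning. Note the uncertainty in each estimate.\n\n" + traces
    )

prompt = build_trait_prompt(
    searches=["best surf spots near Santa Cruz", "how to fix a drone"],
    purchases=["hockey tape", "noise-cancelling headphones"],
    posts=["Great crowd at the book signing today!"],
)
print(prompt)   # this text would then be sent to whichever chat model you use
```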
0:44:23 And then obviously taking it to the next level,
0:44:25 it’s not just understanding,
0:44:28 but also replicating your behavior.
0:44:32 And the one thing that I’m most concerned about,
0:44:33 aside from, like, manipulation,
0:44:35 it’s just that it’s going to make us so boring.
0:44:37 If these language models,
0:44:39 they’re very good at coming up with an answer
0:44:43 that works reasonably well, like 80%.
0:44:45 But it’s very unlikely that it comes up with something
0:44:47 like super unique that we’ve never thought about,
0:44:49 that makes us different from other people.
0:44:51 So I think what happens is that we’re just going to see
0:44:54 more and more of who the AI believes we are.
0:44:57 ‘Cause it’s essentially almost like the solidified system
0:44:59 of here’s who I think guy is,
0:45:01 and now I’m just optimizing.
0:45:02 And in the way that humans learn,
0:45:05 there’s this trade off between exploitation.
0:45:08 So that is doing the stuff that you know is good for you.
0:45:11 So if you think about restaurant choices,
0:45:14 you can either go to the same restaurant time and again,
0:45:15 because you know that you like it.
0:45:17 So there’s not going to be any surprise.
0:45:19 It’s going to be a good experience.
0:45:21 But the second part of human learning
0:45:23 and experience is the exploration part.
0:45:25 And it exposes you to risk,
0:45:26 because maybe you go to a restaurant
0:45:28 and it turns out to be not great
0:45:29 and you would have been better off
0:45:31 going to your typical choice.
0:45:33 But maybe you actually also stumble on a restaurant
0:45:35 that you love.
0:45:36 And for that, you had to take the risk
0:45:38 and explore something new.
0:45:40 And my worry with these AI systems
0:45:42 and most types of personalization
0:45:45 is that they very much focus on exploitation.
0:45:47 They take what you’ve done in the past,
0:45:47 who they think you are,
0:45:50 and they try to give you more of that.
0:45:52 But you don’t get like the fun parts of exploring.
0:45:54 It’s like Google Maps is amazing
0:45:57 at getting you from A to B most efficiently,
0:46:00 but you also never stumble upon these cute little coffee shops
0:46:01 that you didn’t know were there before
0:46:03 because you got lost.
0:46:05 And for me, that’s in a way the danger
0:46:07 of having these systems replace us.
0:46:10 Is that just gonna make us basic and boring?
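One standard way to frame the exploration-versus-exploitation trade-off described above is an epsilon-greedy rule: mostly pick the option with the best track record, occasionally try something at random. The restaurant names, ratings, and epsilon value below are invented for illustration; pure personalization of the kind criticized here corresponds to epsilon = 0.

```python
import random

# Epsilon-greedy sketch of the exploration/exploitation trade-off:
# mostly pick the restaurant with the best average experience so far
# (exploitation), but occasionally try something at random (exploration).
# Restaurant names and ratings are invented.

def choose_restaurant(avg_rating: dict, epsilon: float = 0.2) -> str:
    if random.random() < epsilon:
        return random.choice(list(avg_rating))    # explore: maybe a new gem
    return max(avg_rating, key=avg_rating.get)    # exploit: the safe bet

history = {"Trusted Taqueria": 4.6, "New Ramen Bar": 3.9, "Corner Bistro": 4.1}
picks = [choose_restaurant(history) for _ in range(10)]
print(picks)
```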
0:46:15 – What if I ask the opposite question,
0:46:19 which is I want to help companies be more accurate
0:46:23 in predicting my choices, right?
0:46:25 So I wanna tell Google,
0:46:29 stop sending me world wrestling news and Google news
0:46:32 and stop telling me about the Pittsburgh Steelers
0:46:36 and stop sending me ads for trucks
0:46:38 ’cause I don’t want a truck and I don’t want a Tesla.
0:46:42 And I wanna make the case: what if you want companies
0:46:44 to understand you better, then what do you do?
0:46:47 – First of all, I think it should be an option, right?
0:46:49 So there should be two different modes for you, Guy,
0:46:51 one that says, right now I’m trying to explore.
0:46:53 Right now I just wanna see something
0:46:55 that’s different from what I typically want.
0:46:57 But also now I’m in this mode
0:46:59 where I just want you to know exactly what I’m looking for.
0:47:01 And I don’t want you to keep sending me the camera ad
0:47:03 when I haven’t been interested in the camera
0:47:05 for the last three weeks.
0:47:08 So in this case, what companies can do,
0:47:11 which I think they oftentimes don’t do enough of,
0:47:14 is have a conversation with you
0:47:17 that allows you to interact with the profile.
0:47:19 Most of the time they just passively say,
0:47:20 here’s who we think Guy is
0:47:23 and now we’re optimizing for that profile.
0:47:24 But if they get it wrong,
0:47:26 there’s no way for you to say no, no, no,
0:47:28 why don’t you just take out this prediction
0:47:30 that you’ve made ’cause it’s not accurate,
0:47:32 which is annoying for you ’cause now, as you said,
0:47:34 you get ads for wrestling
0:47:36 that you might not be interested in at all.
0:47:37 And it’s also bad for business
0:47:39 ’cause now they’re optimizing for something
0:47:41 that is not who you are.
0:47:43 So I think first of all, give people the choice
0:47:45 whether they wanna be in an explorer mode
0:47:47 or an exploitation mode.
0:47:50 And then second part is even within the exploitation mode
0:47:51 where we’re just trying to optimize
0:47:53 for who we think you are,
0:47:56 give people the choice and say, no, you’re wrong.
0:47:57 I wanna correct that.
0:47:59 It’s good for the user and it’s good for the company.
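A minimal sketch of the two suggestions here, an explore versus personalize mode plus a profile the user can inspect and correct; all names and the ranking logic are hypothetical:

```python
# User-selectable mode plus user-correctable interest predictions (illustrative).
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    user_id: str
    mode: str = "personalize"  # or "explore"
    interests: dict = field(default_factory=dict)  # predicted interest -> confidence

    def set_mode(self, mode: str) -> None:
        if mode not in ("personalize", "explore"):
            raise ValueError(mode)
        self.mode = mode

    def correct(self, interest: str, wanted: bool) -> None:
        # Let the user overwrite a prediction instead of the system silently
        # optimizing for a guess that is wrong.
        if wanted:
            self.interests[interest] = 1.0
        else:
            self.interests.pop(interest, None)

def pick_ads(profile: UserProfile, inventory: list, k: int = 3) -> list:
    if profile.mode == "explore":
        # Deliberately surface items outside the inferred profile.
        return [ad for ad in inventory if ad not in profile.interests][:k]
    ranked = sorted(profile.interests, key=profile.interests.get, reverse=True)
    return [ad for ad in ranked if ad in inventory][:k]

# Usage: the user removes the wrestling prediction, and the ranking respects it.
p = UserProfile("guy", interests={"wrestling": 0.9, "surfing": 0.8, "podcasts": 0.7})
p.correct("wrestling", wanted=False)
print(pick_ads(p, ["wrestling", "surfing", "podcasts", "pottery"]))  # no wrestling
```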
0:48:03 – Well, if anybody out there is listening
0:48:05 and embraces this idea,
0:48:09 I suggest you not call it exploitation mode,
0:48:12 maybe “optimization mode” might be more pleasant marketing.
0:48:14 – Personalization mode, yeah, that’s true, that’s true.
0:48:16 – Personalization mode, yeah.
0:48:19 Okay, so now three short,
0:48:21 tactical and practical questions.
0:48:23 So knowing all that you know,
0:48:26 and I think we went dark a few times
0:48:28 and showed people the risks here.
0:48:33 So do you use email, Messages, WhatsApp or Signal?
0:48:36 What do you use personally?
0:48:37 – I mostly use WhatsApp.
0:48:38 First of all, it’s encrypted,
0:48:41 but then it’s also just what everybody in Europe uses.
0:48:44 So I wouldn’t even give myself any credit for that.
0:48:46 And it’s funny ’cause I think the fact
0:48:48 that I’ve become a lot more pessimistic over the years
0:48:50 has to do with my own behavior.
0:48:53 So I know that we can be tracked all the time
0:48:56 and I still mindlessly say yes to all of the permissions
0:48:57 and so on and so forth.
0:48:59 So I think we just don’t have the time
0:49:02 and the mental capacity to do it all by ourselves.
0:49:04 There are only 24 hours in a day.
0:49:06 And I’d much rather spend a meal with my family
0:49:08 than go through all the terms and conditions and permissions.
0:49:11 So I think if it’s just up to us,
0:49:13 it’s an unfair battle, and we don’t stand a chance.
0:49:16 – And why, of all people in the world,
0:49:18 would you not default to Signal,
0:49:21 which encrypts both the message
0:49:24 and the meta information?
0:49:26 – It’s mostly because not that many of my friends
0:49:27 are using it.
0:49:29 So again, in this case, it would be a trade-off
0:49:31 between I get protected more,
0:49:33 but there’s also like a downside
0:49:35 because I can’t reach out to the people
0:49:36 that I want to reach out to.
0:49:38 And I feel like if that’s the trade-off,
0:49:40 the brains of most people will gravitate to,
0:49:43 I’m just gonna get all of the convenience that I want.
0:49:46 – Okay, second short question is,
0:49:49 when you use social media,
0:49:52 do you use it read-only, where you don’t post,
0:49:54 you don’t comment and don’t like,
0:49:57 or are you all in on social media
0:50:00 and dropping breadcrumbs all over the place?
0:50:02 – I think even if you don’t use social media,
0:50:05 even if I was completely absent from social media,
0:50:07 I would still be generating breadcrumbs all the time
0:50:09 ’cause I have a credit card and I have a smartphone
0:50:11 and there’s facial recognition.
0:50:12 I just don’t want people to think
0:50:15 that social media is the only way to produce traces.
0:50:17 Now I don’t actively use it as much,
0:50:21 but not because I know that I shouldn’t be doing it.
0:50:22 It’s just because it’s so much work.
0:50:26 I feel like I’d much rather have interesting offline conversations
0:50:29 than think about what I should post on X
0:50:31 and some of the other platforms.
0:50:35 So it’s a different reason than worries about privacy.
0:50:39 – Okay, now is the logic that, yes,
0:50:41 Google knows something, Apple knows something,
0:50:44 Meta knows something, X knows something,
0:50:45 everybody knows something,
0:50:47 but nobody knows everything.
0:50:51 So the fact that it’s all sort of siloed
0:50:55 keeps me safe or is that a delusion?
0:50:56 – I think it’s probably a delusion.
0:51:00 So my argument would be that they have most of these traces.
0:51:03 So if you think of applications, again,
0:51:05 like when you download Facebook here,
0:51:07 it asks you to tap into your GPS records,
0:51:10 into your microphone, into your photo gallery.
0:51:12 You use Facebook to log into most of the services
0:51:14 that you’re using elsewhere.
0:51:17 So they have a really holistic picture
0:51:20 of what your life looks like across all of these dimensions.
0:51:21 And by the way, they also have it for users
0:51:24 who don’t use Facebook because it’s so cheap now
0:51:27 to buy these data points from data brokers
0:51:31 that if I wanted to get a portfolio, a data portfolio,
0:51:32 on most of the people,
0:51:34 I would be able to get it really cheaply.
0:51:36 And that’s something that, again,
0:51:39 I think most of us or all of us should be worried about.
0:51:42 And you do see use cases where policymakers
0:51:43 are actually waking up to this reality.
0:51:47 There was this case of a judge actually across the bridge
0:51:49 from here in New Jersey, whose son was murdered
0:51:52 by someone that she litigated against in the past.
0:51:55 They found her data online from data brokers,
0:51:58 tracked her down, and in this case, killed her son.
0:52:00 Biden signed something into effect
0:52:03 that now protects judges from having their data out there
0:52:05 with data brokers, which makes me think
0:52:07 if we do this for judges and we’re concerned
0:52:09 that we can easily buy data about judges,
0:52:12 why not protect everybody else?
0:52:16 I think there’s a good point to be made that data on us
0:52:18 is so cheap and available from different sources
0:52:21 that even if you don’t use social media,
0:52:23 it’s easy to get your hands on.
0:52:27 – You introduced the concept in the last part of your book,
0:52:29 which I don’t quite understand.
0:52:33 So please explain what a data co-op does.
0:52:36 – Yeah, it’s one of my favorite parts of the book, actually,
0:52:38 ’cause it thinks about how you help people
0:52:39 make the most of their data, right?
0:52:43 So we’ve talked a lot about the dark sides,
0:52:44 and I think regulation is needed
0:52:48 if we wanna try to prevent the most egregious abuses,
0:52:50 but it doesn’t really give you a way of,
0:52:53 first of all, managing your data in the absence of regulation,
0:52:54 and it also doesn’t give you a way
0:52:56 to make the most of it in a positive way.
0:53:01 So data co-ops are essentially these member-owned entities
0:53:04 that help people who have a shared interest in using data
0:53:06 to both protect it and make the most of it.
0:53:09 So my favorite example is one in Switzerland
0:53:11 that’s called MIDATA, and they’re focused on the medical space.
0:53:14 So one of the applications that they have
0:53:17 is working with MS patients.
0:53:19 So patients who suffer from multiple sclerosis,
0:53:20 which is one of these diseases
0:53:22 that, again, is so poorly understood
0:53:24 ’cause it’s determined by genetics,
0:53:25 and it’s determined by your medical history,
0:53:27 by your environment, and what they do
0:53:30 is they have a co-op of people.
0:53:34 So patients who suffer from MS and healthy controls
0:53:36 that own the data together.
0:53:39 So it’s a little bit similar to the financial space
0:53:41 where you oftentimes have entities
0:53:43 that have fiduciary responsibilities.
0:53:47 So they’re legally obligated to act in your best interest.
0:53:50 So data co-ops are entities that are owned by the members.
0:53:53 They are legally obligated to act in their best interest,
0:53:56 and now you can imagine, in the case of the MS patients,
0:53:57 they can pool the data,
0:53:59 they can learn something about the disease,
0:54:01 and they can also then, in this case,
0:54:03 work with doctors of the patients
0:54:05 and say, here’s something that we’ve learned from the data.
0:54:08 This treatment might be particularly promising
0:54:10 for a patient at this stage with these symptoms.
0:54:12 Why don’t you try this?
0:54:15 So the people benefit immediately,
0:54:16 and also because they’re now together,
0:54:20 they can hire experts that help them manage their data,
0:54:23 think about, well, here’s maybe some of the companies
0:54:24 that we wanna share the data with,
0:54:26 but maybe we do it in a secure place
0:54:28 that doesn’t require us to send all of the data.
0:54:30 So these data co-ops, for me,
0:54:33 are just a new form of data governance
0:54:35 that gives us what I think of as allies.
0:54:38 So if we have a way that we wanna use data,
0:54:40 we need other people with a similar goal
0:54:42 so that we make data, first of all, more valuable,
0:54:45 ’cause if I have my data, my medical history
0:54:47 and my genetic data as an MS patient,
0:54:49 it doesn’t help me at all, I need these other people,
0:54:53 but it’s not coming together as a pharma company
0:54:56 that’s grabbing all of this data and then making profits,
0:54:58 but it’s coming together as a community
0:55:00 and benefiting directly.
0:55:02 So that’s what data co-ops are.
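One way to picture the pooling idea is a co-op that only ever answers aggregate questions about its members' records. A toy sketch, with a made-up minimum cohort size of five:

```python
# Member-owned pool that releases only aggregates, never individual rows (illustrative).
from statistics import mean

class DataCoop:
    def __init__(self, min_cohort: int = 5):
        self.records = []            # raw member data stays inside the co-op
        self.min_cohort = min_cohort

    def contribute(self, member_id: str, record: dict) -> None:
        self.records.append({"member": member_id, **record})

    def aggregate(self, filter_fn, field: str) -> float:
        cohort = [r[field] for r in self.records if filter_fn(r)]
        if len(cohort) < self.min_cohort:
            raise ValueError("cohort too small to release safely")
        return mean(cohort)

coop = DataCoop()
# e.g. MS patients contributing treatment outcomes (made-up numbers)
for i in range(8):
    coop.contribute(f"patient_{i}", {"stage": 2, "treatment": "A", "improvement": 0.4 + 0.05 * i})

# A clinician asks an aggregate question; no individual record is exposed.
print(coop.aggregate(lambda r: r["treatment"] == "A" and r["stage"] == 2, "improvement"))
```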
0:55:06 – But a data co-op doesn’t exactly solve the problem
0:55:09 of all my breadcrumbs on social media and Apple
0:55:11 and all the other stuff, right?
0:55:14 This is for a very specific set of data.
0:55:17 – Agreed, though it’s not necessarily limited to a specific set of data.
0:55:19 You could imagine in the European Union
0:55:21 where you’re allowed to pull your data,
0:55:24 you could have a data co-op of people
0:55:26 who just pool their Facebook data
0:55:27 and now they go to Facebook and say,
0:55:31 “Hey, look, we’re all gonna leave
0:55:33 “if you’re not putting in, let’s say,
0:55:34 “technology like federated learning
0:55:36 “to protect our privacy a bit more.”
0:55:38 So I do think that there are also ways
0:55:40 in which people can come together
0:55:42 and get just a lot more negotiation power at the table.
0:55:44 Because if you go to Facebook alone and say,
0:55:48 “Hey, I’m Guy, I wanna force you to do something different,”
0:55:49 not sure if they’re gonna listen.
0:55:52 If you suddenly have 10 million people doing that,
0:55:54 you are in a better spot.
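Federated learning, mentioned above, means each member trains on their own data locally and only model updates are pooled. A bare-bones federated averaging sketch with an illustrative linear model and made-up numbers:

```python
# Federated averaging: members share weight updates, never raw data (illustrative).
import numpy as np

def local_update(weights, X, y, lr=0.1, steps=20):
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Three members, each holding a private local dataset.
members = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    members.append((X, y))

global_w = np.zeros(2)
for _ in range(10):
    # Each member improves the model locally; only weights are shared back.
    updates = [local_update(global_w, X, y) for X, y in members]
    global_w = np.mean(updates, axis=0)  # the coordinator averages the updates

print(global_w)  # approaches true_w without anyone uploading raw data
```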
0:55:57 – Okay, I like this idea.
0:55:59 Okay, now I understand it better.
0:56:01 Thank you very much.
0:56:04 Listen, I like to end my podcast with one question
0:56:06 that I ask all the remarkable people
0:56:08 and clearly you’ve proven you’re remarkable
0:56:10 with this interview.
0:56:13 And that would be stepping aside, stepping back,
0:56:17 stepping up whatever direction you wanna use.
0:56:20 Like, what’s the most important piece of advice
0:56:24 you can give to people who wanna be remarkable?
0:56:26 – I think it’s don’t take yourself too seriously.
0:56:28 I think some humility in the way
0:56:32 that you approach yourself and others goes a long, long way.
0:56:33 – Alrighty.
0:56:35 This is a great episode.
0:56:36 Thank you so much.
0:56:38 And I hope I didn’t go too dark for you,
0:56:40 but this is a dark subject, actually.
0:56:41 – I do think it is.
0:56:44 And I think there’s a lot of room for improvement.
0:56:46 That’s why I care about the topic so much.
0:56:48 – Alrighty, so Sandra Matz.
0:56:50 Thank you very much for being a guest.
0:56:52 This has been Remarkable People.
0:56:55 I’m Guy Kawasaki, and I hope we helped you
0:56:57 be a little bit more remarkable today.
0:57:00 So my thanks to Madisun Nuismer, the producer,
0:57:04 Tessa Nuismer, our researcher, and Jeff Sieh and Shannon Hernandez,
0:57:05 who make it sound so great.
0:57:08 So this is the Remarkable People podcast.
0:57:12 Until next time, mahalo and aloha.
0:57:18 – This is Remarkable People.

What can your Google searches reveal about your personality? In this episode of Remarkable People, Guy Kawasaki explores the fascinating world of psychological targeting with Sandra Matz, Professor at Columbia Business School.

Matz shares eye-opening insights about how our digital footprints expose our deepest traits and behaviors. She reveals how companies predict our personalities through social media posts, explains the surprising link between language use and emotional states, and discusses why data privacy isn’t just about personal convenience—it’s about protecting ourselves in an uncertain future. Whether you’re concerned about data security or curious about what your online behavior reveals about you, this episode provides essential insights for navigating our increasingly digital world.

Guy Kawasaki is on a mission to make you remarkable. His Remarkable People podcast features interviews with remarkable people such as Jane Goodall, Marc Benioff, Woz, Kristi Yamaguchi, and Bob Cialdini. Every episode will make you more remarkable.

With his decades of experience in Silicon Valley as a Venture Capitalist and advisor to the top entrepreneurs in the world, Guy’s questions come from a place of curiosity and passion for technology, start-ups, entrepreneurship, and marketing. If you love society and culture, documentaries, and business podcasts, take a second to follow Remarkable People.

Listeners of the Remarkable People podcast will learn from some of the most successful people in the world with practical tips and inspiring stories that will help you be more remarkable.

Episodes of Remarkable People organized by topic: https://bit.ly/rptopology

Listen to Remarkable People here: **https://podcasts.apple.com/us/podcast/guy-kawasakis-remarkable-people/id1483081827**

Like this show? Please leave us a review — even one sentence helps! Consider including your Twitter handle so we can thank you personally!

Thank you for your support; it helps the show!

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
