AI transcript
– Hi, I’m Guy Kawasaki.
This is the Remarkable People Podcast.
And as you well know,
we’re on a mission to make you remarkable.
And the way we do that is we bring you remarkable guests
who explain why they’re remarkable
and how they’re remarkable and their remarkable work.
And today’s special guest is Sandra Matz.
She’s a professor at Columbia Business School
and she’s gonna talk to us about psychological targeting
and changing mindset.
Congratulations.
Shipping a book is a big, big accomplishment.
Trust me, I know this firsthand, so.
– I know, it feels good when it’s out.
Even though I had a great time writing it.
So I think I probably enjoyed it a lot more
than other authors told me I would.
So I already enjoyed the process.
– I have written 17 books
and I have told people 16 times
that I am not writing another book.
– Good luck with that one.
– Yeah, exactly.
– I’m just waiting to see when he’ll be placing
the order for number 18,
if that’s the case.
– Alrighty, so first of all, if you don’t mind,
let me just tell you something kind of off the wall
that your story about how you met your husband
at that speaking event,
that was the closest thing to porn
in a business book that I have ever read.
– And I spared you the details.
There’s actually a lot more to the story.
It’s a good one.
– I was reading that.
I said like, man, where is this going?
Like, is she gonna have this great lesson about how to,
you know, tell men to stick it and get out of my face?
And then I keep reading and it says,
oh, and the night went very, very well.
– What?
– It’s such a fun anecdote in my life,
how I met him.
So it was just at a conference.
He was late, for those of you who have read the book,
and I was like, what a jerk.
And I kind of had written him off.
And then as the night progressed
and I learned more about him by spying on him,
so to speak, at his place,
I was like, interesting guy.
I think I’m gonna give him a second chance.
And we’re married.
We have a kid now who’s one year old.
So it all worked out.
– And is he still meticulously neat?
Or, you know, was that just a demo?
And this is the real thing now.
– No, no, no.
So yeah, as part of the story,
it’s like, one of the first things I learned about him
is that I think he’s borderline OCD
’cause he just sorts everything.
He’s the kind of person who sorts his socks by color.
We just moved apartments,
which is with a one year old,
not the most fun thing to do in the world.
And there were boxes everywhere.
You could barely walk around the apartment
and I just opened one of the drawers
and he had put the cutlery, like perfection.
I’m like, there’s a hundred thousand boxes in this place.
I can barely find anything for the baby,
but I’m really glad that you spent at least an hour
perfecting the organization of the cutlery.
So yeah, he’s still like that.
– I hope your new place has a dishwasher
so he can load the utensils in the tray.
– Tell me about it.
That’s exactly what happens.
Yeah, I’m not allowed to touch the dishwasher anymore
’cause I don’t do it perfectly.
So you’re spot on, yeah.
(laughing)
– So you listeners out there, basically,
we have an expert in psychological targeting
and now she’s explaining how she had absolutely no targeting
in meeting her future husband, right?
– I think I nailed it from the beginning.
At his place, I looked at
how put together he was, and it gave me a pretty good
understanding, I think, of who he was.
I feel like I know what I signed up for.
– Okay, so this is proof that her theories work.
So I’ve already, you know, said this term,
psychological targeting, twice.
So I would really like,
this is an easy question to start you off,
now that we got past the porn part of this podcast,
which is, from a psychological targeting perspective:
What’s your analysis of the 2024 election?
– It’s a, I mean, interesting one.
So psychological targeting typically looks at individuals.
So it’s trying to see what can we learn
about the psychological makeup of people,
not by asking them questions,
but really observing what they do, right?
You can imagine in the analog world,
I might look at how someone treats other people,
whether they’re organized as my husband is.
And I think you can learn a lot by making these observations.
That’s true in the offline world.
That’s also true in the online world.
And I think if you just look at
the presidential candidates, the way that they talk,
if Trump writes in all caps all the time
and doesn’t necessarily give it a second thought
before something comes out on Twitter,
I think that is an interesting glimpse
into what might be going on behind the scenes.
– And do you think that his campaign
did like really great psychological targeting
of the undecided in the middle?
Or, you know, like from an academic perspective,
as a case study, how would you say his campaign was run?
– I talk a lot about psychological targeting
’cause for me, it’s interesting to understand
how the data translates into something
that we can make sense of as humans, right?
So if I get access to all of your social media data,
an algorithm might be very good at understanding
your preferences and motivations
and then play into these preferences.
But I, as a user, as a human,
can’t really make sense of a million data points
at the same time.
If I translate it into something
that tells me whether you’re more impulsive
or more neurotic or more open-minded,
that just kind of goes a long way in saying,
okay, now I know which topics you might be interested in
in the context of an election
or how I might talk to you in a way that most resonates.
Now, politics is an interesting case, right?
‘Cause ideally a politician would go knock on every door,
have a conversation with you
about the stuff that you care about,
and obviously they don’t have the time.
So there’s, I think, a lot of potential
of using some of these tools to make politics better,
but obviously, the way that some of these tools
were introduced in the context of the 2016 election
really shows the darker side,
and I don’t know if they’re using
any of these tools on the campaign trail now.
I think there are many ways in which you can use data
to drive engagement that’s not necessarily based
on predictions of your psychology at the individual level,
but certainly this idea that the more we know about people
and their motivations, their preferences,
dreams, fears, hopes, aspirations, you name it,
the easier it is for us to manipulate them.
– Well, in politics, as well as marketing,
which you bring up in your book,
I kind of got the feeling that what you’re saying is that,
you know, you psychologically target people
with different messages, but you could have the same product.
So in a sense, you’re saying that, you know,
yes, with the same product,
whether it’s Donald Trump or an iPhone or, you know,
whatever, a Prius, you can change your messaging
to make diverse people buy the product.
So did I get that right?
Or am I like imagining something
that’s kind of nefarious, actually?
– I think it depends on how you think about this, right?
‘Cause the fact is that we talk to people
in different ways all the time.
So imagine a kid who wants the same thing.
The kid wants candy.
The kid knows exactly that they should talk to their mom
in one way and that they should talk to their dad
in a different way, right?
So the goal is exactly the same.
The goal is to get the candy,
but we’re so good as humans
at making sense of who’s on the other side,
understanding what makes them tick,
how do I best persuade them to buy something?
And the same is true, I think, in politics and marketing.
The more that we understand where someone is coming from
and where they want to be in the end,
the easier it is for us to sell a product, right?
So products have the benefit that it’s not just what you buy,
right?
A lot of the times we buy products
because they have like this meaning to us.
They help us express ourselves.
They serve a certain purpose.
And if we can figure out what’s the purpose of a camera
for a certain person, what’s the purpose of the iPhone,
why do people care about immigration or take a certain stance,
why do people care about climate change?
Is it because they’re concerned about their kids?
Is it because they’re concerned about their property?
Then I think we just have a much easier way
of tapping into some of these needs.
And whether that’s offline,
when we, again, talk to our three-year-old,
not in the same way that we talk to our boss and our spouse,
or whether that’s marketers doing that at scale,
it’s really the more you understand about someone,
the more power you have over their behavior.
– So are you saying that at an extreme,
you could say to like a Republican person,
you know, the reason why we have to control the border
is because of physical security,
where to a liberal, you might say, you know,
there’s a different message,
but in both cases, you want to secure the border,
one for maybe job displacement, another for security.
I mean, it would be different,
but the same product in a sense.
– Yeah, or the same, yeah.
So a hundred percent, there’s all of this research,
and this actually is not my own.
It’s very similar to psychological targeting.
And in that space, it’s usually called moral reframing
or moral framing.
So the idea that once I understand
your set of moral values, right?
So there’s a framework that kind of describes
these five moral values.
The way that we think about what’s right or wrong
in the world, that’s how I think about it myself.
And they are loyalty, fairness, care, purity,
and authority.
And what we know is that across the political spectrum,
so from liberal to conservative,
people place different emphasis on some of these values.
So if you take a liberal,
typically they care about care and fairness.
So if you make an argument about immigration,
or about why climate change matters,
that’s tapping into these values,
and you’re more likely to convince someone who’s liberal.
Now, if you take something like loyalty,
authority, or purity, you’re more likely
to convince someone who’s more conservative.
And for me, the interesting part is that,
as humans, we’re so stuck with our own perspective, right?
If I as a liberal try to convince a conservative
that immigration might be a good thing,
I typically make that argument from my own perspective.
So I might be very much focused on fairness and care,
and it’s just not resonating with the other side,
’cause it’s not where they’re coming from.
And algorithms, because they don’t have an agenda,
don’t necessarily have their own perspective
on the world that’s driven by ideology.
It’s oftentimes much easier for them to say,
I’ll try and figure out what makes you care about the world,
what makes you think about what’s right or wrong in the world.
And now I’m gonna craft that argument along those lines.
And what’s interesting for me is that,
depending on how you construe it,
it can either be seen as manipulation.
So I’m trying to convince you of something
that you might not otherwise believe,
but it could also be construed as,
I’m really trying to understand
how you think about the world.
But I’m really trying to understand and engage with you
in a way that doesn’t necessarily come from my point of view,
but is trying to take your point of view.
So it really has, for me, these two sides.
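To make the moral reframing idea concrete, here is a minimal sketch of how a message could be matched to an audience’s strongest moral foundation. The foundation weights and message templates are invented for illustration; they are not from Matz’s research or any real campaign.

```python
# Minimal sketch of moral reframing: pick the message variant whose framing
# matches the audience's strongest moral foundation. All weights and message
# templates below are hypothetical illustrations.

MESSAGES = {
    "care": "Climate action protects the health of our children.",
    "fairness": "Everyone deserves an equal share of a livable planet.",
    "loyalty": "Protecting our land means protecting our community.",
    "authority": "Upholding the law means enforcing environmental standards.",
    "purity": "Keeping our air and water clean keeps them pure.",
}

def reframe(moral_values: dict[str, float]) -> str:
    """Return the message variant matching the strongest moral foundation."""
    strongest = max(moral_values, key=moral_values.get)
    return MESSAGES[strongest]

# Hypothetical profiles: liberals tend to weight care and fairness,
# conservatives loyalty, authority, and purity, as described above.
liberal = {"care": 0.9, "fairness": 0.8, "loyalty": 0.3, "authority": 0.2, "purity": 0.2}
conservative = {"care": 0.4, "fairness": 0.4, "loyalty": 0.8, "authority": 0.7, "purity": 0.6}

print(reframe(liberal))       # care-framed message
print(reframe(conservative))  # loyalty-framed message
```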
– So I could say to a Republican, the reason why
you wanna support the H1B visa program
is because those immigrants have a history
of creating large companies
which will create more jobs for all of us,
which is a very different pitch.
– Yeah, and so in addition to the fact
that we can just tap into people’s psychology,
there’s also this research that I love.
I think it’s mostly been done
in the context of climate change,
but it’s looking at what do people think
the solutions to problems are,
and how does that relate to what they believe in anyway?
If I tell you, well, solving climate change
means reducing government influence,
it means reducing taxes.
Then suddenly Republicans are like,
“Oh my God, climate change is a big problem
“because the solutions are very much aligned
“with what I believe in anyway.”
If you tell that to Democrats,
they’re like, “Actually, it’s not such a big deal
“’cause I don’t really believe in the solution.”
So the way I think that we play with people’s psychology
and how they think about the world
and show up in the world just means
that oftentimes it gives us a lot of power
over how they think, feel, and behave.
– Another point that I hope I interpreted correctly
is like, you know, I’ve been trained so long
to understand the difference
between correlation and causation, right?
So like, Steve Jobs wore a black mock turtleneck,
so you should wear a black mock turtleneck
because you’ll be the next Steve Jobs.
Well, didn’t quite work out that way for Elizabeth Holmes,
but I think you take a different direction.
I just want to verify this.
So you don’t really discuss correlation versus causation.
In a sense, what you’re saying is that
there doesn’t need to be a causative relationship
if there is a predictive relationship that you can harness.
So I don’t know, if for some reason
we noticed a lot of people with iPhones buy German cars,
well, that’s predictive.
I don’t have to understand why that’s true.
– Yeah, no, totally.
And I’ll give you an example that I think is interesting.
So one of the relationships that I still find fascinating
that we observe in the data that I don’t think
I would have intuited even as a psychologist
is the use of first person pronouns,
like people post on social media
about what’s going on in their life.
And I remember being at this conference,
it’s like a room full of psychologists
and this guy who was really like a leading figure,
James Pennebaker, in the space of natural language processing,
he comes up and he just asks the audience,
what do you think the use of first person pronouns?
So just using I, me, myself more often than other people,
what do you think this is related to?
And I remember all of us sitting at the table
and we’re like, oh, it’s gotta be narcissism.
If someone talks about themselves constantly,
that’s probably a sign that someone is a bit more narcissistic
and self-centered than other people.
Turns out that it’s actually a sign of emotional distress.
So if you talk a lot about yourself,
that makes it more likely that you suffer
from something like depression, for example.
And now taking a step back, it actually makes sense, right?
If you think back to the last time
that you felt blue or sad or down,
you probably were not thinking about
how to fix the Southern border
or how to solve climate change.
What you were thinking about is like,
why am I feeling so bad?
Am I ever gonna get better?
What can I do to get better?
And this inner monologue that we have with ourselves
just creeps into the language that we use
as we express ourselves on these social platforms.
Now, the causal link is not entirely clear, right?
It could be that I’m just using
a lot more first person pronouns
because I have this inner monologue.
Something else you see in the language of people
who are suffering from emotional distress
is all of these physical symptoms.
So just being sick, having body aches.
And again, it’s not entirely clear
if maybe you’re having a hard time mentally
because you’re physically sick,
but also maybe you’re physically sick
’cause you’re having like a hard time
with the problems that you’re dealing with.
So on some level, I don’t even care that much, right?
If I’m just trying to understand and say,
is there someone who might be suffering
from something like depression
who’s currently having a hard time
regulating their emotions?
I don’t necessarily care if it’s going from
physical symptoms to mental health problems
or the other way.
What I care about is if I see these words popping up
or if I see some of these topics popping up,
that’s an increase in the likelihood
that someone is actually having a hard time right now.
Now, I think what is interesting is that
the more causal these explanations get
and these relationships get,
oftentimes they’re a lot more stable.
So it could be that if it’s like a causal mechanism,
and first of all, it allows us to understand
something about interventions,
like how do we actually then help people get better?
And they’re also oftentimes the ones that last for longer
because it’s not just some fluke in the data
that maybe goes this direction or the other,
but it’s something that is really driving it
on a more fundamental level.
So you’re absolutely right in that,
oftentimes when we think of prediction,
we don’t need to understand which direction it goes in.
It’s still helpful to know if you think of interventions.
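As a concrete illustration of the pronoun signal discussed above, here is a minimal sketch that computes a first-person-singular pronoun rate over a batch of posts. The word list and example posts are illustrative; as Matz stresses, this is a noisy, population-level correlate, not a diagnostic.

```python
import re

# First-person singular pronouns, the feature Pennebaker's work links
# (on average, with lots of individual-level error) to emotional distress.
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

def first_person_rate(posts: list[str]) -> float:
    """Fraction of all word tokens that are first-person singular pronouns."""
    tokens = [t for p in posts for t in re.findall(r"[a-z']+", p.lower())]
    if not tokens:
        return 0.0
    return sum(t in FIRST_PERSON for t in tokens) / len(tokens)

posts = [
    "Why am I feeling so bad? Am I ever gonna get better?",
    "What can I do to get better?",
]
# A single number proves nothing about a person; only shifts in the
# aggregate rate carry (weak) signal.
print(f"{first_person_rate(posts):.1%} of tokens are first-person singular")
```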
– So at a very simplistic level,
could you make the case to a pharmaceutical company?
You know, look at a person’s social media
and if the person is saying “I” a lot,
sell them some lorazepam or some anti-anxiety drugs,
is it that simple?
– I personally would probably not go to the pharma companies
and make that proposition, but it is that simple.
And again, one of the points that I make in the book
that is super important to me
is that those are all predictions with a lot of error, right?
So it means that on average, if you use these words more,
you’re more likely to suffer from emotional distress.
That doesn’t mean that it’s deterministic.
There’s a lot of error at the individual level.
So if I’m a pharma company and I wanna sell these products,
yeah, on average, I might do better
by targeting these people,
but it still means that we’re not always going to get it right.
And then on the other side, what is interesting for me
is if you think about it,
not from the perspective of a pharma company,
but from the perspective of an individual,
I think there’s ways in which we can acknowledge
the fact that it’s not always perfect, right?
You could have this early warning system
for people who know, for example,
that they have a history of mental health conditions
and they know that it’s really difficult
once they’re at this valley of depression to get out.
So they could have something on their phone
that just tracks their GPS record,
sees that they’re not leaving the house as much anymore,
less physical activity, more use of first person pronouns.
And it almost has this early warning system.
It just puts a flag out and says, “It might be nothing.
It’s not a diagnostic tool. There’s a lot of error,
but we see that there’s some deviations from your baseline.
Why don’t you look into this?”
And for me, those are the interesting use cases
where we involve the individual,
acknowledging that there’s mistakes that we make
and the predictions,
but we’re using it to just help them
accomplish some of the goals that they have for themselves.
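A minimal sketch of the opt-in early-warning idea described here: compare recent behavior against a personal baseline and raise a gentle, non-diagnostic flag on a large deviation. The signal (daily step counts), window sizes, and threshold are all hypothetical.

```python
from statistics import mean, stdev

def deviates_from_baseline(baseline: list[float], recent: list[float],
                           z_threshold: float = 2.0) -> bool:
    """Flag if the recent average sits more than z_threshold standard
    deviations from the personal baseline. Illustrative, not diagnostic."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return False
    return abs(mean(recent) - mu) / sigma > z_threshold

# Hypothetical daily step counts: two weeks of baseline vs. the last week.
baseline_steps = [8200, 7900, 8500, 9100, 7600, 8800, 8300, 9000,
                  8100, 8600, 7800, 8400, 8900, 8200]
last_week = [2100, 1800, 2500, 1900, 2200, 2000, 1700]

if deviates_from_baseline(baseline_steps, last_week):
    print("It might be nothing, but there are some deviations"
          " from your baseline. Why don't you look into this?")
```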
– So speaking of interesting use cases,
would you do the audience a favor
and explain how you help the hotel chain
optimize their offering? ‘Cause I love that example.
– It was one of the first projects
and industry collaborations that we did
when I was still doing my PhD.
And there’s many reasons for why I actually liked the example.
But the idea was that we were approached by Hilton
and we worked with a PR company.
And the idea of Hilton was,
can we use something like psychological targeting?
So really tapping into people’s psychological motivations,
what makes them tick,
what makes them care about vacations and so on
to make their campaigns more engaging
and then also sell vacations
that really resonated with people.
And what I like about the example is that Hilton didn’t say,
well, we’re just gonna run a campaign on Facebook and Google
where we just passively predict people’s psychology
and then we try to sell them more stuff.
They turned it into this mutual two-way conversation
where they said, hey,
we wanna understand your traveler profile.
And for us to be able to do that,
if you connect with your social media profile,
we can run it through this algorithm
that actually we don’t control.
It’s the University of Cambridge is doing it.
We don’t even get to see the data.
But what we can do is we can spit out this traveler profile
and then make recommendations
that really tap into that profile.
So it was this campaign.
And you can imagine that doing that increased engagement.
People were excited about sharing it with friends.
It was essentially good for the business bottom line.
But it also gave, I think, users the feeling
that it’s a genuine value proposition.
So there was a company that operated, first of all,
with consent, ’cause it was up to you
whether you wanna share the data or not.
Here’s how this works behind the scenes.
Here’s what we give you in return.
So it was very transparent with the entire process.
And it was also transparent in terms of
here’s what we have to offer, right?
It’s by understanding your traveler profile.
We can just make your vacation a lot better.
So that’s one of the reasons why I like this example a lot.
– Now, just as a point of clarification,
you said the University of Cambridge, right?
– Yeah.
– Which has nothing to do with Facebook
and Cambridge associates, right?
– It has nothing to do with Cambridge Analytica at all.
It was funny ’cause I get mixed up with them all the time.
Not surprising ’cause I got my PhD there on the same topic.
And there was like, I mean, the idea originated there, right?
The idea that we could take someone’s social media profile,
predict things about their psychology,
originated at Cambridge and that’s where it was taken from.
But we were involved and for me,
it’s almost like a point of pride
and like a point that made me think about the ethics a lot
is we helped the journalists break the story.
So when the journalists, first in Switzerland,
were working on trying to see what happened
behind the scenes of Cambridge Analytica,
we just helped them understand the science.
How can you get all of the data?
How do you translate it into a profile?
So yeah, not related to Cambridge Analytica in any way,
other than trying to take them down.
– Okay, so I misspoke.
I said Cambridge associates, not Analytica.
So if you work for Cambridge Associates,
if there’s such a thing out there, I correct myself.
(laughing)
– I’m not sure.
– So listen, in the United States,
this is a very broad question,
but in the United States, who owns my data?
Me or the companies?
– Well, as you might have imagined,
it’s typically not you.
So the US is an interesting case
’cause it very much depends on the state that you live in.
So Europe, I would say has the strictest
data protection regulations.
So they very much try to operate on these principles
of transparency and control
and giving you at least the ability to request your own data
to delete it and so on and so forth.
In the US, California is the closest.
So California’s CCPA,
which is the California Consumer Privacy Act,
is like very close to the European Union principles,
where you as a producer of data,
even though companies can also hold a copy,
at least get to request your own data.
In most parts of the US,
the data that you generate,
you don’t even have a shot at getting it
because it sits with the companies
and you don’t even have the right to request it.
So I think we’re a very long way from this idea
that you’re not just the owner of the data,
but can also limit who else has access to it.
– So I live in California.
So you’re telling me there’s a way that I could go to Meta
or Apple or Google and say, I want my data
and I don’t want you selling it.
– That’s a great question.
So what you can do is you can request a copy of your data.
That’s one thing.
In many states, you can’t even do that.
You might generate a lot of medical data,
social media data, and,
even though you generated it,
you can’t even request a copy.
Now what you can do is you can go to Meta, request a copy,
and you can also request it to be deleted
or to be transferred somewhere else.
Now it’s still really hard to say,
I want to keep using the service and still protect my data.
And this is one of the things that I think makes it really
challenging for people to manage their data properly.
Because it’s a binary choice,
you can say, yeah, I want you to delete my data
and I’m not going to use the service anymore.
But then you also can’t be part of Facebook.
And yes, there are certain permissions
that you can play with.
What is public?
What is not public?
You can even play around with here’s some of the traces
that I don’t want you to use in marketing.
But typically, and this is true for,
I think still Meta and other companies,
it’s usually a binary choice.
Either you use our product with most of your data
being tracked and most of your data being commercialized
in a way that you might not always benefit from.
But you get to use the product for free
or you don’t use it at all.
And I think that’s the dichotomy
that’s really hard for the brain to deal with.
‘Cause if the trade-off that we have to make as humans
is service, convenience,
the ability to connect with other people in an easy way,
that’s what we’re going to choose over privacy
and maybe a risk of data breaches in the future
and maybe a risk of us not being able
to make our own choices.
So I think there’s now ways
in which you can somehow eliminate that trade-off.
‘Cause I think if that’s what we’re dealing with,
it’s an uphill battle.
– I need to go dark for a little bit here.
I read in your book about the example of Nazis.
And I just want to know like today,
could the Nazis go to Facebook, Apple and Google
and get enough information from the breadcrumbs
that we leave to track down where all the Jewish people are?
Would that be easy today?
– I think it would be incredibly easy.
And it’s one of these examples in the book
that I think is hard to process
and that’s why it’s so powerful.
I teach this class on the ethics of data
and there’s always a couple of people who say,
“Well, I don’t care about my privacy
’cause I have nothing to hide and the perks that I get,
they’re so great that I’m willing to give up my privacy.”
And what I’m trying to say is that it’s a risky gamble.
But first of all, it’s a very privileged position
’cause just because you don’t have to worry
about your data being out there,
doesn’t mean that that doesn’t apply to other people.
So I think in the US,
even the Roe versus Wade Supreme Court decision
meddling with abortion rights,
I think overnight essentially made women across the US
realize, “Hey, my data being out there
in terms of the Google searches that I make,
my GPS records showing where I go,
me using some period tracking apps,
it’s incredibly intimate.”
And it could overnight be used totally against me.
So the example that you mentioned about Nazi Germany
is such a powerful one
’cause it shows that leadership can change overnight.
And I care so much about it
’cause I obviously grew up in Germany.
So it was a democracy in 1932.
And then the next year it wasn’t.
And what we know is that atrocities
against the Jewish community across Europe
totally depended on whether religious affiliation
was part of the census.
So you can imagine if you have a country
where whether you’re Jewish or not
is written in the census,
all that Nazi Germany had to do was go to city hall,
get hold of that census data,
and find the members of the Jewish community.
It made it incredibly easy to track them down.
But of course you don’t even need that census data anymore
’cause you can now have all of this data that’s out there
that allows us to make these predictions
about anything from political ideology,
sexual orientation, religious affiliation,
just based on what you talk about on Facebook.
And you could even make the argument
that maybe it’s the leaders of those companies
handing over the data voluntarily.
And I think we’ve even seen in the last couple of days
how there’s like this political shift in leadership
when it comes to the big tech companies.
But even if they weren’t playing the game,
it would have been easy for a government to just replace
those C-suite executives with new ones
that are probably much more amenable to
some of the requests that they have.
And I think it’s terrifying.
And I think it’s a good example
for why we should care about personal data.
– Okay, so what you’re saying is,
if I look at pictures of the inauguration
and I see Apple, Google, Meta, Amazon up on stage.
And so now the government can say,
you know, according to Apple,
you were in Austin and then you landed at SFO.
And then according to your Visa statement,
you know, you purchased this.
And according to your phone’s GPS,
you went to a Planned Parenthood
in San Francisco, California.
So we suspect you of going out of state
to get an abortion.
So we’re opening up an investigation of you.
That’s all easy today.
– I think it’s very easy.
And again, I’m not saying that the leaders
of those big tech companies are sharing the data right now,
but it’s certainly possible.
And for me, there’s this line that I have in the book:
data is permanent and leadership is not.
Right, so once your data is out there,
it’s almost impossible to get it back.
And you don’t know what’s gonna happen tomorrow.
Even if Zuckerberg is not willing to share the data,
there could be a completely new CEO tomorrow
who might be a lot more willing to do that.
So I think that the notion that we don’t have to worry
in the here and now about our data being out there
is just a very short-sighted notion.
And ideally we can find a system.
And I think there are ways now in which we can get
some of these perks and some of the benefits
and they come from using data without us necessarily having
to collect the data in a central server.
– Okay, so if I’m listening to this
and I’m scared stiff because, you know,
yes, you could look at what I do.
You could look at, I went to the synagogue
or I went to the, you know, temple or whatever.
So yeah, and you’re right.
Any of those people could replace and who knows.
So then what do I do?
– I do think that people should be, to some extent, scared.
So I’m really trying to not say that
we’re all doomed because the data is out there
and technology can be used against us.
I think there are like many good use cases,
but I do think we should be changing the system
in a way that protects us from these abuses.
And the one thing that I describe in the book,
which I think we’re actually seeing a lot more of,
but just not that many people know of,
are these technologies that allow us to benefit from data
without necessarily running the risk
of a company collecting it centrally.
So what I mean is, and there’s a technology
that’s called federated learning.
And you can imagine the example that I give
is take medical data.
So if we wanna better understand disease
and we wanna find treatment that work for all of us,
not just the majority of people who usually
the pharma companies collect data of,
but like we wanna know, given my medical history,
given my genetic data, here’s what I should be doing
to make sure that I don’t get sick in the first place
or I can treat a disease that’s either rare
or not as easily understood,
we would all benefit from pooling data
and better understanding disease.
Now there’s a way in which you can say,
instead of me sending all of this data to a central server
and now this entity that collects all of the data,
they have to safeguard it.
Same way that Facebook is supposed to safeguard your data
against intrusion from the government.
Instead of having the data sit in a central server,
what we can do is we can make use of the fact
that we all have supercomputers,
and that might be your smartphone.
Your smartphone is so much more powerful
than the computers that we used to launch rockets to space
a few decades ago.
So what this entity that’s trying to understand disease
could do is they could essentially send the intelligence
to my phone or ask questions from my data
and say, okay, here’s like how we’re tracking your symptoms.
Here’s what we know about your medical history,
but that data lives on my phone.
And all I’m doing is I’m sending intelligence
to the central entity to better understand the disease.
Apple Siri, for example, is trained that way.
So instead of Apple going in
and capturing all of your speech data
and collecting it centrally, in which case
Apple would be one of these companies
who has to protect it now and tomorrow,
they just send the model to your phone.
So they send Siri’s intelligence to your phone.
It listens to what you say.
It gets better at understanding, gets better at responding.
And instead of you sending the data,
it essentially just sends back a better model.
It learns, it updates, sends back the model to Siri.
And now everybody benefits ’cause we have a better speech model.
And that’s a totally different system
’cause we don’t have to collect the data
in a central spot and then protect it.
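A toy sketch of the federated learning flow Matz describes: each device updates a copy of the model on data that never leaves it, and a server averages the returned weights (federated averaging). Real deployments, including whatever Apple actually runs for Siri, add secure aggregation, differential privacy, and far more machinery; this tiny linear model only illustrates the data-stays-local idea.

```python
# Toy federated averaging: raw data never leaves the "phones"; only model
# weights travel. A pure-Python illustration, not a production system.

def local_update(weights, data, lr=0.05):
    """One on-device pass of gradient descent for a 1-feature linear model."""
    w, b = weights
    for x, y in data:
        err = (w * x + b) - y
        w -= lr * err * x
        b -= lr * err
    return (w, b)

def federated_round(global_weights, devices):
    """Each device trains locally; the server averages the returned weights."""
    updates = [local_update(global_weights, data) for data in devices]
    w = sum(u[0] for u in updates) / len(updates)
    b = sum(u[1] for u in updates) / len(updates)
    return (w, b)

# Hypothetical per-device datasets of (x, y) pairs that stay on each phone.
devices = [
    [(1.0, 2.1), (2.0, 4.0)],
    [(1.5, 3.2), (3.0, 5.9)],
    [(0.5, 1.1), (2.5, 5.2)],
]

weights = (0.0, 0.0)
for _ in range(300):
    weights = federated_round(weights, devices)
print(f"learned w={weights[0]:.2f}, b={weights[1]:.2f}")  # roughly y = 2x
```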
– But Sandra, I mean, the point that you just made
is that, yeah, Tim Cook may be saying that to us now.
We’re only sending you the model
and all your data is staying on your phone,
but tomorrow’s Apple CEO
could have a very different attitude, right?
So how do we know if they’re only still
sending the model right now?
– So I think it’s a great question.
And it’s funny that you mentioned Apple in that space
’cause I think they’re thinking about it this way.
So again, I would much rather have Tim say,
we’re only gonna process locally on your phone,
even if they change it tomorrow.
What I’m mostly worried about
is that they collect my data today under Tim Cook
with the intention of making my experience better.
They collect it today and then tomorrow there’s a new CEO,
’cause now that CEO can just go back into the existing data
and make all of these inferences that we talked about
that are very intrusive and we don’t want to be out there.
At least even if Apple decides tomorrow
to shift from that model to a new one,
that’s gonna be publicly out there.
So if that happens, at least people can start from scratch
and decide whether they still want to use Apple products
or not.
My main concern is that all the data gets collected
and then leadership changes.
– Wow.
Okay, speaking of collected data,
you mentioned an example of a guy who applied to a store
and he took a personality test
and the personality test yielded let’s say undesirable traits.
And so he didn’t get that job
and that personality test stuck with him
and kind of hurt his employment in the future too.
So what’s the advice?
Don’t take the personality test or lie on the personality test.
What’s the guy supposed to do
if he’s required to take a personality test
to apply for a job?
– Yeah, and you’re really going to other dark places,
but I think it’s important,
’cause for me, this example
is not even using predictive technology, right?
So this one is a guy sitting down
and admitting that, I think in his case,
he was suffering from bipolar disorder,
which kind of sends his score on neuroticism,
one of the personality traits
that says how you regulate emotions, through the roof.
And because he admitted to that,
he was essentially almost discarded from all of the jobs
that had like a customer facing interface
’cause companies were worried that he wouldn’t deal
well with people who come and complain.
Now, the reason for why I think this example is important
is it just means who other people think we are
kind of closes some doors in our lives, right?
So sometimes it opens doors.
If someone thinks that you’re the most amazing person
and you absolutely deserve a loan,
maybe you have opportunities that other people don’t have
but oftentimes that the danger comes in
when someone thinks that we have certain traits
that then would lead to behavior that we don’t wanna see.
And now in the context of self-reported personality tests,
at least you have like some say over what that image is.
If you take it to an automated prediction of an algorithm
and coming back to this notion
that those algorithms are pretty good
at understanding of psychology,
but they’re certainly not perfect.
So now you suddenly live in a world
where someone makes a prediction about you
based on the data that you generate,
you never even touched that prediction
’cause you don’t even get to see it.
They predict that you’re neurotic
and maybe they even get it wrong.
Maybe you’re one of the people where the algorithm
makes a mistake and gets it wrong.
And now suddenly you’re excluded
from all of these opportunities for jobs, loans and so on.
And so I think for me, this notion that there’s someone
who passively tries to understand who you are
and then takes action that, again, sometimes opens doors,
sometimes it’s incredibly helpful
because maybe we connect you with mental health support
but at other times it might also close doors
in a way that you don’t even have insight into.
And for me, that’s the scary part
where I feel like we’re losing control
over essentially our lives.
– Wait, but are you saying that you should refuse
to take the personality test or you should lie?
– So in the case of the personality test,
first of all, it’s not a good practice.
So as a personality psychologist,
the way that we think of these personality tests
is that it shouldn’t be an exclusion criteria.
So I think that what they’re meant to do
is to say, here’s certain professions
that you might just be more suited for.
‘Cause if you’re an introvert
who kind of hates dealing with other people
and you’re constantly at the forefront of like a sales pitch,
you’re probably not gonna enjoy it as much.
They were never really meant to say,
you got a low score on conscientiousness
and we’re gonna exclude you.
It’s also very short-sighted
because technically what makes a company successful
and what makes a team successful is to have many people
who think about the world differently.
So I have this recent research that’s still very preliminary,
but it’s looking at startups
and it just looks at how quickly they manage
to hire people with all of these different traits.
So you can come together and you can say, well,
but I think this way and then you think this way
and we all bring a different perspective to the table.
And they’re usually more successful.
So this notion that companies just say,
here’s a trait that we don’t wanna see.
It is very short-sighted.
What we do know, and this is,
I promise, coming back to your question,
is that saying that you don’t wanna respond
to a questionnaire is typically seen as the worst sign.
So there was this study where they looked at things
that people don’t like to admit to, right?
I think it was like stuff about health,
stuff about people’s sexual preferences
and saying, I don’t wanna answer the question, is worse
than hitting the worst option on the menu.
So I absolutely agree that in that case,
the guy essentially didn’t have a shot,
but the problem is once it’s recorded,
he didn’t even get to take the test again
because the results were just shared
from company to company.
– So what I hear you say is lie.
– In this case, frankly, if it had been me,
I probably would have lied.
If I had known that the company is making the mistake
of using the test in that way,
what I would recommend to people taking the test is,
yeah, like think about what the company wants to hear.
– Okay.
– Which is harder to do with data, by the way.
It’s funny ’cause oftentimes when we think of predictions
of our psychology based on our digital lives,
we think of social media and it’s always,
but I can to some extent manipulate
how I portray myself on social media.
That’s true for some of these explicit identity claims
that we think about and have control over.
There’s so many other traces.
Take your phone again.
Like, my thing is that I’m not the most organized
person, even though I’m German.
So I think I was expelled for a reason.
And I don’t organize my cutlery the way that my husband does.
And would I admit to this happily on a personality test,
like in the context of an assessment center?
Probably not, right?
If someone gives me the question there
that says I make a mess of things,
would I be inclined to say I strongly agree?
Maybe not ’cause I understand
that’s probably not what they want to hear.
Now, if they tap into my data,
they see that my phone is constantly running out of battery,
which is like one of these strong predictors
of you not being super conscientious.
I constantly, I go to the deli on the corner five times a day
’cause I can’t even plan ahead for the next meal.
And I constantly run to the bus.
So if someone was tapping into my data,
they would understand 100%
that I’m not the most organized person.
So there’s something about this data world
and all of these traces that we generate,
which are in a way much harder to manipulate
than a question on a questionnaire.
– Well, and now people listening to this podcast
are thinking, how many times did I use the pronoun I?
Oh my God, I’m telling people that I have, you know,
depression and stuff.
– And again, it’s not deterministic.
So you might be using a lot of I
because something happened that you want to share.
It’s just like on average, it increases your likelihood.
– Up next on Remarkable People.
– If I wanted to get a portfolio, a data portfolio,
on most of the people,
I would be able to get it really cheaply.
And that’s something that, again,
I think most of us or all of us should be worried about.
And you do see use cases where policymakers
are actually waking up to this reality.
There was this case of a judge actually across the bridge
from here in New Jersey,
whose son was murdered by someone
that she litigated against in the past.
They found her data online from data brokers,
tracked her down, and in this case, killed her son.
(gentle music)
– Thank you to all our regular podcast listeners.
It’s our pleasure and honor to make the show for you.
If you find our show valuable,
please do us a favor and subscribe, rate, and review it.
Even better, forward it to a friend,
a big mahalo to you for doing this.
– Welcome back to Remarkable People with Guy Kawasaki.
So you had a great section about how,
by looking at what people have searched Google for,
you can tell a lot about a person
or at least draw conclusions.
So do you think prompts will have the same effect?
Like, you know, what I ask ChatGPT
is a very good window into who I am.
– I think so, right?
And I don’t necessarily think it’s prompts.
I think it’s questions that we have.
And if you think about Google,
there’s questions that I type into the Google search bar
that I wouldn’t feel comfortable asking my friends
or even sharing with my spouse.
So it’s like this very intimate window
into what is top of mind for us
that we might not feel comfortable sharing with others.
Yeah, so I was actually part of this,
which I thought was so interesting.
It was like a documentary project by artists,
and what they did is they invited a person.
So they found a person online.
So they found a person online.
They looked at all of her Google searches
and then they recreated her life all the way from,
here’s the job that she took,
kind of suffered from anxiety
and the feeling that she wasn’t good enough
in the space that she was working in,
all the way to her becoming pregnant
and then having a miscarriage.
And they kind of recreated her life with an actress.
And then at some point they bring in the real person
and the person watches the movie
and you can see how just over time,
she realizes just how intimate those Google searches are
’cause what the documentary team had created,
the life that they had recreated was so close
to her actual experience.
And again, just by looking at her data.
So for me, it was a nice way of showcasing
that it’s really not just this one data point
or a collection of data points,
but it’s a window into our lives and our psychology.
– And not to get too dark,
but the CEO of Google was on the stage, right?
So what happens when generative AI takes over
and the AI is drafting my email,
drafting my responses and to take an even further step,
what happens when it’s my agent answering for me?
Then is it still as predictive
or will the agent reflect who I really am
or does it throw everything off
because it’s not Guy answering anymore?
– So to me, that’s a super interesting question.
First of all, in a way like generative AI
democratized the entire process.
So when I started this research,
we had to get a data set that takes your digital traces.
Let’s say what you post on social media
and maybe a self-report of your personality.
And then we train a model that gets from social media
to your personality.
Now I can just ask ChatGPT and say,
hey, here are Guy’s Google searches.
Here’s what he bought on Amazon.
Here’s what he talked about on Facebook.
What do you think are his Big Five personality traits?
What do you think are his moral values?
What do you think is again,
like some of these very intimate traits
that we don’t want to share?
And it does a remarkable job.
It’s never been trained to do that,
but because it’s read the entire internet,
it has to understand so much about psychology.
And then obviously taking it to the next level,
it’s not just understanding,
but also replicating your behavior.
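As a sketch of what this zero-shot profiling can look like in practice: feed invented digital traces into a prompt and ask a model for trait estimates. The traces and prompt wording are made up, the call assumes the official OpenAI Python client (v1+) and an API key in the environment, and none of this is Matz’s actual pipeline.

```python
from openai import OpenAI  # assumes the official OpenAI Python client, v1+

# Invented digital traces for illustration only, not real data.
traces = """
Google searches: "best hiking trails near Santa Cruz", "how to negotiate a raise"
Amazon purchases: noise-cancelling headphones, a book on public speaking
Facebook posts: links to surfing videos and startup news
"""

prompt = (
    "Based only on the digital traces below, estimate this person's "
    "Big Five personality traits (openness, conscientiousness, extraversion, "
    "agreeableness, neuroticism) on a 1-10 scale, with one line of "
    "reasoning per trait. Note that these are rough guesses.\n" + traces
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # the model choice here is an assumption
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```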
And the one thing that I’m most concerned about,
aside from manipulation,
is just that it’s going to make us so boring.
These language models
are very good at coming up with an answer
that works reasonably well, like 80% of the time.
But it’s very unlikely that it comes up with something
like super unique that we’ve never thought about,
that makes us different from other people.
So I think what happens is that we’re just going to see
more and more of who the AI believes we are.
‘Cause it’s essentially almost like this solidified system
of here’s who I think Guy is,
and now I’m just optimizing for that.
And in the way that humans learn,
there’s this trade-off between exploitation and exploration.
Exploitation is doing the stuff that you know is good for you.
So if you think about restaurant choices,
you can either go to the same restaurant time and again,
because you know that you like it.
So there’s not going to be any surprise.
It’s going to be a good experience.
But the second part of human learning
and experience is the exploration part.
And it exposes you to risk,
because maybe you go to a restaurant
and it turns out to be not great
and you would have been better off
going to your typical choice.
But maybe you actually also stumble on a restaurant
that you love.
And for that, you had to take the risk
and explore something new.
And my worry with these AI systems
and most types of personalization
is that they very much focus on exploitation.
They take what you’ve done in the past,
who they think you are,
and they try to give you more of that.
But you don’t get like the fun parts of exploring.
It’s like Google Maps is amazing
at getting you from A to B most efficiently,
but you also never stumble upon these cute little coffee shops
that you didn’t know were there before
because you got lost.
And for me, that’s in a way the danger
of having these systems replace us.
Is that just gonna make us basic and boring?
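The exploration/exploitation trade-off she describes is the classic bandit problem from reinforcement learning, and an epsilon-greedy rule is the simplest fix for a recommender that only exploits: with some small probability, recommend something outside the known profile. The restaurant ratings here are hypothetical.

```python
import random

# Known average ratings; None means never tried. All values hypothetical.
ratings = {"usual spot": 4.5, "new thai place": None, "corner deli": 3.8}

def choose(ratings: dict, epsilon: float = 0.2) -> str:
    """Epsilon-greedy: explore a random option with probability epsilon,
    otherwise exploit the best-known choice."""
    known = {k: v for k, v in ratings.items() if v is not None}
    if not known or random.random() < epsilon:
        return random.choice(list(ratings))  # explore
    return max(known, key=known.get)         # exploit

# With epsilon = 0, "new thai place" is never tried: the "basic and
# boring" failure mode. Any epsilon > 0 keeps some serendipity alive.
random.seed(0)
for _ in range(5):
    print(choose(ratings))
```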
– What if I ask the opposite question,
which is I want to help companies be more accurate
in predicting my choices, right?
So I wanna tell Google,
stop sending me world wrestling news on Google News
and stop telling me about the Pittsburgh Steelers
and stop sending me ads for trucks
’cause I don’t want a truck and I don’t want a Tesla.
And I wanna make a case that what if you want companies
to understand you better, then what do you do?
– First of all, I think it should be an option, right?
So there should be two different modes for you, Guy:
one that says right now I’m trying to explore.
Right now I just wanna see something
that’s different to what I typically want.
But also now I’m in this mode
where I just want you to know exactly what I’m looking for.
And I don’t want you to send me the camera
even though I was not interested in the camera
for the last three weeks.
So in this case, I think what companies can do,
which I think they oftentimes don’t do enough of,
is have a conversation with you
that kind of allows you to interact with the profile.
Most of the time they just passively say,
here’s who we think Guy is,
and now we’re optimizing for that profile.
But if they get it wrong,
there’s no way for you to say no, no, no,
why don’t you just take out this prediction
that you’ve made ’cause it’s not accurate,
which is annoying for me ’cause now as you said,
you get like ads for wrestling
that you might not be interested in at all.
And it’s also bad for business
’cause now they’re optimizing for something
that is not who you are.
So I think first of all, give people the choice
whether they wanna be in an explorer mode
or an exploitation mode.
And then second part is even within the exploitation mode
where we’re just trying to optimize
for who we think you are,
give people the choice and say, no, you’re wrong.
I wanna correct that.
It’s good for the user and it’s good for the company.
– Well, if anybody out there is listening
and embraces this idea,
I suggest you not call it exploitation mode;
maybe optimization mode might be more pleasant marketing.
– Personalization mode, yeah, that’s true, that’s true.
– Personalization mode, yeah.
Okay, so now three short,
tactical and practical questions.
So knowing all that you know,
and I think we went dark a few times
and showed people the risks here.
So do you use email, messages, WhatsApp or signal?
What do you use personally?
– I mostly use WhatsApp.
First of all, it’s encrypted,
but then it’s also just what everybody in Europe uses.
So I wouldn’t even give myself any credit for that.
And it’s funny ’cause I think the fact
that I’ve become a lot more pessimistic over the years
has to do with my own behavior.
So I know that we can be tracked all the time
and I still mindlessly say yes to all of the permissions
and so on and so forth.
So I think we just don’t have the time
and the mental capacity to do it all by ourselves.
There are only 24 hours in a day.
And I’d much rather spend a meal with my family
than going through all the terms and conditions and permissions.
So I think if it’s just up to us,
it’s an unfair battle that we don’t stand a chance.
– And why, of all people in the world,
would you not default to signal
because it’s encrypted both the message
and the meta information?
– It’s mostly because not that many of my friends
are using it.
So again, in this case, it would be a trade-off
between I get protected more,
but there’s also like a downside
because I can’t reach out to the people
that I want to reach out.
And I feel like if that’s the trade-off,
the brains of most people will gravitate to,
I’m just gonna get all of the convenience that I want.
– Okay, second short question is,
when you use social media,
do you use it like read only and you don’t post,
you don’t comment and don’t like
or like are you all in on social media
and dropping breadcrumbs all over the place?
– I think even if you don’t use social media,
even if I was completely absent from social media,
I would still be generating breadcrumbs all the time
’cause I have a credit card and I have a smartphone
and there’s facial recognition.
I just don’t want people to think
that social media is the only way to produce traces.
Now I don’t actively use it as much,
but not because I know that I shouldn’t be doing it.
It’s just because it’s so much work.
I feel like I’d much rather have interesting offline conversations
than think about what I should post on X
and some of the other ones.
So it’s a different reason than worries about privacy.
– Okay, now is the logic that, yes,
Google knows something, Apple knows something,
Meta knows something, X knows something,
everybody knows something,
but nobody knows everything.
So the fact that it’s all sort of siloed
keeps me safe or is that a delusion?
– I think it’s probably a delusion.
So my argument would be that they have most of these traces.
So if you think of applications, again,
like when you download Facebook here,
it asks you to tap into your GPS records,
into your microphone, into your photo gallery.
You use Facebook to log into most of the services
that you’re using elsewhere.
So they have a really holistic picture
of what your life looks like across all of these dimensions.
And by the way, they also have it for users
who don’t use Facebook because it’s so cheap now
to buy these data points from data brokers
that if I wanted to get a portfolio, a data portfolio,
on most of the people,
I would be able to get it really cheaply.
And that’s something that, again,
I think most of us or all of us should be worried about.
And you do see use cases where policymakers
are actually waking up to this reality.
There was this case of a judge actually across the bridge
from here in New Jersey, whose son was murdered
by someone that she litigated against in the past.
They found her data online from data brokers,
tracked her down, and in this case, killed her son.
Biden signed something into effect
that now protects judges from having their data out there
with data brokers, which makes me think
if we do this for judges and we’re concerned
that we can easily buy data about judges,
why not protect everybody else?
I think there’s a good point to be made that data on us
is so cheap and available from different sources
that even if you don’t use social media,
it’s easy to get your hands on.
– You introduced the concept in the last part of your book,
which I don’t quite understand.
So please explain what a data co-op does.
– Yeah, it’s one of my favorite parts of the book, actually,
’cause it thinks about how you help people
make the most of their data, right?
So we’ve talked a lot about the dark sides,
and I think regulation is needed
if we wanna try to prevent the most egregious abuses,
but it doesn’t really give you a way of,
first of all, managing your data in the absence of regulation,
and it also doesn’t give you a way
to make the most of it in a positive way.
So data co-ops are essentially these member-owned entities
that help people who have a shared interest in using data
to both protect it and make the most of it.
So my favorite example is one in Switzerland
that’s called MIDATA, and they’re focused on the medical space.
So one of the applications that they have
is working with MS patients.
So patients who suffer from multiple sclerosis,
which is one of these diseases
that, again, is so poorly understood
’cause it’s determined by genetics,
and it’s determined by your medical history,
by your environment, and what they do
is they have a co-op of people.
So patients who suffer from MS and healthy controls
that own the data together.
So it’s a little bit similar to the financial space
where you oftentimes have entities
that have fiduciary responsibilities.
So they’re legally obligated to act in your best interest.
So data co-ops are entities that are owned by the members.
They are legally obligated to act in their best interest,
and now you can imagine, in the case of the MS patients,
they can pool the data,
they can learn something about the disease,
and they can also then, in this case,
work with doctors of the patients
and say, here’s something that we’ve learned from the data.
This treatment might be particularly promising
for a patient at this stage with these symptoms.
Why don’t you try this?
So the people benefit immediately,
and also because they’re now together,
they can hire experts that help them manage their data,
think about, well, here’s maybe some of the companies
that we wanna share the data with,
but maybe we do it in a secure place
that doesn’t require us to send all of the data.
So these data co-ops, for me,
is just like a new form of data governance
that gives us what I think of as allies.
So if we have a way that we wanna use data,
we need other people with a similar goal
so that we make data, first of all, more valuable,
’cause if I have my data, my medical history
and my genetic data as an MS patient,
it doesn’t help me at all, I need these other people,
but it’s not coming together as a pharma company
that’s grabbing all of this data and then making profits,
but it’s coming together as a community
and benefiting directly.
So that’s what data co-ops are.
– But a data co-op doesn’t exactly solve the problem
of all my breadcrumbs on social media and Apple
and all the other stuff, right?
This is for a very specific set of data.
– Agreed, though it’s not necessarily limited to a specific set of data.
You could imagine in the European Union
where you’re allowed to port your data,
you could have a data co-op of people
who just pull together their Facebook data
and now they go to Facebook and say,
“Hey, look, we’re all gonna leave if there’s no way,
“if you’re not putting in, let’s say,
“technology like federated learning
“to protect our privacy a bit more.”
So I do think that there is also ways
in which people can come together
and get just a lot more negotiation power at the table
than if you go to Facebook alone and say,
“Hey, I’m Guy, I wanna force you to do something different,”
not sure if they’re gonna listen.
If you suddenly have 10 million people doing that,
you are in a better spot.
– Okay, I like this idea.
Okay, now I understand it better.
Thank you very much.
Listen, I like to end my podcast with one question
that I ask all the remarkable people
and clearly you’ve proven you’re remarkable
with this interview.
And that would be stepping aside, stepping back,
stepping up whatever direction you wanna use.
Like, what’s the most important piece of advice
you can give to people who wanna be remarkable?
– I think it’s don’t take yourself too seriously.
I think some humility and the way
that you approach yourself and others goes a long, long way.
– Alrighty.
This is a great episode.
Thank you so much.
And I hope I didn’t go too dark for you,
but this is a dark subject, actually.
– I do think it is.
And I think there’s a lot of room for improvement.
That’s why I care about the topic so much.
– Alrighty, so Sandra Matz.
Thank you very much for being a guest.
This has been Remarkable People.
I’m Guy Kawasaki, and I hope we helped you
be a little bit more remarkable today.
So my thanks to Madisun Nuismer, the producer,
Tessa Nuismer, our researcher, Jeff Sieh and Shannon Hernandez,
who make it sound so great.
So this is the Remarkable People podcast.
Until next time, mahalo and aloha.
– This is Remarkable People.
What can your Google searches reveal about your personality? In this episode of Remarkable People, Guy Kawasaki explores the fascinating world of psychological targeting with Sandra Matz, Professor at Columbia Business School.
Matz shares eye-opening insights about how our digital footprints expose our deepest traits and behaviors. She reveals how companies predict our personalities through social media posts, explains the surprising link between language use and emotional states, and discusses why data privacy isn’t just about personal convenience—it’s about protecting ourselves in an uncertain future. Whether you’re concerned about data security or curious about what your online behavior reveals about you, this episode provides essential insights for navigating our increasingly digital world.
—
Guy Kawasaki is on a mission to make you remarkable. His Remarkable People podcast features interviews with remarkable people such as Jane Goodall, Marc Benioff, Woz, Kristi Yamaguchi, and Bob Cialdini. Every episode will make you more remarkable.
With his decades of experience in Silicon Valley as a Venture Capitalist and advisor to the top entrepreneurs in the world, Guy’s questions come from a place of curiosity and passion for technology, start-ups, entrepreneurship, and marketing. If you love society and culture, documentaries, and business podcasts, take a second to follow Remarkable People.
Listeners of the Remarkable People podcast will learn from some of the most successful people in the world with practical tips and inspiring stories that will help you be more remarkable.
Episodes of Remarkable People organized by topic: https://bit.ly/rptopology
Listen to Remarkable People here: **https://podcasts.apple.com/us/podcast/guy-kawasakis-remarkable-people/id1483081827**
Like this show? Please leave us a review — even one sentence helps! Consider including your Twitter handle so we can thank you personally!
Thank you for your support; it helps the show!
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.