Mike Caulfield: Verified Methodology for Fighting Misinformation

AI transcript
0:00:12 I’m Guy Kawasaki, and this is Remarkable People.
0:00:14 We’re on a mission to make you remarkable.
0:00:18 We have two ways to help you be remarkable right now.
0:00:24 One is our book, Think Remarkable, Nine Paths to Transform Your Life and Make a Difference.
0:00:26 I hope you’ll read it.
0:00:32 The other path is today’s podcast, and we have with us a guest named Mike Caulfield.
0:00:36 He is a research scientist from the University of Washington.
0:00:39 He works at the Center for an Informed Public.
0:00:46 Basically, Mike is renowned for his SIFT methodology, S-I-F-T.
0:00:52 This is a crucial tool in the fight against online misinformation, and it empowers educators
0:00:56 and learners to critically assess online content.
0:01:00 Let me explain the acronym SIFT.
0:01:09 S is for Stop, I is for Investigate the Source, F is for Find Trusted Coverage, and T is for Trace
0:01:11 back to the original.
0:01:18 In November 2023, Mike Caulfield published Verified: How to Think Straight, Get Duped Less, and Make
0:01:24 Better Decisions About What to Believe Online with another Remarkable People guest, Sam
0:01:25 Wineburg.
0:01:32 Mike’s dedication to digital literacy has not only earned him the 2017 Merlot Award,
0:01:38 but also recognition from top media sources such as The New York Times, NPR, and The Wall
0:01:39 Street Journal.
0:01:43 So, let’s welcome Mike Caulfield to Remarkable People.
0:01:48 We’re going to learn how to SIFT and figure out the truth of what we see and read and
0:01:50 hear online.
0:01:55 I’m Guy Kawasaki, this is Remarkable People, and here we go.
0:02:01 Are you drinking liquid death?
0:02:05 No, I mean, yeah, I guess, Dr. Pepper, is that liquid death?
0:02:11 No, liquid death is a brand, I’m not talking about the carcinogens in Dr. Pepper.
0:02:15 Okay, yeah, yeah, I mean, at some level, probably liquid death.
0:02:20 All right, let’s get serious here because we have to save democracy.
0:02:25 So first question, how do you verify news stories?
0:02:29 So let me just set up a little frame about what we talk about when we talk about verifying,
0:02:34 because I think sometimes people have a different conception of what you’re doing.
0:02:38 Generally, we’re looking at a situation where someone has seen something on the internet
0:02:42 that might be a news story, that might be an article, that might be a website, whatever
0:02:43 it is.
0:02:46 They’ve seen something on the internet, and they’ve had a reaction to it.
0:02:50 Normally that reaction is one of two things, either this is absolutely evidence of everything
0:02:55 I thought I believed, and I believe this is right and so forth, or it is, oh, this is nonsense,
0:03:01 this is foolishness, and so the question becomes, is the thing what you think it is?
0:03:05 And this is the question we focus on in the book, instead of: is it true or false?
0:03:09 We accept that you’ve come to something, you’ve already had a reaction.
0:03:13 By the time you’re checking, something’s already happened, you already have an impression.
0:03:18 The question isn’t what is this thing, the question is whether your impression was correct
0:03:19 or whether your impression was wrong.
0:03:24 And so when we talk about verifying news sources, what we’re saying here is you see something
0:03:30 on the web and you react to it, maybe because it’s called the Mississippi Ranger, and you’re
0:03:32 like, oh, well, this is a local paper in Mississippi.
0:03:35 I hope there’s not a paper called the Mississippi Ranger.
0:03:38 If there is, it’s a made-up name, so no libel intended.
0:03:40 Wait, I should go grab that domain right now.
0:03:43 I should have written down some fake names I could use.
0:03:46 But you see something called the Mississippi Ranger and you’re like, oh, this is just a
0:03:49 local paper in Mississippi covering an issue of something that happened there.
0:03:52 And so the question is, is it right?
0:03:54 If that’s your impression of it, was your impression correct?
0:03:58 And what we suggest on something like that is going to Wikipedia.
0:04:03 So if it’s a paper of any size, and actually I worked on a project getting local newspapers
0:04:08 on Wikipedia for a while and coordinating that, if it’s a paper of any size, it’ll have
0:04:09 a Wikipedia page.
0:04:14 You go there and see if there’s a paper of this name. If there’s not, it doesn’t necessarily
0:04:18 mean it’s not a paper, but you might want to find something else.
0:04:20 It might not be your best first stop.
0:04:25 Alternatively, you might go and you might find that the Mississippi Ranger is one of
0:04:31 a set of papers that’s run by a political consultant who runs something we call a pink
0:04:32 slime network.
0:04:36 I don’t know if you heard this term, but it’s a network of a lot of things that look
0:04:41 like they’re news producing sites, but really they’re auto-generated out of this stuff,
0:04:44 and usually for some sort of propaganda end.
0:04:48 It could turn out that’s actually being run by a political consultant.
0:04:52 And so when you say verify a source, part of what we’re saying is, okay, well, if I thought
0:04:56 I was getting this from a local news source, and that was behind my impression that, oh,
0:05:00 this is really useful evidence for what I believe or don’t believe, and it turns out,
0:05:04 no, actually, this is being run by a political consultant, or no, this is just a spam site,
0:05:08 or no one’s ever heard of this, then it’s maybe not as useful to me.
0:05:12 And the way you do that, the way you get that context when you’re
0:05:15 checking a new source, is to start with Wikipedia.
0:05:20 If you can’t find something on Wikipedia, type the name of the source into something
0:05:23 like Google News, see if it comes up there, and maybe type something
0:05:28 like funding, or who funds this publication, basic sorts of things that would give you some context
0:05:29 on that source.
0:05:38 Okay, but let’s, I was on the board of trustees of Wikipedia, and nobody believes in Wikipedia
0:05:40 more than me, all right?
0:05:47 But what if somebody says in response to you saying, check Wikipedia, oh, anybody can change
0:05:54 anything on Wikipedia, why would you use Wikipedia as your reference, when anybody can say anything?
0:06:00 Well, as you know, A, that’s not really true of Wikipedia anymore.
0:06:07 That was true of Wikipedia in 2006, and I hope I’m not being unfair here, but I was on Wikipedia
0:06:11 in 2006, and it was true then: you could get on there, you could say a lot of things, and those
0:06:14 things would not be noticed for long periods of time.
0:06:22 So the first thing is Wikipedia in 2023 is not Wikipedia of 2006 or 2008, there’s just
0:06:28 been a lot of effort on Wikipedia to build various bots, various things that look for
0:06:35 things that don’t have citation, vandalism, unsourced changes, new users coming in from
0:06:41 unidentified IPs that are strangely editing, a lot of pages with PR content, that sort
0:06:42 of thing.
0:06:43 So there’s that issue there.
0:06:48 It is true that on some smaller pages you can get away with this and that in Wikipedia for
0:06:51 a little bit of time; that’s not impossible.
0:06:56 But in general, if it’s a good Wikipedia page, you don’t have to trust the Wikipedia page,
0:07:01 because when you come to the Wikipedia page, anything that is contested, or could potentially
0:07:06 be contested, is going to have a link, a footnote to it, and you’re going to be able to use
0:07:08 those links to verify it.
0:07:12 And the thing too is, it doesn’t have to be perfect, Wikipedia doesn’t have to have an
0:07:17 answer to every single one of these, because this is the big thing, the web is abundant,
0:07:18 right?
0:07:23 If you came to the Mississippi Ranger, and this is the source I want to use, and it turns
0:07:26 out you can’t find any information on the Mississippi Ranger, it’s not like you’re out
0:07:31 of luck, it’s the internet, you can go and find a source that you can actually find information
0:07:32 on.
0:07:38 And so it doesn’t have to be perfect, because the question is not, is this specific source,
0:07:41 the perfect source for what I want to do?
0:07:45 Is this source sufficient for what I want to do, or should I move on and find something
0:07:46 else?
0:07:48 And you can move on, find something else where there’s a better Wikipedia page.
0:07:56 Do you think the day will ever come when you’re asked this question, and your answer is, check
0:08:03 ChatGPT, or Claude, or Bard, or Gemini, or anything.
0:08:08 You obviously said check Wikipedia, you did not say check an LLM.
0:08:13 Yeah, right now I wouldn’t say check an LLM, and there are a couple reasons for that.
0:08:17 They are improving, but a lot of them, the information’s out of date, some of them have
0:08:19 gotten better with that.
0:08:27 They tend to do really well with structure, they don’t always do as well with sort of
0:08:32 granular facts, so they’re masters of style and structure.
0:08:36 But the granular facts have been a persistent problem, and interestingly, there were some
0:08:40 predictions that a lot of that would be ironed out by now, but there’s a particular thing
0:08:45 in an LLM that people may not realize, which is that those algorithms are set to have a
0:08:49 little bit of flexibility in them, like a little bit of play, otherwise you’d always
0:08:54 get exactly the same prediction for every set of words in front of it, and it’d never get that
0:08:56 sort of real generative quality.
0:09:01 And that little bit of play that you have to put in there so that it can do some of these
0:09:05 things is also the thing that is giving you what people call these hallucinations.
0:09:10 It’s going to be a little tougher to work that out than I think people realize because
0:09:17 the same thing that’s giving you some of the sort of appearance of creativity
0:09:22 in the LLM, the thing that people associate with the generativity, is the same thing on
0:09:26 the other side that’s sometimes going too far in creating these hallucinations.
0:09:29 I’m not saying that it’ll never work out, the jury’s out.
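To make that “little bit of play” concrete, here is a minimal sketch of temperature sampling, the knob most LLM APIs expose for it; the function and numbers are illustrative, not any particular model’s code.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    # temperature = 0 is greedy decoding: the same output every single time.
    # Higher temperatures add the "play" described above, which creates
    # variety and, pushed too far, the drift people call hallucination.
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=float)
    if temperature == 0:
        return int(np.argmax(logits))
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

# Toy scores for three candidate next words.
print(sample_next_token([2.0, 1.5, 0.1], temperature=0))    # always picks index 0
print(sample_next_token([2.0, 1.5, 0.1], temperature=1.2))  # sometimes picks 1 or 2
```

At temperature zero the model repeats itself exactly; raising it buys variety at the cost of occasional confident nonsense, which is the trade-off being described.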
0:09:38 For the moment, here’s our take: people tend to think that LLMs are great
0:09:42 tools for novices.
0:09:48 We actually think they tend to be better tools for experts, or for cases
0:09:53 where a person is an expert in one field and has some moderate knowledge in another,
0:09:58 because you’re looking at the output and you have to evaluate it, and novices
0:10:03 can get overwhelmed with what they would need to check.
0:10:12 Since we’re on the topic of Wikipedia and LLMs, do you think that LLMs are an existential
0:10:18 risk to Wikipedia, because people will go to their favorite LLM and just ask a question
0:10:22 where they may have gone to Wikipedia before?
0:10:28 I understand that Wikipedia would be one of the best sources for LLMs, but what happens
0:10:34 if people don’t go to Wikipedia anymore and they just go to LLMs? It’s the same threat
0:10:36 search engines face.
0:10:41 Yeah, and it’s even worse than that because the other worry is what if people that want
0:10:47 to stack up Wikipedia credibility, Wikipedia clout, what if they start just using LLMs
0:10:53 to write their parts of the Wikipedia page and now you’re introducing a lot of these
0:10:56 errors potentially into Wikipedia via the LLMs.
0:10:59 So even if you go to Wikipedia, you’re getting LLMs.
0:11:04 And I know that Wikipedia is working on some ways to do some detection and so forth and
0:11:10 some policies about what you can and can’t do, but yeah, I mean, it’s an issue.
0:11:13 Are LLMs an existential threat to Wikipedia?
0:11:20 I think existential threats don’t have to be successful, they just have to threaten your
0:11:21 existence.
0:11:26 You can have an existential threat that turns out not to result in the death of something.
0:11:28 And I think in that way, yeah, I think so.
0:11:33 I think there’s a future where LLMs could do that.
0:11:39 That would be really sad because, of course, a lot of the productive capabilities of LLMs
0:11:44 comes from a lot of people putting in time and writing things like Wikipedia.
0:11:48 It’s a little bit of, I forget, what’s the opposite of a parasitic system, a symbiotic?
0:11:52 You can either have a parasitic system or a symbiotic system, right?
0:11:55 And there’s one future in which LLMs are parasitic.
0:12:00 They take all the stuff that people have worked on, provided value, they suck out that value,
0:12:05 they spit it back at the user, they erode the business model for these other things and
0:12:08 kind of just suck their host dry.
0:12:13 And then there’s a symbiotic future. A symbiote is like a parasite, but it lives
0:12:16 in a way that benefits its host organism.
0:12:21 And that symbiotic future, I think, could be one where we figure out how to make these
0:12:24 things work together, play to their individual strengths.
0:12:28 And we teach people, like when you want to go to one and when you want to go to the other.
0:12:30 But I think we need that symbiotic future.
0:12:35 And I think part of that symbiotic future is people figuring out when it’s best to consult
0:12:39 something like ChatGPT and when it’s best to consult Wikipedia.
0:12:46 Okay, so LLMs vis-à-vis search engines: I know I search on Google a whole lot less
0:12:48 these days.
0:12:54 You know, when I have a question like, how do I add an HP printer to my Macintosh network?
0:12:59 I used to go to Google for that and get 475,000 links.
0:13:05 But now I go to Perplexity, which is the world’s stupidest name for an LLM, but I go to one
0:13:09 of these things and it gives me the answer, not links, right?
0:13:13 And this relates to something in our book, which is that what most people are looking
0:13:19 for is a summary thing, like your average informational need is a summary, because the
0:13:24 number of things in which you’re not an expert far exceeds the number of things in which you
0:13:25 are an expert.
0:13:29 And the business model for summary is not great.
0:13:31 It hasn’t been great for a while.
0:13:34 The business model is in making an argument.
0:13:37 You take all your facts, and maybe you do a little bit of summarization, but you make
0:13:38 an argument.
0:13:40 You say, “This is the way things should be,” and you do that.
0:13:44 Or the business model is in selling people things: “This will solve your problem,”
0:13:46 not “I want to do a summary.”
0:13:51 There is this problem that AI addresses for some people, which is that people go to the
0:13:56 internet and they quite rightly want a summary of something, and instead of getting a summary,
0:14:02 they get a list of a lot of people making arguments for something instead of, “I just want to know
0:14:06 what the thing is,” or a lot of people selling something, saying, “Hey, that problem you
0:14:10 have, here’s your solution,” and people get frustrated with it.
0:14:15 And so part of what we have to do, I think, is, and this is outside the scope of the book,
0:14:19 but we have to come up with a business model for a summary to get people the answers they
0:14:23 want that is not, again, it’s not a parasitic business model where the summary is coming
0:14:27 from a lot of work that people did, but not necessarily giving back or supporting the people
0:14:28 that did the work.
0:14:34 This is starting to veer off into solving the problems of the world, but that’s okay.
0:14:42 I’m up for it.
0:14:47 In my simplistic world, if I did a search, how do I add an HP printer to my Macintosh
0:14:48 network?
0:14:49 Yeah.
0:14:53 Listen, just like on a search engine, if the right column has ads for toner cartridges
0:14:56 and HP printers, I’m fine with that.
0:14:57 I don’t care.
0:14:58 Right, right.
0:15:04 I want you to answer this question, because I don’t know how to answer it, which is,
0:15:13 how do you tell if a large language model is making shit up and having hallucinations?
0:15:14 How do I tell?
0:15:19 Generally, I consult it for something that I have some idea about already, and those would
0:15:20 be the sorts of things.
0:15:25 And very often, I’m checking an understanding that I already have when I’m going there.
0:15:30 Now that said, you’re talking about this different sorts of knowledge, and they’re
0:15:31 not the same.
0:15:33 You’re talking, for example, about procedural knowledge.
0:15:34 You want to set something up.
0:15:39 There’s one nice thing about procedural knowledge, which is, assuming you’re not operating a nuclear
0:15:45 power plant, you try the procedure, and if it works, great.
0:15:46 That’s confirmation.
0:15:49 If it doesn’t, then you find something else.
0:15:52 So for a lot of procedural knowledge, assuming you’re not working with dangerous chemicals
0:15:56 or something like that, yeah, I could see someone doing that.
0:15:57 They want to know how to do this.
0:16:01 And as a matter of fact, the classic example of that is LLMs are really good at writing
0:16:02 code.
0:16:09 And if you’re trying to write a bunch of computer code to reorganize files on your drive by
0:16:15 date and rename them or something like that, make a copy of that before you do it.
0:16:16 The LLM can write that.
0:16:17 You can run it.
0:16:20 You have some confirmation that the information you got back was good.
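As an illustration of the kind of procedural code being discussed, here is a minimal Python sketch; the folder names are hypothetical, and it copies rather than moves files, in the spirit of the “make a copy first” advice above.

```python
import shutil
from datetime import datetime
from pathlib import Path

# Illustrative only: back up the folder first, then copy files into
# subfolders named by modification date. Paths here are made up.
SOURCE = Path("~/Downloads").expanduser()
BACKUP = SOURCE.with_name(SOURCE.name + "_backup")
BY_DATE = SOURCE.with_name(SOURCE.name + "_by_date")

shutil.copytree(SOURCE, BACKUP, dirs_exist_ok=True)  # the safety copy

for item in SOURCE.iterdir():
    if item.is_file():
        day = datetime.fromtimestamp(item.stat().st_mtime).strftime("%Y-%m-%d")
        dest = BY_DATE / day
        dest.mkdir(parents=True, exist_ok=True)
        shutil.copy2(item, dest / item.name)  # copy rather than move while testing
```

Running something like this and checking the result is the kind of direct confirmation being described: the procedure either works or it doesn’t.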
0:16:24 The problem comes, and this is where our book fits in, and I’m glad you mentioned this
0:16:27 because in another interview I did, someone was really obsessed with why do we
0:16:33 even need this stuff if I’m just looking up how to set up YouTube TV on my computer or
0:16:34 something like that.
0:16:38 And yeah, for that set of things, it’s not really a book about that.
0:16:43 But there’s another set of things where you can’t directly verify the knowledge that you’re
0:16:44 given.
0:16:45 And that’s different.
0:16:48 And so someone says, look, fentanyl deaths in this country are at an all-time high.
0:16:50 We need a federal intervention for this.
0:16:53 They show you a chart, and maybe that’s true.
0:16:54 In this case, it probably is true.
0:16:55 They are, right?
0:17:00 But there’s no way for you to directly go out into the world and verify whether that information
0:17:01 was true or false.
0:17:05 And for that sort of thing, I wouldn’t trust an LLM unless you really know the subject.
0:17:10 I would, in that case, try to find something that was directly written by a human, particularly
0:17:13 a human that has reputational stakes.
0:17:17 Someone who, if they don’t take care with the truth, is likely to pay at least some
0:17:21 reputational consequence, because that’s how we build up trust, is we know, look, if this
0:17:27 person gets it wrong, at the very least, it’s an embarrassing next day at work, which is
0:17:29 often enough to have people get things right.
0:17:37 Therefore, I could make the argument that the fact that LLMs have hallucinations means
0:17:40 that Wikipedia still has a place in this world.
0:17:43 Oh, yeah, I would absolutely agree with that.
0:17:47 And one of the things about Wikipedia, and the way it’s structured, is editors have stakes.
0:17:51 I don’t think people understand this, but some people play Wikipedia like a video game,
0:17:57 in the sense that there’s a dashboard there, and when their changes get reverted, it’s
0:17:58 painful.
0:18:02 The flip side of that is sometimes you get these wars, which get very emotional about
0:18:03 it.
0:18:05 But people that work on Wikipedia have reputational stakes.
0:18:10 They have a dashboard that shows how many times they created an article that stayed up,
0:18:14 how many times they contributed an edit to stay there, how many times it was reversed.
0:18:18 And these things keep the majority of people in Wikipedia on track.
0:18:23 ChatGPT, like the company behind ChatGPT has stakes, but the actual thing producing the output doesn’t
0:18:24 have any stakes.
0:18:25 No.
0:18:26 And that’s a big difference.
0:18:28 No, not at all.
0:18:29 Yeah.
0:18:36 Okay, so let’s say that I go to a website, and it’s got this .org domain, and I go to the
0:18:43 About page, and it talks about ending climate change and making America great again.
0:18:45 That’s not a good phrase.
0:18:50 It looks like it’s a legitimate .org, .edu, something.
0:18:56 And so what tricks do people use to make a site look credible?
0:19:03 And in your case, it’s owned by a political consultant who’s trying to foster anti-union
0:19:05 voting or something.
0:19:07 Kill the minimum wage or something like that.
0:19:08 Yeah.
0:19:09 Yeah.
0:19:14 So here’s what, here’s the core of what most people do, is we talk in our book about cheap
0:19:17 signals and expensive signals.
0:19:21 And an expensive signal is like your reputation, like it takes a lifetime to build a reputation.
0:19:23 You’re very careful about your reputation.
0:19:27 You have a history with people that you can maybe find online over years.
0:19:32 If you’re a reporter, you can look at the articles you wrote 20 years ago in the Washington
0:19:35 Post, and the articles you wrote yesterday at the Guardian.
0:19:36 So there’s reputation.
0:19:38 That’s an expensive signal.
0:19:42 And then there’s what we call cheap signals, and cheap signals are anything that gives
0:19:50 the appearance of authority or expertise or being in a position to know that is relatively
0:19:51 cheap to get.
0:19:54 So a classic example of that is .org.
0:20:03 The cost of getting a .org is like $12.95; you go down to Namecheap and get a .org.
0:20:07 But someone might look at that and they might say, “Oh, it’s a nonprofit organization.”
0:20:08 So it’s a cheap signal.
0:20:12 Being a nonprofit organization and having a bunch of people that talk about your work
0:20:14 over time in a bunch of different places, that’s very expensive.
0:20:16 That takes a long time to cultivate.
0:20:18 Buying a .org does not.
0:20:24 In a similar way, having a good layout on these sites, there may have been a time where
0:20:29 in the 1990s, having a good layout to the site, having a crisp look, at the very least
0:20:31 it meant that you had some money.
0:20:35 You hired a web developer who could sling that code, get something up, cut it all up
0:20:39 in Photoshop, and lay it all out in HTML in Dreamweaver or something.
0:20:45 It signaled something, maybe not always a lot, but it signaled, “Look, someone believes
0:20:47 in these ideas enough to fund it.”
0:20:48 It signaled something.
0:20:49 Nowadays, it signals nothing.
0:20:53 I think most people know this, but in case they don’t, you can get a website that looks
0:20:56 as good as your average newspaper.
0:21:01 Just go to WordPress, pick a template, start typing, and you’ll get something.
0:21:04 In many cases, it looks cleaner than your average newspaper, because if you’re faking
0:21:07 a newspaper, you don’t have to run dozens of ads.
0:21:09 That’s a cheap signal, too.
0:21:14 What the people that want to fool you do is they look at all the things that people look at
0:21:18 to get a sense of whether something has a good reputation, and then they look at the
0:21:23 ones that they can get done in an hour or get done in two minutes.
0:21:26 They do that, and that’s what they use to fool you.
0:21:29 Whenever you’re looking at something, what we encourage people to do is think about how
0:21:34 hard would it be to fake that, and does that require getting in a time machine and building
0:21:40 10 years of relationships, or does that involve going to Namecheap and buying a domain name?
0:21:42 There’s a vast difference between the two things.
0:21:46 What we found in our work is that people made no distinctions between those.
0:21:51 As a matter of fact, people tended to overvalue the cheap stuff because it was more immediately
0:21:52 apparent.
0:21:56 You can see it just looking at the page, whereas they tend to undervalue the expensive stuff because
0:22:01 you had to go out, and you had to say, “Hey, if this guy is an expert in this, there’s
0:22:05 probably at least a newspaper article or two that quotes them as an expert.”
0:22:09 That sort of stuff took a little more effort, just a little bit more effort, but it’s so
0:22:14 much better evidence than the stuff that’s about the surface of the page, or the domain
0:22:19 name, or whether they have an email address you can mail, or whether there’s an avatar
0:22:24 picture of a real person who might be a real person, might be an AI person, might be some
0:22:27 other person that doesn’t know their picture is being used.
0:22:34 In this scenario, when you land at some organization’s homepage, would you also go to Wikipedia
0:22:36 and look up that organization?
0:22:37 Yeah, absolutely.
0:22:38 Absolutely.
0:22:44 In fact, one of the things we found Wikipedia is best for is telling you what an organization
0:22:45 is about.
0:22:48 That doesn’t mean telling you whether the organization is true or false; that’s kind of
0:22:51 a nonsensical idea, is an organization true or false?
0:22:54 It doesn’t even necessarily mean telling you whether an organization is credible or not.
0:22:58 It just means, “Is this the sort of source that I thought it was, that I thought I was
0:23:00 getting my stuff from?”
0:23:03 For example, you mentioned some of these advocacy sites.
0:23:08 You might go to an advocacy site, and one person might go to an advocacy site, stop
0:23:14 minimumwage.com, and it says, “We’re a coalition of restaurant workers just looking to protect
0:23:18 our lifestyle with tips, and this bill is going to be horrible for us.”
0:23:21 One person might go to that and be like, “Okay, I know they’re not restaurant workers.
0:23:23 I know this is run by a lobbyist firm.”
0:23:27 But I’m interested in seeing what arguments the lobbyist firm is advancing.
0:23:28 If that’s your jam, then great.
0:23:29 Go wild.
0:23:31 I want to see what a lobbyist organization thinks.
0:23:34 I go to a lobbyist organization page, I find out what the lobbyist organization thinks.
0:23:38 Maybe they’re making a good argument, but maybe it’s something I should think about.
0:23:43 But yeah, for most people, when they come to something, they think, “This is a research
0:23:48 group, or this is a community organization, this is a grassroots organization.”
0:23:54 I should say, again, I don’t know, I’m just making names up here, so I hope that’s not
0:23:57 a URL that’s in play.
0:24:01 The idea is you come to that page, you think it’s one thing, you go to the Wikipedia page,
0:24:07 and it says, “Hey, this organization was originally founded by a coalition of the nuclear
0:24:11 energy industry and the coal industry.”
0:24:13 Maybe they have something interesting to say.
0:24:16 Maybe they’re something, I’m not saying their facts are wrong, but it’s also maybe not your
0:24:20 best first stop for a summary of what our energy future should look like.
0:24:23 You might want to go somewhere else.
0:24:24 Okay.
0:24:33 So now, tell me, do you think that 100 Twitter employees sitting in Austin will have any
0:24:37 impact on Twitter/X?
0:24:40 I guess it depends on what impact you’re thinking here.
0:24:42 The impact is a low bar.
0:24:43 Safeguard.
0:24:44 True.
0:24:45 Yeah.
0:24:53 Twitter has placed its eggs in the community notes basket, and this is a way that users
0:24:57 can add labels to things and rate them and so forth.
0:25:02 They say it’s inspired by Wikipedia; there are some elements of it that are reminiscent of that.
0:25:03 There’s some that are not.
0:25:06 They’ve invested less in their trust and safety team.
0:25:08 I just say, “Approach these things with caution.”
0:25:13 On Twitter/X, I’ve been advising people, to the extent they stay on it, to veer more towards
0:25:18 the Following tab at this point than the For You tab, because that algorithm to me seems like
0:25:23 it’s more and more tuned to just promote sensational content from a bunch of people that I’ve never
0:25:24 seen before.
0:25:25 Okay.
0:25:29 Mike, honestly, when you read the news that Elon Musk says, “We’re going to get 100 people
0:25:38 in Austin to address these issues on Twitter,” did you or did you not start laughing?
0:25:40 This is a yes or no.
0:25:41 Did I start?
0:25:42 I sighed.
0:25:43 Let’s say that I sighed.
0:25:44 Yeah.
0:25:45 Yeah.
0:25:51 I think if you want to do that at scale, you need to fund it at a better level.
0:25:52 I think it’s complex.
0:25:54 I do think it’s complex.
0:25:57 I do think that even old Twitter never quite had it right.
0:26:04 It’s hard thinking about how to do moderation, how to do labeling, how to do context, how
0:26:06 to do all these various things.
0:26:07 It’s a lot more difficult than I think people recognize.
0:26:13 You’re always looking at these competing goods that you’re
0:26:17 trying to protect, and you’re trying to do that in this fast-paced environment where
0:26:20 you’re making decisions in the moment.
0:26:26 I think from the perspective of our book, I think for the time being, you’re a little
0:26:27 bit on your own.
0:26:37 I hope we come to a future where context is rightly seen as a core competitive advantage
0:26:41 and community feature for any information offering.
0:26:45 We don’t see this as something that is an add-on, but we see, look, people are coming
0:26:47 to this for information.
0:26:53 Information has to be contextualized, and we should compete by providing the best contextual
0:26:55 engine for that information.
0:26:56 That means labels.
0:26:58 That means a well-staffed team, et cetera.
0:26:59 But we’re not there yet.
0:27:01 I don’t think people fully understand that.
0:27:09 My solution to this is that by default, a social media’s home feed, i.e., the stuff
0:27:15 that’s flying past you, it should be only the people you have manually followed.
0:27:18 Because at least that way, you can control.
0:27:24 If I only want to follow the New York Times, Washington Post, and NPR, I don’t want you
0:27:32 shoving shit into my feed from Rudy Giuliani and whatever, QAnon and all that.
0:27:34 It seems to me that would go a long way.
0:27:36 I would pay for that service.
0:27:41 I also like a platform called Bluesky, and it’s got this idea of the customized feed,
0:27:45 and you opt into a default feed, which is more or less what you’re saying.
0:27:48 Everybody in that feed, you’re following, and it has a very simple algorithm you can
0:27:49 understand.
0:27:55 So the Bluesky algorithm was people you follow and content that got 12 likes; 12 was the
0:27:57 magic number for a while.
0:27:59 And then, yeah, you could choose other feeds.
0:28:02 If you want to go a little wider, you could go a little wider.
0:28:05 If you want people you don’t know who are talking about sports teams that you like,
0:28:10 but maybe not specifically with a hashtag, you got something that pulls that all together.
0:28:14 So I do think that thinking about the user experience in that way is probably the future
0:28:15 there.
0:28:20 But right now, yeah, right now, on a lot of platforms it’s one feed, and on Twitter, the For You feed.
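For readers who want the idea spelled out, here is a toy sketch of a “people you follow plus a like threshold” feed rule like the one just described; the data shapes and threshold are illustrative, not Bluesky’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    likes: int

def following_feed(posts, following, min_likes=12):
    # The simple, understandable rule described above: only accounts you
    # follow, only posts that cleared a like threshold.
    return [p for p in posts if p.author in following and p.likes >= min_likes]

posts = [
    Post("local_reporter", "school board recap", 41),
    Post("friend", "lunch photo", 3),
    Post("stranger", "rage bait", 9000),
]
print(following_feed(posts, following={"local_reporter", "friend"}))
# -> only the followed post with 12+ likes survives
```

The point of an algorithm this simple is that the user can predict what it will do, which is exactly what an opaque recommendation feed gives up.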
0:28:26 Okay, so next loaded question.
0:28:33 What do you make of Facebook blocking searches on Threads about COVID?
0:28:37 And they’re saying that, oh, you can’t search for the word COVID because it’s going to lead
0:28:38 to disinformation.
0:28:41 Honestly, my head is exploding.
0:28:43 This is Facebook telling me this.
0:28:45 Yeah, I don’t think it’s good, obviously.
0:28:50 Generally, you do want people to be able to find the information they need on the platforms
0:28:51 that they’re on.
0:28:56 I think the current policy environment is such, and the current political environment
0:29:01 is such that there are some subjects that are just a headache to these platforms.
0:29:04 Yeah, I look at Facebook decisions like that.
0:29:10 And what I see is not someone that wants to be some sort of Orwellian 1984; I see
0:29:16 a company that keeps on getting called in front of Congress, half the time by Democrats
0:29:21 and half the time by Republicans, that is worried it has a lot of headache, that doesn’t actually
0:29:27 sell a lot of ads next to COVID information, and that just would like the
0:29:29 headache to go away.
0:29:34 But I don’t think it’s a great solution, because, you know, I mean, it would be a fine solution
0:29:38 if your site was like about knitting and a bunch of people were posting about COVID; you
0:29:42 might just say, look, no more COVID stuff on the knitting site, it’s a headache, I don’t
0:29:43 want to deal with it.
0:29:47 But if your site’s Facebook, that’s different. I don’t think it’s a great solution.
0:29:49 Up next on Remarkable People.
0:29:54 You ask an academic going and looking at a new area that is adjacent to theirs, like
0:29:57 they’re trying to flex into a new area and they’re trying to understand what are
0:30:02 some of the consensus opinions of this field, and they go to Wikipedia sometimes because you’re
0:30:07 going to get a really clear summary there of what that is.
0:30:12 Become a little more remarkable with each episode of Remarkable People.
0:30:18 It’s found on Apple Podcasts or wherever you listen to your favorite shows.
0:30:24 Welcome back to Remarkable People with Guy Kawasaki.
0:30:29 Since we brought up the subject of COVID, so let me ask you, Mike, let’s say that one
0:30:36 day you wake up and your ears are ringing, okay, and they’ve never been ringing before.
0:30:38 Now they’re ringing.
0:30:44 So you, Mike Caulfield, where do you go on the internet to investigate this ringing in
0:30:45 your ears?
0:30:49 I probably go to Dr. Google, like everyone else.
0:30:50 And?
0:30:51 Yeah.
0:30:55 The first search that you’ll do will tell you that just as you suspected, it proves you
0:30:56 have cancer.
0:31:00 And then you got to think about what you did wrong with that search.
0:31:02 So yeah, this is the thing.
0:31:06 You do a search, you get a set of search results back, and one of the things that we really
0:31:10 encourage people to do is look at that set of search results and think, not,
0:31:12 is this the answer I want?
0:31:16 That’s not what you want to engage in. But: is the set of sites coming back the sorts
0:31:21 of sites I want, and are they talking about the things that I actually expected they’d
0:31:22 be talking about?
0:31:23 Yeah.
0:31:27 I do joke that sometimes when you go to Google and search your symptoms, it always seems
0:31:31 like you’ve got some tragic illness at first, and then it turns out maybe you just have
0:31:32 swimmer’s ear.
0:31:36 But you put in your symptoms, sure, and you execute that search.
0:31:39 And then you look at that page, and one of the things you want to do, one of the things
0:31:43 that Google has now is these three little vertical dots on each result.
0:31:48 And if you’re trying to figure out, hey, who on this page might I want to get an answer
0:31:49 from?
0:31:52 You can click on those dots and you can find out, oh, look, this particular center is a
0:31:53 community hospital.
0:31:57 This particular site I have no information on, I don’t know who they are.
0:32:01 This particular site is a well-known site that sells supplements, right?
0:32:05 And so you can kind of browse and you can kind of figure out where you want to go.
0:32:09 Just what you want to do in that case is you do the search.
0:32:13 You find something that seems like it’s a good source of information.
0:32:15 You check on the vertical dots.
0:32:20 You go to that site and maybe you do a search on that site, if you’re on the Mayo Clinic or
0:32:21 something like that.
0:32:26 Maybe you do an internal search on the Mayo Clinic, at the point you find a site that you trust.
0:32:31 And also we talk about in this book, don’t give Google tells.
0:32:36 Don’t give it clues of what you want to hear, what you expect to hear, what you’re worried
0:32:37 about hearing.
0:32:39 Try to be very bland with it.
0:32:43 So yeah, again, if you type in ears ringing, is it cancer?
0:32:47 You’re going to get a lot of pages back that tell you yes, it’s cancer.
0:32:52 If you type ears ringing, common explanations, you’re going to maybe get something that might
0:32:56 be a better first stop.
0:33:00 How about, will turmeric cure the ringing in my ears?
0:33:01 Exactly.
0:33:06 You put those words together, like in general, you’re more likely to get something back that’s
0:33:08 going to say that.
0:33:11 If you wanted to do that, again, you can cue Google in these ways.
0:33:15 You’re just trying to add words, and Google has a synonym engine now too, so you don’t
0:33:17 have to be perfect with this.
0:33:22 You just try to put words that try to signal to Google the type of answer that you’re looking
0:33:23 for.
0:33:28 So if you wanted to put, would this spice cure cancer, and you really wanted to know,
0:33:33 you might put something like fact check after it, to say, look, I’m not looking for something
0:33:37 that says this, I’m looking for something that investigates this, right, that actually
0:33:38 checks this.
0:33:42 So you’re going to use various, we call them bare keywords, don’t get too fancy, there’s
0:33:47 a whole Google syntax, I would not bother to learn it because my experience with other
0:33:49 people has been, they forget it.
0:33:54 Invest your time thinking about, look, I have my query, what’s a word that’s going to signal
0:33:56 the sort of genre of thing I want back?
0:34:00 Is it the spice cures cancer, fact check, is that what you’re looking for?
0:34:03 Is it the spice cures cancer, something else?
0:34:05 Why do people think this sort of thing?
0:34:09 Come up with a keyword that kind of cues Google to give you the sort of answer.
0:34:16 I want you to explain this, you’re telling me that if I ask a question like that, and
0:34:22 I add the two words, fact check, I’m going to get a better response.
0:34:25 If you want a fact check, you’re going to likely get a fact check.
0:34:30 Oh man, this is worth the price of admission, I didn’t know that.
0:34:35 And it’s not anything special built into Google, it’s just two things. One, when
0:34:39 you put in fact check, Google has a synonym engine that it runs things through.
0:34:44 So it looks not only for fact check, it looks for things that might be synonyms
0:34:49 of fact check and so forth; it comes up with a series of terms that it expands and it sends
0:34:50 out there.
0:34:57 If a page has that, fact check, reality check, checking the truth, whatever it is,
0:35:00 it goes in and it puts the pages that have it toward the top.
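As a rough illustration of the behavior described here, this toy sketch expands a genre keyword into synonyms and ranks pages that contain any of them higher; the synonym list, pages, and scoring are invented for the example and are not how Google actually ranks.

```python
# Toy model of keyword expansion plus ranking; purely illustrative.
SYNONYMS = {
    "fact check": ["fact check", "fact-check", "reality check", "debunk"],
}

def rank_pages(pages, query_terms):
    expanded = []
    for term in query_terms:
        expanded.extend(SYNONYMS.get(term, [term]))  # expand known genre keywords
    def score(page):
        text = page.lower()
        return sum(1 for term in expanded if term in text)
    return sorted(pages, key=score, reverse=True)    # pages matching more terms rise

pages = [
    "This spice cures cancer, order today!",
    "Fact-check: no evidence the spice cures cancer",
    "Reality check on the spice-cures-cancer claim",
]
print(rank_pages(pages, ["spice cures cancer", "fact check"]))
```

The effect is the one Caulfield describes: adding a genre keyword pulls pages that investigate the claim toward the top, ahead of pages that merely repeat it.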
0:35:07 So what if I said Donald Trump won the election, fact check, what would happen?
0:35:11 You would get a fact check that would say probably, I think, I hope would say that Donald Trump
0:35:16 won the election in 2016 and did not in 2020.
0:35:17 Okay.
0:35:18 Yeah.
0:35:23 How much credence do you give Google News as opposed to Google?
0:35:25 I used to like Google News quite a bit better.
0:35:30 It was a little more integrated with the main product, which means that you could jump back
0:35:31 and forth.
0:35:33 Now, it’s an interesting thing right now.
0:35:41 I generally find that if you use keywords, these bare keywords, in the Google search,
0:35:47 like write the subject you want and, if you want a newspaper article, write like
0:35:50 newspaper article, it goes through and does this whole synonym thing.
0:35:55 And I find it gives you pretty good results.
0:36:00 If it doesn’t, then I do say, well, okay, if it’s not getting what you want, maybe go
0:36:02 to Google News.
0:36:08 But Google News right now is this hybrid of a sort of a news reading environment and a
0:36:11 search engine and a number of other things.
0:36:13 There’s some good features in it too.
0:36:18 I generally stop at Google First and then I go to Google News if it hasn’t worked out.
0:36:21 And what about Google Scholar?
0:36:22 Google Scholar can be really helpful.
0:36:28 There’s a lot of criticism of Google Scholar; some of the ways that it calculates citations
0:36:31 and everything aren’t as perfect as some of the older, more manual ways.
0:36:33 Certain things in there can be gamed.
0:36:38 There’s a recent paper out on gaming, gaming Google Scholar through a variety of means.
0:36:39 So it’s not perfect.
0:36:44 Again, part of it is in this sort of world of is this what I think it is?
0:36:49 If someone says they’re a published academic on some subject, go into Google Scholar and
0:36:52 say, hey, did this person write anything and did anybody cite it?
0:36:53 That’s pretty good.
0:36:57 You can also type in, if you’re interested in a journal, type in the name of a journal
0:37:03 into Google and type in the words impact factor, which is like a number that people use to
0:37:04 measure journal influence.
0:37:06 It’s not a precise number.
0:37:10 It just tells you for every article published, how many times is it cited?
0:37:15 You take all the articles in a journal over time and then you look at how many times that
0:37:16 journal was cited.
0:37:18 What’s that ratio?
0:37:22 But yeah, if it has no impact factor, I worry.
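To make the ratio concrete, here is a back-of-the-envelope calculation with made-up numbers; the actual Journal Impact Factor uses a specific two-year citation window, but the idea is the same.

```python
# Citations received per article published, over some window. Numbers invented.
articles_published = 120
citations_received = 540

impact_factor = citations_received / articles_published
print(f"Impact factor ~ {impact_factor:.1f} citations per article")  # 4.5
```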
0:37:29 And will this impact factor, will it show you that there’s now like scientific journal
0:37:35 farms where you pay to get published so that you’re cited and you can cite something?
0:37:43 And will Google Scholar tell you that this is a PO box in St. Petersburg that has published
0:37:47 2,000 articles about turmeric and tinnitus?
0:37:50 Yeah, Google Scholar won’t tell you that.
0:37:52 I know that they’re trying to.
0:38:00 As you probably know, every online information service is just a history of a war with some
0:38:02 form of spam.
0:38:04 And Google Scholar is exactly the same way.
0:38:08 There’s ways to spam that system and get stuff to look like it has more credibility than
0:38:09 it might.
0:38:13 But that does tend to be at the margins still.
0:38:19 And the other pieces, you don’t necessarily have to use one method.
0:38:25 One of the things that people have misconstrued about science and scientific articles is,
0:38:28 “Oh, you’re going to read one article and it’s going to give you the answer,” and that’s
0:38:29 what science is.
0:38:34 Like, “Oh, there was a scientific article that showed X,” and the news kind of follows
0:38:36 the same trend.
0:38:39 And that’s not really how things work.
0:38:43 What actually happens is this article seems to demonstrate something, this article seems to deny it, and another
0:38:48 article pulls together the articles that seem to deny it and the articles that demonstrate it into
0:38:52 something called a meta-analysis, and that’s the sort of progress of things.
0:39:01 People tend to get too caught up on the individual article that proves everything rather than
0:39:05 like, “When I survey this area, what do the bulk of people say?”
0:39:07 You don’t have to agree with the bulk of people.
0:39:11 I’m well-known for disagreeing with the bulk of people many times.
0:39:16 But you do got to know like what the sort of consensus is, you got to know before you
0:39:18 go against the consensus, you got to know what it is.
0:39:25 And so we do recommend something called zooming out, which is rather than jump immediately
0:39:31 into, “Oh, I found this article that shows XYZ,” step back and try to say, “Okay, what
0:39:32 do the bulk of people say?”
0:39:37 Like, “Can I find an article that summarizes what the research has set up to now?”
0:39:39 Sometimes that place is Wikipedia.
0:39:44 Sometimes, one of the things that we have found in talking to academics is that, as much as academics
0:39:48 suspect Wikipedia when they’re teaching their students, they think it’s a little fishy,
0:39:53 ask an academic going and looking at a new area that is adjacent to theirs, like they’re
0:39:56 trying to flex into a new area and they’re trying to understand, “What are some
0:39:59 of the consensus opinions of this field?”
0:40:03 They go to Wikipedia sometimes because you’re going to get a really clear summary there of
0:40:05 what that is.
0:40:07 Okay, last question for you.
0:40:11 So I got to tell you, I don’t know which way I should ask this because what I’m trying
0:40:13 to get at is this.
0:40:20 If you’re a parent, what would you tell your kids are the best practices for figuring out
0:40:21 the truth?
0:40:27 But I also could make the case that if you’re a kid, what do you tell your parent about how
0:40:29 to figure out what’s the truth?
0:40:31 So just give us a checklist.
0:40:33 These are the best practices.
0:40:35 All right.
0:40:40 So you want to know who produced your information, where it came from, and you want to know if
0:40:44 there’s a claim being made, if there’s some sort of assertion being made.
0:40:47 You want to know what other people that are what we call in the know, that have some sort
0:40:51 of more than usual knowledge about the thing, you want to know what they think of that claim.
0:40:52 And that’s just where you start.
0:40:55 Like you should know where your information came from.
0:40:58 You should know what other people have said about that issue.
0:41:03 And if you make sure that you’re doing those two things when you enter a new information
0:41:05 domain, you’re going to do better.
0:41:09 If you don’t do that, what happens is you get pulled down this sort of garden path of just
0:41:11 never ending evidence.
0:41:17 Some people criticize me when I talk about this as the rabbit hole, but with the rabbit hole,
0:41:22 it’s not even whether what’s down the rabbit hole is true or not, the sort of conspiracy rabbit hole.
0:41:26 It’s that if you find yourself being pulled from piece of evidence to piece of evidence
0:41:29 to piece of evidence to piece of evidence, and you’re never backing up and you’re saying,
0:41:30 hey, where did this come from?
0:41:32 Who produced it?
0:41:33 Number one.
0:41:35 And then two, what’s the bigger picture here?
0:41:36 What do people in the know say about this?
0:41:37 What should I know?
0:41:43 You end up endlessly doing what I call evidence foraging, but you’re not getting any benefit
0:41:44 from it.
0:41:45 It’s almost compulsive.
0:41:47 So what you want to do is you want to think about those two things.
0:41:52 You want to select the stuff you’re consuming a little more carefully and intentionally.
0:41:57 In the second to last chapter in the book, we talk about some of that as critical ignoring.
0:42:02 Rather than just drinking from the fire hose of the internet, figure out what
0:42:09 you want, figure out what you would think would constitute good evidence, good sources,
0:42:12 invest the effort to figure out if that’s what you’re doing, or if you’re just sort
0:42:16 of running from quick hit to quick hit on TikTok.
0:42:25 And just for greater practicality and usefulness, when you say go and find out what people in
0:42:32 the know are saying or what’s the sort of common understanding of something, where do
0:42:33 you get that?
0:42:34 Yeah.
0:42:35 Yeah.
0:42:36 You probably start.
0:42:41 This is not the end of it, but you probably start with typing in something like history
0:42:44 of Israel Gaza summary.
0:42:49 And then looking at that page and saying, okay, according to my own standards, which one of
0:42:50 these would be the best?
0:42:53 And one of the things I want to stress is you might want something that’s very dry.
0:42:57 You might want something that has a little bit of an activist lean, but knowing what
0:42:59 you’re looking at, choosing that.
0:43:03 And so what’s happening there is, instead of staying in this sort of TikTok loop or this
0:43:08 little Twitter doomscroll, you’re taking your agency back.
0:43:11 You’re stopping and you’re saying, okay, where did this come from?
0:43:16 If it’s not the sort of thing I want when I search Google or go somewhere else, then
0:43:17 what do I want?
0:43:20 And then usually you’re using some form of search to get there.
0:43:25 But it’s about taking your agency back and asking those important questions.
0:43:30 And if you don’t get good answers to them, deciding to go somewhere else.
0:43:31 Okay.
0:43:37 Listen, I think your book and the work you’re doing literally could help save the world.
0:43:41 And man, I hope every school teaches a course like this.
0:43:42 I mean, wow.
0:43:44 Yeah, we do too.
0:43:47 So that’s Mike Caulfield.
0:43:54 I hope you learned a few things about sifting through what we hear and see and read online
0:43:58 to help us determine what to believe and what not to believe.
0:44:04 Don’t forget to check out his book called Verified, co-authored with Sam Wineburg, another
0:44:06 Remarkable People guest.
0:44:11 So, off you go, seeking the truth. I’m Guy Kawasaki.
0:44:17 This is Remarkable People, my thanks to the rest of the Remarkable People team.
0:44:24 That would be Shannon Hernandez and Jeff Sieh, sound designers. Sieh, not SIFT, Sieh.
0:44:29 And then there’s Madisun Nuismer, producer and co-author of Think Remarkable.
0:44:32 That’s the book you all should read.
0:44:39 Tessa Nuismer is our researcher, and then there are Luis Magaña, Alexis Nishimura,
0:44:40 and Fallon Yates.
0:44:48 We are the Remarkable People team and we are on a mission to make you remarkable.
0:44:52 Until next time, mahalo and aloha.
0:44:57 This is Remarkable People.

In this compelling episode of Remarkable People, Guy Kawasaki sits down with Mike Caulfield, a renowned research scientist from the University of Washington’s Center for an Informed Public. Caulfield introduces his groundbreaking SIFT methodology, a crucial tool in the fight against online misinformation that empowers educators and learners to critically assess online content. Discover how SIFT – which stands for Stop, Investigate the source, Find trusted coverage, and Trace back to the original – can help you navigate the complex world of digital information. Caulfield also discusses his book Verified: How to Think Straight, Get Duped Less, and Make Better Decisions about What to Believe Online, co-authored with fellow Remarkable People guest Sam Wineburg. Join us as we explore the importance of digital literacy and learn practical strategies to determine what to believe in an era of information overload.

Guy Kawasaki is on a mission to make you remarkable. His Remarkable People podcast features interviews with remarkable people such as Jane Goodall, Marc Benioff, Woz, Kristi Yamaguchi, and Bob Cialdini. Every episode will make you more remarkable. 

With his decades of experience in Silicon Valley as a Venture Capitalist and advisor to the top entrepreneurs in the world, Guy’s questions come from a place of curiosity and passion for technology, start-ups, entrepreneurship, and marketing. If you love society and culture, documentaries, and business podcasts, take a second to follow Remarkable People. 

Listeners of the Remarkable People podcast will learn from some of the most successful people in the world with practical tips and inspiring stories that will help you be more remarkable. 

Episodes of Remarkable People organized by topic: https://bit.ly/rptopology 

Listen to Remarkable People here: https://podcasts.apple.com/us/podcast/guy-kawasakis-remarkable-people/id1483081827 

Like this show? Please leave us a review — even one sentence helps! Consider including your Twitter handle so we can thank you personally! 

Thank you for your support; it helps the show!

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
