PAT: Today on Flip the Switch. Google is pledging 300 million dollars to help combat fake news. And Walmart is looking to turn a very important insect into a tiny robot army.
It’s time for Zuckerberg to face the music as Facebook catches a ton of heat for allowing access to the private data of millions of users.
Our main discussion focuses on the current state of technology as the guys wonder where we are headed next after a tough month for some of tech’s biggest giants. Let’s get into it.
00:52 AUSTIN: Welcome to Flip the Switch presented by Power Digital Marketing. We’ve got a very fiery episode for you today. And what number is this, Patrick?
00:59 PAT: Episode 29. Our John Smoltz episode. So there’s the next one. We’re just going to keep going with that, because we’re already too bought in to not do it anymore.
01:06 AUSTIN: Absolutely. I love to see what you’re going to come up with next. I hope you all are catching on that we’re doing sports players, even though this isn’t a sports podcast. But that’s beside the point, Pat.
We’ve got a lot of good business news and trends to talk about. Plus a lot of stuff coming out about Facebook. We’re going to get into all of that and much, much more.
01:23 PAT: Exactly.
So I think a great place to start off is with our business news and trends. Per usual.
And the first article that we’re taking a look at today was actually published on BBC news. And the headline is “Google pledges 300 million dollars to support journalism and fight fake news.”
So very timely topic, especially as we’re going to get into some of the issues that have plagued some of the biggest tech giants that we’ve seen. A lot of them have been subject to fake or erroneous news being published consistently.
01:50 AUSTIN: And very timely, as we’re talking about. So as you all might know, since you’re paying attention to the news, this is the hottest topic, right? Of any company that’s out there right now that has any type of data or information that they’re giving to other people.
So Google, being the most information-friendly company in the world, with such a large audience of searchers, needs to come up with an idea on how to make that information valid, or real, or prove to people that it is factual and informationally driven. That it’s not a lie.
02:22 PAT: Yeah. And if you think about it that makes total sense. The entire premise that Google was founded on was free, accurate information to anybody at their fingertips. And when that gets called into question, when you start seeing that even Google was criticized for promoting fake articles.
Like in 2017 there was one about a very sensitive topic. It was actually the Las Vegas shooting. And there were a lot of articles that were published about it saying that the shooter was a Democrat who opposed Donald Trump. Which is completely untrue and unfounded.
02:54 AUSTIN: Yeah. And it turns out that Google was more just on the side of free information. And also on the side of the most relevant or hottest-topic information. So they weren’t necessarily going the factual route. Their algorithm was prioritizing articles, news that was getting the most searches.
Some people actually figured this out, and one of them… I believe it was the website 4chan that had a group doing what you’re talking about. Where they understood that the algorithm was going to prioritize whatever was receiving the most hits.
And then they would pump that page, that article, with a ton of hits, a ton of sessions. So that then the algorithm would prioritize it. So that really feeds into how the Google algorithm was not operating from a factually based informational standpoint. And more so just on the priority of a hot topic.
03:41 PAT: Yeah, exactly. And I think the issue has become for a lot of these bigger tech companies… Facebook as well, is another one that was well documented in promoting fake news. And exploiting user data in their effort to do so.
Which that’s a whole ‘nother thing that we’re going to talk about. But with Google, specifically, the question was always how are they going to fix that? You can change the algorithm but people always learn ways around the algorithm.
They learn through trial and error how it operates and they find loopholes to get to whatever end result they want. Unless big changes are made… And what we saw with Facebook, at least, was that the suppression of fake news, or even the publishing of trending news, was perpetuated by individuals. Each with their own biases, their own identity and their own story that they’re trying to tell.
What Google has done is actually a little bit more impressive to me. It launched a project called “MediaWise.” Which is a partnership with Stanford University. Ever heard of it?
04:33 AUSTIN: Great school.
04:34 PAT: Yeah. I wouldn’t know. They actually help young readers distinguish fact from fiction online. That’s their whole mission.
So Google is partnering with them more as a PR play. I don’t think this is anything that’s a huge issue for them right now. The algorithm kind of takes care of repressing those fake or erroneous bits of news.
But now they have a means of fact-checking it all. And they’re also trying to educate the younger populace on how to distinguish real from not real. Fact from fiction.
05:03 AUSTIN: Yeah. And Google is very protective over their algorithm. So don’t think just because they’re partnering with a company to work on factual news–I think this is the whole point of it–that that means they’re going to have inside information on the algorithm. On how the algorithm works.
No. Google is trying to… they’re attempting to take the initiative on a subject that all their searchers are upset about. And that is fake news. So it’s a very good PR play, as Patrick is saying, to team up with a school that’s very highly regarded and innovative.
And then, of course, they can work with them to educate a population that will be producing this news. Journalism is a very important degree there. And of course Stanford is producing the next great minds in journalism.
So they’re partnering with Stanford to say, “Hey, we’re creating a new wave of individuals who can be factually, informationally correct. Whom we can verify. Who will show up on Google as the ‘correct’ way to say something.”
06:01 JOE: So if they’re going to be producing their own news and working through the journalism of it, will they have the power to push other people out of the way to make sure theirs is first?
06:11 PAT: That has always been the question. And that has always been something that I think these bigger tech companies have struggled with, right? Because at a certain point, it is a PR play. You do want to be represented as the thought leader in the space as well. It does go hand-in-hand to an extent.
My gut says no. Because there’s been scrutiny on a multitude of other platforms where this has happened. And anything where the user feels that their access to information is being manipulated, repressed in any way. Altered in any way aside from what they believe to be true. Or what they’re actually seeking.
That’s going to cause a bad experience. Google is taking active measures every single day to enhance the user experience. So I don’t see them doing anything like that.
06:56 AUSTIN: Yeah. It looks like kind of the direction they want to head. Cause they know that that’s very murky waters. If they’re going to start prioritizing their own news, they can really get into censorship issues, which is by law not allowed. So that would be a very tricky road to go down.
But it looks like they might do something more on the subscription side, is what we’re understanding. They’re going to be launching a new initiative called “Subscribe.” Which will be a paid subscription with these publishers that gives you access to this type of information. So it’s a gated circulation of information that they’re saying will be factually correct. With a bias… we don’t know how that’ll come into play. We would hope and expect this to be unbiased, but it’s your choice.
07:37 PAT: It’s so, so smart because the first thing that they’re doing is validating with the populace, “Hey, we’re doing everything we can to give you correct news.” And then on top of that, later on, maybe you’ll subscribe to…
You know what I mean? It’s kind of like that lead gen play in a way. But I think that the methodology behind it is correct. And in order to stand out in today’s day and age, coming out as a company that is taking proactive steps to repress this type of thing before there’s a massive scandal, like there has been with Facebook, is only going to bode well for them in the long-term. So great move by Google there.
08:10 AUSTIN: I think the verdict on this one is that it’s much more of a PR play than a transparent, for-the-better-of-humanity play. So for Google, we understand what they’re up to here. This will be a revenue-driving scheme down the line. So expect to see this roll out, I think, in the next year or two. We’ll keep a pulse on this to see if it actually is unbiased and factual.
08:28 PAT: Put that one in our pocket for a little bit later on…
08:29 AUSTIN: Absolutely.
All right. Moving into our next one. We got something a little freaky. That is for sure. We’ve got: why Walmart is patenting robot bees, am I right?
08:40 PAT: Yeah.
08:41 JOE: This is an episode of “Black Mirror,” right?
08:42 PAT: This is literally an episode of “Black Mirror.”
08:44 AUSTIN: I feel like we’ve been saying that way more and more as the episodes have been progressing.
So we have another Black Mirror interpretation, as Walmart will be creating drone bees. Joe, can you elaborate a little on this? I know you brought it up…
08:56 JOE: Yeah, it looks like they filed a patent to create this army of robotic bees for cross-pollination purposes, since there is a huge problem with bees these days. And them going extinct. And how that’s going to affect the food supply and everything.
09:14 AUSTIN: I believe there’s a syndrome, colony collapse disorder, where their hives have been collapsing. And that’s been the biggest issue in the last 5 or 10 years: a disease going around where hives were collapsing. Of course, that was a very big issue, because bees pollinate all of our plants. So this has actually been a larger issue that has come to light in society in general as time has progressed.
But it sounds like Walmart has come up with their own plan on how to solve this, and this is just these flying drones.
But as one of the largest companies in the world–they could be their own country with how much revenue they generate–I don’t know if I necessarily like the idea of them having their own drones.
09:46 JOE: A lot of this article talks about how they follow suit with a lot of things that Amazon’s doing and how they’re trying to expand outside of just their typical thing of being a store that sells low-price goods. And so they’re trying to get more into the tech area.
10:01 PAT: So that totally makes sense to me. But if you look at it in black and white too, it’s very much a business decision that’s designed in the long-term to help with their margins and their projectability. If you think about it–and actually a member of CB Insights who’s quoted in this article brings this point up–he talks about Walmart taking more control over how its produce is grown. Because let’s not forget, Walmart is a huge grocery shopping destination too because of those low prices–which again is trying to compete with Amazon a little bit there. Always making sure they have enough of a supply.
But Walmart could potentially be saving on costs because they’re vertically integrating the food supply chain. They’re managing crop yields way more effectively. They’re increasing their emphasis on transparency and sustainability to attract shoppers.
So kind of tying in with that Google conversation–little bit of a PR move. You wanna show, “Hey, there’s this issue in the world that we can fix with our technology. Aren’t we awesome? You should patronize us with your business.”
At the end of the day, though, this is still really freaky stuff. You know what I mean? You can chalk it up however you want. It might make business sense, it might make environmental sense. But at the end of the day, there’s going to be tiny robo-bees flying around pollinating plants all over my neighborhood in the next few years.
11:17 AUSTIN: I think the idea of the robot bee impacting agriculture is kind of what to take away from this. Because I think you nailed it, Patrick, when you said the control of the yield of the crop. I know that that’s been one of the largest issues with agriculture. One is the amount of water we use.
So irrigation has a very large impact on our water supply as a society. And then of course, it’s become more difficult to grow crops with the climate change aspect of it. So if you have a way to take that natural element out and replace it with artificial intelligence, I think that will be used much more broadly. And then of course, we can see what the government, who subsidizes agriculture, does. This might become some sort of program where they’re working with whoever’s working with Walmart to design these types of products and then roll them out.
So we could see a vertical integration across the board. With what Walmart is actually doing.
And this could be impacting agriculture. And then funded by the government. Maybe that is a little bit far down the line, but it definitely looks like this is something that they’re interested in.
12:19 PAT: I think that this… it’s like a stability play. Because let’s say they’re able to pull this off correctly… and again, the jury’s still out on that. I don’t necessarily have the belief that Walmart is going to have the capabilities or infrastructure to deploy an army of small drone bees.
But let’s say that happens exactly how they say it’s going to.
12:40 AUSTIN: Let’s say it does.
12:41 PAT: Which, asterisk next to that one. And I’m going to go on record as saying that that’s not going to work. So when we get to cover that story…
But I think the issue too is more that they’re trying to have more financial stability. If you can control the yield and forecast the exact amount of crops that are going to be yielded in a certain year, you have financial forecastability. Which is a luxury a lot of retailers do not have nowadays.
Retail locations are going down left and right. We saw even with Toys R Us recently closing all of their stores. It’s a rocky time to be a retailer and they’re seeing a giant like Amazon consistently innovating. Going outside their space. Kind of like Joe touched on.
It’s a way to a) try to prove that thought leadership. Show that they’re innovating. Find new ways to attract customers, I guess. But at the end of the day, for me, I think it’s a little bit more of a financial stability play. Because they can forecast these types of costs a lot more accurately than they probably could before. Not just because the retail climate is rocky, but also for the agricultural reasons that we talked about. Irrigation and climate change.
13:47 AUSTIN: And it seems for Walmart that expenses have risen over the past year as we’ve seen them have to cut down on Sam’s Club. Which they do own. Which is the chain of stores similar to Costco that they had to start closing…
13:57 PAT: Never been inside a Sam’s Club…
13:59 AUSTIN: Yeah. That’s more of a Midwestern… that’s out there… Costco owns the west coast so we don’t see a lot of those…
14:04 PAT: I have no concept of what happens past about Arizona…
14:06 AUSTIN: Yeah. Absolutely. We would never leave this beautiful state. And so then they raised the minimum wage for their employees at Walmart. So expenses are rising. They’re going, “Hey, what can we do to generate new revenue?” Maybe this is potentially that, in that 5 to 10 year span, if this becomes more mainstream.
So we will see. It’s a great concept. It’s an interesting idea. It’s also scary. We will follow it and let you all know if we see any more drone bees out there.
14:30 PAT: Exactly.
All right, moving into the last piece of news here. And this is a big one for us. When we started putting together this episode, the first thing that we were going to talk about was this kind of exposé article that came out in the Guardian. Talking about Cambridge Analytica and how they were able to successfully create political profiles and serve Facebook users ads swaying them to vote for Donald Trump in the 2016 election.
So that was going to be the story. We were going to talk about the data breach, we were going to talk about consumer safety. A lot of the conversations that we’ve had before, maybe never to this degree.
Since we even put the outline together, we have seen two new developments come out about this. So the first one was that Mark Zuckerberg and several other key members of Facebook’s leadership team had yet to comment on anything that had happened.
Since that article came out and since that knowledge became more widespread.
Right off the bat, terrible, terrible move. Terrible move. You have to immediately come out and either denounce everything that happened. Or have a damn good reason why it did.
Either these aren’t true or these hackers were good. Something to that effect.
And we even consulted with our PR department about that here to see what should have been done. They said, “Yeah, definitely.”
And then even as we were getting ready to start recording this episode, Mark Zuckerberg had been called in front of the Senate to testify about some of these… not allegations, but what seem to be facts.
And Zuckerberg has since come out with guidelines talking about how he’s going to better protect user data moving forward. The three things that they said they’re going to do are: 1) investigate all large apps that were allowed to get data, not just on their own users, but on those of users’ friends. Before Facebook changed its policies.
The second one is to remove developer access to data if someone hasn’t used that app in 3 months and reduce the type of information the app gets when users sign-in…
16:31 AUSTIN: So let’s back up right here and think about this from a really high level and why we’re talking about this. And why it’s so important.
50 million Facebook users have had their data siphoned by Cambridge Analytica. This company that is not an American company…
16:45 PAT: A UK based company that was consulting the Donald Trump campaign in 2016.
16:48 AUSTIN: That is ground-breaking. That is a huge, huge, huge issue.
16:54 PAT: It’s a massive violation of privacy.
16:55 AUSTIN: It’s a massive violation of privacy. It’s a massive violation of the law. And the most important thing to this country is our constitution and our constitutional rights. It’s what makes us a great democracy.
So Facebook has been a very important piece in possibly getting someone elected. But the important thing for us to talk about, as a technology-based podcast, is what does this mean for the tech giant, right?
We want to get behind the business side of this. And the technological side of this. And what we’re looking at with Facebook and all that is their social network.
17:25 PAT: Yeah. I totally agree with that point. And just really quickly, going back to the last… cause this kind of ties everything together.
Moving into the last piece that Zuckerberg says. He says, “We’re going to work to make sure that people understand who has access to their data. Show everybody a tool at the top of the news feed in the next month which makes it easy to revoke permissions.”
So that’s going to be huge. My take on this is that it’s way too little, way too late. Our… we don’t like getting political on this podcast, but like we’ve said before, oftentimes politics and technology nowadays intertwine in a way that we can’t separate the two.
18:01 AUSTIN: Yeah, and it wouldn’t matter regardless of the party. This is just a very, very bad thing to happen. Regardless of whatever political party is behind it, the fact that data is being stolen… private data is being stolen off a social network. Allowed to be stolen by the social network, by a company outside of the United States.
This makes it one of the biggest issues of privacy and protection of American rights.
18:25 PAT: This is really… I remember reading an article about this and the Equifax data breach notwithstanding, this was one of the biggest violations of personal privacy in the history of the internet.
18:37 AUSTIN: Yeah. And at least that one came out as a hack, right?
18:40 PAT: Yeah, that one was a hack. They went through the firewall. And Equifax had deniability. They’re like, “Yeah, they got through. We have to make it better.”
People can live with that to an extent because at least they learned from it and they fixed it. It’s a little bit of a mistake. Still not happy that my social security number is floating out there in limbo.
But with Facebook, the way that this data was pulled… it was pulled in a way that the platform not only allowed for but almost encouraged. Because a big way that Facebook made its money–up until regulations started getting a little bit tighter around advertising and personal data usage–was that data was pretty easily accessible. If you’re a Facebook developer, you can easily go into a back end and see who is interacting with what pages, and what that means as far as their likelihood to take a certain action.
The impressive part about Cambridge Analytica to an extent is that they were able to profile these people politically based on their social interactions. Based solely on data. They were able to quantify their likelihood for voting for a certain candidate based solely on data, which they then used to deploy ads to convince people to vote for Donald Trump.
So bouncing back from this, my question for Mark Zuckerberg is 1) Do you really think that making sure that people understand who has access to their data is going to fix a breach? How is that really going to help things moving forward? Are these just soothing words that you’re trying to get everybody to calm down a little bit? Because we’re going to see how Wall Street responds to this pretty soon here, but it’s not going to be pretty.
20:16 AUSTIN: It hasn’t been good so far. They were down 10% after hours when the news dropped earlier in the week. We’ll have to check, and Pat’s going to pull it up right now to see where we’re at.
But what Pat is saying is the business side of this is detrimental. So we could be seeing a very massive drop in their share price, because their Daily Active Users number will drop and has already dropped. We’ve talked about this in the past. The big thing for these social network companies is their Daily Active Users. And when something very wrong happens–we’ve seen this with Snapchat–that drops substantially.
Which means your valuation as a company also drops with it. You no longer have the value that you’re providing to your shareholders to make this a good investment, and a long-term investment.
So Facebook is going to have to deal with that.
20:57 PAT: Yeah, so let’s take a look at the numbers a little bit. On March 19th Facebook was trading at $185 a share. It dropped down to $169 a share, which is approaching a $20-per-share dip. Today it’s up 1% so far. After hours it’s up .34%, which is surprising. I think that people are just buying because it’s low.
And the scary part about that if I’m a Facebook investor is that the FTC hasn’t even levied its sanctions yet. So this was basically… the FTC, they’re charged with consumer protection. That’s their biggest and only priority.
So if the FTC finds that Facebook violated the terms of the consent decree, it has the power to fine that company more than $40,000 a day. So we’re going to see that price probably start to slide a little bit more. But this also opens up that bigger conversation that we want to have today around user privacy and whether or not the way data’s moving is good or bad.
Awesome. Moving into our main segment here, we want to continue the conversation that we sparked a little bit with that business news. So we talked about how Facebook is going to be facing some pretty significant consequences as a result of this massive data breach that basically defrauded our democracy–to put it lightly.
But that really brought about a topic that we want to talk about today. Which is the current state of technology, the direction that it’s moving, the access that technology has to your personal data and your preferences, and its ability to profile you.
Is that a good or a bad thing?
22:31 AUSTIN: Both. We’re going to get into it, but both I think.
22:36 PAT: Main topic over? It’s both?
All right, we’ll see you next week.
22:38 AUSTIN: Thank you very much. Have a great day.
No. I think what we’re trying to say here is that the intentions were good. I don’t think Facebook came into this going, “Hey, we’re going to steal everyone’s data, and give it to a foreign company and they’re going to influence our politics.”
That was definitely not the case. What they originally did this for is… have you heard of the game called “Farmville?” I’m sure you have, Pat. You look like a guy who plays a lot of Farmville.
22:58 PAT: I sure have. Big Farmville guy, yeah.
23:01 AUSTIN: One of the developers that had access to this data built that game. So it was a great game based on personalized data.
So it was a huge hit, right? It went viral. You probably got a ton of invites on it…
23:19 PAT: From aunts who live out in the Midwest that I’ve never seen or talked to.
23:22 AUSTIN: So that’s a good outcome of private data usage. Because they built something, Facebook now has a great product. It’s keeping people on the website longer. The Daily Active User count is going up.
So hey, the privacy data thing, that worked out pretty well.
But unfortunately when you have very lax rules about who has access to data, the rest of the world gets involved. And it turns out the rest of the world doesn’t have as good intentions as the guys who built Farmville.
23:46 PAT: Right. That’s true. So I understand where you’re coming from on that. So I agree. I think that the intention behind the way that Facebook set everything up so that developers do have access to some data in order to be able to create better, curated, personalized content and apps. That’s all well and good.
But let’s just take a look at the way that technology is moving in general. We can take Facebook’s issues lately as a microcosm of what I believe to be a much bigger issue.
So let’s take… let’s pivot a little bit. Let’s go to self-driving cars. There’s going to be a benefit and there’s going to be a cost to that. So with the Facebook example that you outlined, the best-case scenario is that somebody creates a nice little app that you can waste a few hours on growing fake crops with non-robotic bees for the time being.
The worst case scenario is that we defraud an entire population of US voters.
Ok. Let’s go to self-driving cars. Best case scenario in my mind?
24:46 AUSTIN: I feel like the positive wasn’t as extreme as the negative. We gotta find a better one.
24:50 PAT: Okay. A better positive?
24:51 AUSTIN: What if personalized data found the cure for a disease?
24:56 PAT: Personalized data has helped people engage and find brands, businesses, causes and other people that they care about in life.
25:04 AUSTIN: I get your point. I was just yanking your chain.
25:07 PAT: Stop.
Going into the self-driving cars. The best-case scenario is that it works perfectly fine and it gets you to where you need to go, point A to point B with no injuries, no mishaps, nothing.
25:21 AUSTIN: And maybe you took a nap.
25:22 PAT: Maybe you take a nap. Maybe you pull a little bit of a… what is it? The Incredibles? Where he falls asleep and changes in the car? In a self-driving car?
News came out last week… was it last week or the beginning of this week actually, that one of the self-driving cars actually hit and killed a pedestrian.
25:40 AUSTIN: And it belonged to Uber.
25:45 PAT: Perfect. Even more good PR for Uber.
25:48 AUSTIN: So a company that has absolutely been ruined by PR mishaps in the past year. Has yet another one.
And this is massive, because people have been very skeptical. I think that the first thought you have about a self-driving car is not necessarily the good side of it. It’s more so the scary side of it that I no longer have control of something that I’ve always had control over.
26:06 PAT: Yes. And I think that the unsettling part for people is that when you hear about a self-driving car, you always know that that’s an outside possibility. But it becomes a much more immediate fear when it’s been documented that the worst-case scenario has become a realization.
Hey, it’s the John Smoltz episode, let’s think about it this way. If you’re a baseball player and you’re a hitter, you know for a fact there is always the possibility that you could get hit by 100 mile an hour fastball in the ribcage. Most of the time that doesn’t happen. And you’re better for it, you’re fine.
When that does happen, that is the worst-case scenario. You can’t ever get over that mental block. It’s like getting the “Yips.”
And I think that what we’re going to start seeing with some of these… especially the AI heavy direction that technology has been moving… as some of those worst-case scenarios become realized for people, I think the usership is going to go down to an extent. Or level of interest is going to go down. Because the risk that people had assumed could be associated with it, now fully is associated with it because it’s been realized.
27:07 AUSTIN: It seems like the technology right now is still at a point where it shouldn’t be interacting with humans. Because it can’t be aware of all the elements. It has the ability, it seems, to notice cars in front of it. When to slow down, when to stop.
But it doesn’t have the ability to understand when a guy drops his cellphone when he’s crossing the street and has to turn around to go get it. And now he’s back in the line of sight of the car. And the car’s already moving.
So it doesn’t have that instinct that you or I might have when we’re driving our car. So the technology isn’t at a point where it should be interacting with us. Day-to-day life. And that’s what’s scary is the trial and error of artificial intelligence. That we’re living in that age right now.
We’re seeing this with private data usage, and now we’re talking about self-driving cars. And that side of it is extremely detrimental to society because we’re attempting to advance through machines, but these machines are not capable of living up to the way that we’ve always lived.
28:05 PAT: I think they’re not ready yet.
28:06 AUSTIN: Yeah, they’re just not ready.
28:07 PAT: A lot of this has been deployed to the general public…
28:09 AUSTIN: And it almost seems like society wants it to a degree, and that’s why we’re allowing it access to our society.
28:16 PAT: Oh, if there was a self-driving car that could chaperone me anywhere, I’d be stoked.
28:19 AUSTIN: And I think that’s how it’s hitting the streets of San Francisco. The city, of course, has to allow that. Because otherwise it wouldn’t be happening. Uber has to get the okay from someone. You can’t just drive around a car with nobody in it.
So there is that desire for us to take the next step as a society. With technology. And things like self-driving cars are really an indication of that. So we’re really okay with it. And I think it’s slightly ignorant to think that as a society we’re there yet. Because if you think about it, let’s think about technology and how quickly it’s grown.
I think we were talking about this earlier. And that 40 years ago there was Pong. And now we’re sitting here and we have fully autonomous vehicles and cloud computing and blockchain technology and all these things we talked about.
That is massive. And then the Industrial Revolution only occurred in the beginning of the 20th century… late 19th, early 20th century. So a hundred years ago we had nothing and now today we’re trying to have technology that is smarter than us. That can act without us.
Don’t you think that’s a little bit too much too fast?
29:24 PAT: Oh, totally. I think that we’re getting well ahead of ourselves to an extent. That’s not to devalue any of the great and innovative work that these companies have done. Massive disclaimer, I’m a proponent of technological innovation.
29:34 AUSTIN: We work in tech.
29:35 PAT: We work in tech. I don’t want to sound like an old geezer that’s like, “Technology’s bad. I’d rather do it the old-fashioned way.”
But when there are real-life consequences that are proving to be extremely critical in people’s lives, I think that that raises the question, how widespread…? Why does the testing for these kind of things need to take place in the real world?
29:55 AUSTIN: And I think the A/B test needs to change. It doesn’t need to be real-world A/B testing. Think about it like this… when we do a CRO analysis through VWO, we do an A/B split test. We do not implement the tests.
They are just simply a mirage, if you will, that only a controlled environment has access to.
So we know who’s going to see it. We know how many people are going to see it. And then from there, we can make a decision.
30:21 PAT: So that we know who… how they’re going to respond to it? Just like we would in real-life.
30:25 AUSTIN: Right. And that doesn’t work perfectly with a self-driving car, but my point is that that’s a controlled environment. That’s not forever.
And when you bring a self-driving car into downtown San Francisco… I think that’s where it was. And then it crashes into a real human. That is forever. That’s a real thing. That was not an A/B test. That was not a split test. That was pure implementation that happened. That occurred. And now it’s forever.
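The controlled split test Austin describes, where you know exactly who sees each variant and how many saw it before you decide anything, could be sketched like this. This is a minimal illustrative sketch, not VWO’s actual API; the function name `assign_bucket` and the hashing scheme are assumptions for the example.

```python
import hashlib

def assign_bucket(user_id: str, experiment: str, split: float = 0.5) -> str:
    """Deterministically place a user in variant 'A' or 'B', so the
    same user always sees the same variant for a given experiment."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    # Map the first 8 hex digits of the hash to a number in [0, 1]
    # and compare against the split point.
    fraction = int(digest[:8], 16) / 0xFFFFFFFF
    return "A" if fraction <= split else "B"

# Controlled environment: we know exactly who is in each group and
# how many people were exposed to each variant before deciding.
users = [f"user{i}" for i in range(1000)]
counts = {"A": 0, "B": 0}
for u in users:
    counts[assign_bucket(u, "homepage-cta")] += 1
```

Because assignment is deterministic, the test can be inspected, paused, or rolled back at any time, which is Austin’s point: unlike a car on a public street, nothing in the experiment is “forever.”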
30:48 PAT: Yeah. And I think for me… I think that a) we are getting ahead of ourselves in terms of how ready we think we are to integrate some of these into our day-to-day.
And b) I think we are overestimating by a large amount how much the net benefit can be. Because think about the person that’s riding passenger in that car now. Self-driving car hits somebody, kills them, that’s worst-case scenario. What was the best-case scenario?
You get to the exact same place that you could have easily driven just like you had since you were 15 and a half, or 19 if you were Austin.
31:19 AUSTIN: A little bit older.
31:22 PAT: But yeah, and I think that that’s kind of the issue. You take that as a microcosm of the bigger issue, just like we’ve talked about, right? Moving into AI and VR. The best case scenario? It’s great for interactivity in different environments. You can get a lot of education around different regions of the world. You can explore, you can be an adventurer…
31:46 AUSTIN: Experience something that you would never be able to experience.
31:48 PAT: Exactly. Do you think that that is causing people…? Do you think that in the future it has the potential to detach people from reality? What’s happening right now?
31:58 AUSTIN: Absolutely. I mean, if anything that’s happened so far in the 21st century has shown us that we’re detaching from reality as a society it’s that… Yes. The answer is yes. Cellphones have been our leading indicator that that is happening. And that’s not even the full immersion. Which VR is.
So yes. It is worrisome. Because it’s amazing to experience something that you never experienced. Like, I know that they’re doing for March Madness, you can strap on VR and then you’re there. You’re at the game. You’re probably watching the 16 seed beat the 1 seed.
32:28 PAT: I’d watch my bracket get busted…
32:31 AUSTIN: But what does that do to the mind? What does that do to us as humans? Cellphones being our only use case, we see that you tend to get more sucked into that environment that maybe you can control a little bit more. Or that seems to be a little bit more protective.
So we start reaching toward a point where we want to be protected. And I think with cellphones you can, because you cut out the point where something happens in real time. So you can have a little bit more control over your response. A little bit more control over the interaction, because the other person isn’t there with you.
So I think with VR that’s the direction we’re headed as well.
33:04 PAT: And that’s the appeal that people have. And we’re already… I did some research into this because this is actually a topic that I’m really passionate about. I was taking a look at technological uses and some of the… because a lot of the technology that we use nowadays hasn’t been around long enough for there to be any studies around long-term effects. But one thing that we do know for sure is that using your cellphone for social media purposes actually releases dopamine.
It’s the same feeling as when you are happy. That’s like the chemical in the brain that gets released when you’re feeling the emotion happiness. You get that same chemical release when you go on social media and look at your phone.
And the reason for that is exactly what you were talking about. It’s a controlled environment. If there’s something negative there, you can say “See less posts like this.” And you never have to interact with that content ever again.
And the fact that we have a propensity to be attracted to that translates directly to the fact that we will have a propensity to overuse a piece of technology like VR to the point of detriment. And that’s the worrisome part. Because now you’re messing around with your actual biological chemical balance. You’re talking about like… there are famous entrepreneurs who say you shouldn’t check your phone in the morning or go on social media. Why?
Because you need to preserve that precious amount of dopamine that you have in your head for the day. And they’ve done studies on themselves. They’ve identified it as something that can be worked on society-wide. People like that gratification. They like seeing how popular they are.
Don’t you go on your Instagram in the morning to first off, yeah, see some pictures, but don’t you care more about seeing whether or not anybody tagged you in anything?
34:40 AUSTIN: Yeah. Sent you a message. Get tagged in a meme. All that good stuff.
34:42 PAT: Exactly.
34:43 AUSTIN: No, you’re completely right. And that’s become the social norm. So no one really questions that. And also, if you aren’t doing that, you might actually be seen as an outsider at this point. I think especially with where our generation is in this. We grew up with tech so it seems a bit more familiar.
Not so much Instagram, that was by the time we were in college. But it’s still very familiar. We’re young enough where we can interact with it and it feels right.
So yeah, it’s a bit terrifying to think. And I don’t know, it’s really hard to say the long-term effects right now, because it just hasn’t been around…
35:14 PAT: There are no long-term effects. It hasn’t even been around in the long-term.
35:16 AUSTIN: Yeah.
35:18 PAT: So I think that the way that I kind of want… so we’ve kind of been all over the place with these discussions. But it ties into that piece of “is technology good or bad?” And is the progressive nature of technology good or bad for humanity?
And I think that the answer is kind of two-fold. I think that it is good in a sense because it keeps us as humans moving forward and innovating. And doing all the things that have kept us on this planet for so long.
But I do think that it comes with a detriment that a lot of people fail to understand. And once you understand the long-term consequences of your dependence on technology then… because if you think about it, it’s like you have these data breaches, where 50 million people’s worth of user data was hacked. If we didn’t have that dependence and propensity to use technology and social media as often, there wouldn’t be as much personal data to hack.
And I think that’s a clear cut detriment…
36:21 AUSTIN: And unfortunately there’s no going back now. So we look at this and all we can do at this point is learn from it and make better decisions as a society and then of course, as these tech companies that control a large part of our social interactions now. And what they do with our data… we seem to be more aware of the fact that it can go sideways very fast. So I don’t think any of us ever realized that that was occurring. I think as internet marketers we have access to this data so that we can serve people ads. So we have an understanding that it is there.
But for the general population… this isn’t front of mind. When you log into Facebook you’re not thinking about where your private data is going. You just think you logged into Facebook.
So now we know that. Now we know, “Hey, this is happening. Our data’s being taken, it’s being used. It’s being repurposed for something else that I didn’t want to be a part of.”
So now we need to demand that our social networks have blocks in place so that our data isn’t being misused.
We need to be aware of what’s being served to us, so that we think about its source. Where’s it coming from? Is it validated by anything? What does this mean? What do I think of it?
And we need to have a conversation around it. I think as a society we need to be a little bit more open about what we’re reading and what we’re consuming because it turns out that this might be planted there based on just innocent interactions with each other.
So it’s important to have the conversation around where something is coming from.
37:41 PAT: Okay, so you touched on something that’s really interesting. So there’s widespread usage of platforms and technology that do consume your personal data. And it’s been well documented, between Equifax, Facebook, and numerous other platforms that we shouldn’t even care to get into, that your data can easily be compromised.
And that has been widespread. People know that nationally and internationally now. Those two things kind of contradict one another in a sense, to me, because you would assume that if your data gets breached on a platform, you would probably not use it again. But what we’re seeing is that’s not the case. People need to use these platforms because they’re used to it. It’s a habitual thing.
38:23 AUSTIN: And I think that’s what you touched upon with dopamine. That’s the reason why.
38:26 PAT: Yeah. It’s an addictive feeling.
38:27 AUSTIN: We humans created something virtual that now is so important in reality that we can no longer live without it. And my first reaction was not to go, “Well, I’m quitting Facebook.” That’s not what I said. I said, “Oh, we need to get smarter about how we’re using Facebook.” Because I’m not going to stop using Facebook. You’re not going to stop using Facebook. I don’t want to stop using Facebook.
So, we’ve reached a point where there’s no going back. And what does that mean for society? We don’t really know yet. But that’s just where we’re at.
38:53 PAT: Yeah. I think that in the long-term, we’re not going to see daily active user counts go down very much for these platforms. I do think that we’re going to see better security. But you can bet that we are going to see another breach at some point, probably in the near future. Though I would hate to see that.
Probably in the near future, we’re going to see another type of data breach, because hackers are exactly as far ahead technologically as the good-guy developers that are trying to protect you. So I’m honestly kind of excited to talk about that on the show when it comes up again.
39:21 AUSTIN: I just hope it’s not Amazon Web Services. AWS controls a lot of the data on the internet.
39:25 PAT: If Amazon got hacked they would be… oh, man.
Yeah, Amazon did 43% of all online sales last year. Any transaction anywhere.
39:33 AUSTIN: E-commerce data, yes, but then how many companies who aren’t even in e-comm use it just for B2B cloud storage? All that stuff is just data. It’s just data. Every type of data you could possibly think of is sitting on storage that Amazon has.
39:50 PAT: Imagine if Oracle got hacked.
39:52 AUSTIN: Oracle. Another great example. Imagine if Oracle got hacked. What would that mean for society? They have data as minute as what tax bracket you’re in. And then what your Social Security Number is. It’s on there too.
So it’s a very worrisome, but also exciting time. As we said, we’re probably going to be discussing this a lot more in the future. But hey, get out there and have the right conversations. Definitely pay attention to what you’re being served. We say that as people who study people’s data in order to serve you advertisements.
And then, of course, from an organic search perspective I am constantly looking at the intent of someone’s search. It’s very important for you to be aware of what you’re being served.
So that’s the 2 cents I have.
40:36 PAT: All right, you guys. Thank you so much for joining us for episode 29 of Flip the Switch podcast presented by Power Digital Marketing. We will be back again next week with some more great content for you guys. Following up on some of the business news and trends that we talked about today.
But until that time, feel free to hit us up on our socials. That’s @flipswitchcast on both Instagram and Twitter. This has been Pat Kriedler, Austin Mahaffy, and Joe Hollerup signing off.