
Ad Revenue On ExTwitter Still In Free Fall In Second Year Of Elon’s Reign

Turns out that when you tell advertisers to go fuck themselves, sue the advertisers who did just that, and then promise you won’t do anything to stop the worst people in the world from spewing hate and bigotry on your platform, it might not be great for business.

Who knew? Elon, apparently.

Last week we noted that ad execs were saying that Elon’s latest antics were only making them even less interested in advertising on ExTwitter, but there hasn’t been as much talk lately about the financial situation the company is in.

In the first year after Elon took over, there were a number of reports suggesting ad revenue dropped somewhere between 50% and 70%. Elon has admitted that the company’s overall valuation is probably down by nearly 60%.

But most of that covered where things stood in that first year post-Elon. Since then, there’s been little data on how things were actually going. Linda Yaccarino has insisted that many of the advertisers who left came back, though when people looked at the details, it appeared that the few who returned had merely dipped their toes in the ExTwitter waters rather than fully coming back.

And indeed, all we’ve been hearing this year is that Musk and Yaccarino are trying to woo back advertisers. Again. And again. Though, suing them isn’t doing them any favors.

However, buried in a recent Fortune article is the first time I’ve seen any data showing how badly the second year of Elon has gone. While the main focus of the article is on how Elon may have to sell some more of his Tesla stock to fund ExTwitter, it notes that ad revenue has continued to drop and was 53% lower than it was in 2023 (i.e., already after Elon had taken over, and many advertisers had bailed).

And the article says that ad revenue is down an astounding 84% from when Elon took over, based on an analysis by Bradford Ferguson, the chief investment officer at an asset management firm:

Ferguson based his assessment on internal second-quarter figures recently obtained by the New York Times. According to this report, X booked $114 million worth of revenue in the U.S., its largest market by far. This represented a 25% drop over the preceding three months and a 53% drop over the year-ago period.

That already sounds bad. But it gets worse. The last publicly available figures prior to Musk’s acquisition, from Q2 of 2022, had revenue at $661 million. After you account for inflation, revenue has actually collapsed by 84%, in today’s dollars.

Ouch.
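If you want to check Ferguson’s arithmetic yourself, here’s a quick back-of-the-envelope version. The revenue figures come from the excerpt above; the ~9% cumulative inflation number for mid-2022 through mid-2024 is my own rough assumption, so treat the output as approximate:

```python
# Back-of-the-envelope check on Ferguson's numbers. Revenue figures are
# from the article; the ~9% cumulative CPI assumption is mine.

q2_2022 = 661_000_000   # last public pre-acquisition quarterly US revenue
q2_2024 = 114_000_000   # internal figure reported by the New York Times

nominal_drop = 1 - q2_2024 / q2_2022
print(f"Nominal drop: {nominal_drop:.0%}")            # ~83%

cumulative_inflation = 0.09                           # assumed, not from the article
real_q2_2024 = q2_2024 / (1 + cumulative_inflation)   # deflate to 2022 dollars
real_drop = 1 - real_q2_2024 / q2_2022
print(f"Inflation-adjusted drop: {real_drop:.0%}")    # ~84%, matching Ferguson
```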

A separate report from Quartz (pulling from MediaRadar research) suggests the numbers aren’t quite that dire, but they still see a 24% decline in 2024 compared to 2023. And when the 24% decline is the better report, you know you’re in serious trouble.

Advertisers apparently spent almost $744 million on X, formerly known as Twitter, during the first six months of 2024. That’s about 24% lower than the more than $982 million advertisers dropped on the platform in the first half of 2023, according to ad-tracking company MediaRadar.

No matter how you look at it, it appears that in the second year of Elon’s control, advertising revenue remains in free fall.

No wonder he’s resorted to suing. Platforming more awful people and undermining each deal that Yaccarino brings in hasn’t magically helped turn things around.

Anyway, for no reason at all, I’ll just remind people that Elon’s pitch to investors to help fund some of the $44 billion takeover of Twitter was that he would increase revenue to $26.4 billion by 2028. And, yes, the plan was to diversify that revenue, but his pitch deck said that ad revenue would generate $12 billion by 2028. This would mean basically doubling the ~$6 billion in ad revenue the company was making at the time Elon purchased it. But now that’s been cut to maybe $1.5 billion and probably less.
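Just to put numbers on the gap, here’s a rough sketch using the article’s round figures (the growth-rate arithmetic is mine, purely for illustration):

```python
# Rough gap between the 2022 pitch-deck ad target and the current
# trajectory. Dollar figures are the article's round numbers.

target_2028 = 12.0e9    # pitch deck: $12B in ad revenue by 2028
run_rate = 1.5e9        # "maybe $1.5 billion and probably less"

print(f"Current run rate: {run_rate / target_2028:.1%} of target")    # 12.5%

# Annual growth needed to hit the target over the four years 2024-2028:
required_cagr = (target_2028 / run_rate) ** (1 / 4) - 1
print(f"Required growth: ~{required_cagr:.0%} per year")              # ~68%
```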

I’m guessing that Elon and Linda might fall a wee bit short of their target here.

Jim Jordan Demands Advertisers Explain Why They Don’t Advertise On MAGA Media Sites

Remember last month when ExTwitter excitedly “rejoined GARM” (the Global Alliance for Responsible Media, an advertising consortium focused on brand safety)? And then, a week later, after Rep. Jim Jordan released a misleading report about GARM, Elon Musk said he was going to sue GARM and hoped criminal investigations would be opened?

Unsurprisingly, Jordan has now ratcheted things up a notch by sending investigative demands to a long list of top advertisers associated with GARM. The letter effectively accuses these advertisers of antitrust violations for choosing not to advertise on conservative media sites, based on GARM’s recommendations on how to best protect brand safety.

The link there shows all the letters, but we’ll just stick with the first one, to Adidas. The letter doesn’t make any demands specifically about ExTwitter, but does name the GOP’s favorite media sites, and demands to know whether any of these advertisers agreed not to advertise on those properties. In short, this is an elected official demanding to know why a private company chose not to give money to media sites that support that elected official:

Was Adidas Group aware of the coordinated actions taken by GARM toward news outlets and podcasts such as The Joe Rogan Experience, The Daily Wire, Breitbart News, or Fox News, or other conservative media? Does Adidas Group support GARM’s coordinated actions toward these news outlets and podcasts?

Jordan is also demanding all sorts of documents and answers to questions. He is suggesting strongly that GARM’s actions (presenting ways that advertisers might avoid, say, having their brands show up next to neo-Nazi content) were a violation of antitrust law.

This is all nonsense. First of all, choosing not to advertise somewhere is protected by the First Amendment. And there are good fucking reasons not to advertise on media properties most closely associated with nonsense peddling, extremist culture wars, and just general stupidity.

Even more ridiculous is that the letter cites NAACP v. Claiborne Hardware, which is literally the Supreme Court case that establishes that group boycotts are protected speech. It’s the case that says that not supporting a business as a form of protest, while economic activity, is still protected speech and can’t be regulated by the government (and it’s arguable whether what GARM does is even a boycott at all).

As the Court noted, in holding that organizing a boycott was protected by the First Amendment:

The First Amendment similarly restricts the ability of the State to impose liability on an individual solely because of his association with another.

But, of course, one person who is quite excited is Elon Musk. He quote tweeted (they’re still tweets, right?) the House Judiciary’s announcement of the demands with a popcorn emoji.

So, yeah. Mr. “Free Speech Absolutist,” who claims the Twitter files show unfair attempts by governments to influence speech, now supports the government trying to pressure brands into advertising on certain media properties. It’s funny how the “free speech absolutist” keeps throwing the basic, fundamental principles of free speech out the window the second he doesn’t like the results.

That’s not supporting free speech at all. But, then again, for Elon to support free speech, he’d first have to learn what it means, and he’s shown no inclination to ever do that.

EA has teams looking at "thoughtful implementations" of ads "inside its game experiences"

EA CEO Andrew Wilson has revealed plans to "harness the power" of its game communities and use advertising as a "meaningful driver of growth" for the company.

In EA's latest financial report, Wilson fielded comments from investors, advising that whilst the megacorp needs to be "very thoughtful about advertising", given "many, many billions of hours" are spent on its games, "advertising has an opportunity to be a meaningful driver of growth for [EA]".

"To answer your question on advertising broadly, again, I think it's still early on that front," Wilson said. "And we have looked over the course of our history to be very thoughtful about advertising in the context of our play experiences.


Congressional Testimony On Section 230 Was So Wrong That It Should Be Struck From The Record

A few months ago, we wondered if Wired had fired its entire fact-checking staff because it published what appeared to be a facts-optional article co-authored by the professionally, consistently wrong Jaron Lanier and an academic I’d not come across before, Allison Stanger. The article suggested that getting rid of Section 230 “could save everything.” Yet the article was so far off-base that it was in the “not even wrong” category of wrongness.

I’m not going to review all the reasons it was wrong. You can go back to my original article for that, though I will note that the argument seemed to suggest that getting rid of Section 230 would both lead to better content moderation and, at the same time, to moderation based only on the First Amendment. Both of those points are obviously wrong, but the latter one is incoherent.

Given his long track record of wrongness, I had assumed that much of the article likely came from Lanier. However, I’m going to reassess that in light of Stanger’s recent performance before the House Energy & Commerce Committee. Last week, there was this weird hearing about Section 230, in which the Committee invited three academic critics of Section 230, and not a single person who could counter their arguments and falsehoods. We talked about this hearing a bit in this week’s podcast, with Rebecca MacKinnon from the Wikimedia Foundation.

Stanger was one of the three witnesses. The other two, Mary Anne Franks and Mary Graw Leary, presented some misleading and confused nonsense about Section 230. However, their misleading and confused nonsense at least fits into the normal framework of the debate around Section 230: confusion about how (c)(1) and (c)(2) interact, confusion about the purpose of Section 230, and (especially) confusion about CSAM and Section 230, along with an apparent unawareness that federal criminal behavior is exempted from Section 230.

But, let’s leave that aside. Because Stanger’s submission was so far off the mark that whoever invited her should be embarrassed. I’ve seen some people testify before Congress without knowing what they’re talking about, but I cannot recall seeing testimony this completely, bafflingly wrong before. Her submitted testimony is wrong in all the ways that the Wired article was wrong and more. There are just blatant factual errors throughout it.

It is impossible to cover all of the nonsense, so we’re just going to pick some gems.

Without Section 230, existing large social media companies would have to adapt. Decentralized Autonomous Organizations, (DAOs) such as BlueSky and Mastodon, would become more attractive. The emergent DAO social media landscape should serve to put further brakes on virality, allowing a more regional social media ecosystem to emerge, thereby creating new demand for local media. In an ideal world, networks of DAOs would comprise a new fediverse (a collection of social networking servers which can communicate with each other, while remaining independently controlled), where users would have greater choice and control over the communities of which they are a part.

So, um. That’s not what DAOs are, professor. You seem to be confusing decentralized social media with decentralized autonomous organizations, which are a wholly different thing. This is kind of like saying “social security benefits” when you mean “social media influencers” because both begin with “social.” They’re not the same thing.

A decentralized social media site is what it says on the tin. It’s a type of social media that isn’t wholly controlled by a single company. Different bits of it can be controlled by others, whether its users or alternative third-party providers. A DAO is an organization that often uses mechanisms like cryptocurrency and tokens to enable a kind of democratic voting, or (possibly) a set of smart contracts, to determine how the loosely defined organization is run. They are not the same.

In theory, a decentralized social media site could be run by a DAO, but I don’t know of any that currently are.
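To make the distinction concrete, here’s a minimal, purely hypothetical sketch of the two ideas side by side (not any real project’s code; every name here is invented): a federated server simply hosts and relays posts, while a DAO tallies token-weighted votes on governance proposals.

```python
# Purely illustrative; neither class models Bluesky, Mastodon, or any real DAO.

class FederatedServer:
    """Decentralized social media: one independently run server among many."""

    def __init__(self, domain):
        self.domain = domain
        self.posts = []

    def publish(self, author, text):
        # Hosts and relays user speech; moderation policy is set by
        # whoever runs this particular server.
        post = {"author": f"{author}@{self.domain}", "text": text}
        self.posts.append(post)
        return post


class TokenVoteDAO:
    """Decentralized autonomous organization: token-weighted governance."""

    def __init__(self, balances):
        self.balances = balances  # member address -> token holdings

    def vote(self, proposal, ballots):
        # ballots: address -> True/False, weighted by tokens held
        yes = sum(self.balances.get(a, 0) for a, v in ballots.items() if v)
        no = sum(self.balances.get(a, 0) for a, v in ballots.items() if not v)
        return {"proposal": proposal, "passed": yes > no}


server = FederatedServer("example.social")
server.publish("alice", "Hello, fediverse!")

dao = TokenVoteDAO({"0xA": 100, "0xB": 40})
print(dao.vote("fund a new feature", {"0xA": True, "0xB": False}))
# {'proposal': 'fund a new feature', 'passed': True}
```

One hosts speech; the other runs an organization. Conflating them is exactly the “social security benefits” vs. “social media influencers” error described above.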

Also, um, decentralized social media can only really exist because of Section 230. “Without Section 230,” you wouldn’t have Bluesky or Mastodon, because they would face ruinous litigation for hosting content that people would sue over. So, no, you would not have either more decentralized social media (which I think is what you meant) or DAOs (which are wholly unrelated). You’d have a lot less, because hosting third-party speech would come with way more liability risk.

Also, there’s nothing inherent to decentralized social media that means you’d “put the brakes on virality.” Mastodon has developed to date in a manner designed to tamp down virality, but Bluesky hasn’t. Nor have other decentralized social media offerings, many of which hope to serve a global conversation where virality is a part of it. And that wouldn’t really change with or without Section 230. Mastodon made that decision because of the types of communities it wanted to foster. And, indeed, its ability to do that is due, in part, to intermediary liability protections like Section 230, which enable the kind of small, more focused community moderation Mastodon already embraces.

It’s really not clear to me that Professor Stanger even knows what Section 230 does.

Non-profits like Wikipedia are concerned that their enterprise could be shut down through gratuitous defamation lawsuits that would bleed them dry until they ceased to exist (such as what happened with Gawker). I am not convinced this is a danger for Wikipedia, since their editing is done by humans who have first amendment rights, and their product is not fodder for virality….

Again, wut? The fact that their editing is “done by humans” has literally no impact on anything here. Why even mention that? Humans get sued for defamation all the time. And, if they’re more likely to get sued for defamation, they’re less likely to even want to edit at all.

And people get mad about their Wikipedia articles all the time, and sometimes they sue over them. Section 230 gets those lawsuits thrown out. Without it, those lawsuits would last longer and be more expensive.

Again, it’s not at all clear if Prof. Stanger even knows what Section 230 is or how it works.

The Facebook Files show that Meta knew that its engagement algorithms had adverse effects on the mental health of teenage girls, yet it has done nothing notable to combat those unintended consequences. Instead, Meta’s lawyers have invoked Section 230 in lawsuits to defend itself against efforts to hold it liable for serious harms

Again, this is just wrong. What the crux of the Facebook Files showed was that Meta was, in fact, doing research to learn where its algorithms might cause harm in order to try to minimize that harm. However, thanks to some bad reporting, companies will now be less likely to even do that research, because people like Professor Stanger will misrepresent it, claiming that they did nothing to try to limit the harms. This is just outright false information.

Also, the cases where Meta has invoked Section 230 would be unrelated to the issue being discussed here because 230 is about not being held liable for user content.

The online world brought to life by Section 230 now dehumanizes us by highlighting our own insignificance. Social media and cancel culture make us feel small and vulnerable, where human beings crave feeling large and living lives of meaning, which cannot blossom without a felt sense of personal agency that our laws and institutions are designed to protect. While book publishers today celebrate the creative contributions of their authors, for-profit Internet platforms do not.

I honestly have no idea what’s being said here. “Dehumanizes us by highlighting our own insignificance?” What are you even talking about? People were a lot more “insignificant” pre-internet, when they had no way to speak out. And what does “cancel culture” have to do with literally any of this?

Without Section 230, companies would be liable for the content on their platforms. This would result in an explosion of lawsuits and greater caution in such content moderation, although companies would have to balance such with first amendment rights. Think of all the human jobs that could be generated!

Full employment for tort lawyers! I mean, this is just a modern version of Bastiat’s broken window fallacy. Think of all the economic activity if we just break all the windows in our village!

Again and again, it becomes clear that Stanger has no clue how any of this works. She does not understand Section 230. She does not understand the internet. She does not understand the First Amendment. And she does not understand content moderation. It’s a hell of a thing, considering she is testifying about Section 230 and its impact on social media and the First Amendment.

At a stroke, content moderation for companies would be a vastly simpler proposition. They need only uphold the First Amendment, and the Courts would develop the jurisprudence to help them do that, rather than to put the onus of moderation entirely on companies.

That is… not at all how it would work. They don’t just need to “uphold the First Amendment” (which is not a thing that companies can even do). The First Amendment’s only role is in restricting the government, not companies, from passing laws that infringe on a person’s ability to express themselves.

Instead, as has been detailed repeatedly, companies would face the so-called “moderator’s dilemma.” Because the First Amendment requires distributors to have actual knowledge of content violating the law to be liable, a world without Section 230 would incentivize one of two things, neither of which is “upholding the First Amendment.” They would either let everything go and do as little moderation as possible (so as to avoid the requisite knowledge), or they’d become very aggressive in limiting and removing content to avoid liability (even though this wouldn’t work and they’d still get hit with tons of lawsuits).

We’ve been here before. When government said the American public owned the airwaves, so television broadcasting would be regulated, they put in place regulations that supported the common good. The Internet affects everyone, and our public square is now virtual, so we must put in place measures to ensure that our digital age public dialogue includes everyone. In the television era, the fairness doctrine laid that groundwork. A new lens needs to be developed for the Internet age.

Except, no, that’s just factually wrong. The only reason that the government was able to put regulations on broadcast television was because the government controlled the public spectrum which they licensed to the broadcasters. The Supreme Court made clear in Red Lion that without that, they could not hinder the speech of media companies. So, the idea that you can just apply similar regulations to the internet is just fundamentally clueless. The internet is not publicly owned spectrum licensed to anyone.

While Section 230 perpetuates an illusion that today’s social media companies are common carriers like the phone companies, they are not. Unlike Ma Bell, they curate the content they transmit to users

Again, it appears the Professor is wholly unaware of Section 230 and how it works. The authors of Section 230 made it clear over and over again that they wrote 230 to be the opposite of common carriers. No one who supports Section 230 thinks it makes platforms into common carriers, because it does not. The entire point was to free up companies to choose how to curate content, so as to allow those companies to craft the kinds of communities they wanted. The only people claiming the “illusion” of common carrierness are those who are trying to destroy Section 230.

So there is no “illusion” here, unless you don’t understand what you’re talking about.

The repeal of Section 230 would also be a step in the right direction in addressing what are presently severe power imbalances between government and corporate power in shaping democratic life. It would also shine a spotlight on a globally disturbing fact: the overwhelming majority of global social media is currently in the hands of one man (Mark Zuckerberg), while nearly half the people on earth have a Meta account. How can that be a good thing under any scenario for the free exchange of ideas?

I mean, we agree that it’s bad that Meta is so big. But if you remove Section 230 (as Meta itself has advocated for!), you help Meta get bigger and harm the competition. Meta has a building full of lawyers. They can handle the onslaught of lawsuits that this would bring (as Stanger herself gleefully cheers on). It’s everyone else, the smaller sites, such as the decentralized players (not DAOs) who would get destroyed.

Mastodon admins aren’t going to be able to afford to pay to defend the lawsuits. Bluesky doesn’t have a building full of lawyers. The big winner here would be Meta. The cost to Meta of removing Section 230 is minimal. The cost to everyone trying to eat away at Meta’s marketshare would be massive.

The new speech is governed by the allocation of virality in our virtual public square. People cannot simply speak for themselves, for there is always a mysterious algorithm in the room that has independently set the volume of the speaker’s voice. If one is to be heard, one must speak in part to one’s human audience, in part to the algorithm. It is as if the constitution had required citizens to speak through actors or lawyers who answered to the Dutch East India Company, or some other large remote entity. What power should these intermediaries have? When the very logic of speech must shift in order for people to be heard, is that still free speech? This was not a problem foreseen in the law.

I mean, this is just ahistorical nonsense. Historically, most people had no way to get their message out at all. You could talk to your friends, family, co-workers, and neighbors, and that was about it. If you wanted to reach beyond that small group, you required some large gatekeeper (a publisher, a TV or radio producer, a newspaper) to grant you access, which they refused for the vast majority of people.

The internet flipped all that on its head, allowing anyone to effectively speak to anyone. The reason we have algorithms is not “Section 230,” and the algorithms aren’t “setting the volume”; they came in to deal with the simple fact that there’s just too much information flooding the zone. People wanted to find information that was more relevant to them, and with the amount of content available online, the only way to manage that was with some sort of algorithm.
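For a sense of what “some sort of algorithm” means in practice, here’s a toy ranking sketch (purely illustrative, not any platform’s actual system; every field name and weight is invented): score each post by engagement and recency, and surface the highest-scoring ones first.

```python
import math
import time

# Toy relevance ranker. Real systems are vastly more elaborate; this just
# shows the basic shape: engagement boosts a post, age decays it.

def rank_feed(posts, now=None, half_life_hours=12.0):
    now = now if now is not None else time.time()

    def score(post):
        age_hours = (now - post["created_at"]) / 3600
        decay = 0.5 ** (age_hours / half_life_hours)      # newer scores higher
        engagement = math.log1p(post["likes"] + 2 * post["reposts"])
        return engagement * decay

    return sorted(posts, key=score, reverse=True)

now = time.time()
feed = [
    {"text": "old but popular", "likes": 900, "reposts": 300,
     "created_at": now - 48 * 3600},
    {"text": "fresh and modest", "likes": 12, "reposts": 3,
     "created_at": now - 1 * 3600},
]
for post in rank_feed(feed, now=now):
    print(post["text"])   # "fresh and modest" outranks "old but popular"
```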

But, again, the rise of algorithms is not a Section 230 issue, even though Stanger seems to think it is.

Getting rid of the liability shield for all countries operating in the United States would have largely unacknowledged positive implications for national security, as well as the profit margins for US-headquartered companies. Foreign electoral interference is not in the interests of democratic stability, precisely because our enemies benefit from dividing us rather than uniting us. All foreign in origin content could therefore be policed at a higher standard, without violating the first amendment or the privacy rights of US citizens. As the National Security Agency likes to emphasize, the fourth amendment does not apply to foreigners and that has been a driver of surveillance protocols since the birth of the Internet. It is probable that the Supreme Court’s developing first amendment jurisprudence for social media in a post-230 world would embrace the same distinction. At a stroke, the digital fentanyl that TikTok represents in its American version could easily be shut down, and we could through a process of public deliberation leading to new statutory law collectively insist on the same optimization targets for well-being, test scores, and time on the platform that Chinese citizens currently enjoy in the Chinese version of TikTok (Douyin)

Again, this is a word salad that is mostly meaningless.

First of all, none of this has anything to do with Section 230, but rather the First Amendment. And it’s already been noted, clearly, that the First Amendment protects American users of foreign apps.

No one is saying “you can’t ban TikTok because of 230,” they’re saying “you can’t ban TikTok because of the First Amendment.” The Supreme Court isn’t going to magically reinvent long-standing First Amendment doctrine because 230 is repealed. This is nonsense.

And, we were just discussing what utter nonsense it is to claim that TikTok is “digital fentanyl” so I won’t even bother repeating that.

There might also be financial and innovation advantages for American companies with this simple legislative act. Any commercial losses for American companies from additional content moderation burdens would be offset by reputational gains and a rule imposed from without on what constitutes constitutionally acceptable content. Foreign electoral interference through misinformation and manipulation could be shut down as subversive activity directed at the Constitution of the United States, not a particular political party.

This part is particularly frustrating. This is why internet companies already moderate. Stanger’s piece repeatedly seems to complain both about too little moderation (electoral interference! Alex Jones!) and too much moderation (algorithms! dastardly Zuck deciding what I can read!).

She doesn’t even seem to realize that her argument is self-contradictory.

But, here, the supposed “financial and innovation advantages” from American companies being able to get “reputational gains” by stopping “misinformation” already exist. And they exist only because of Section 230, the very thing Professor Stanger says we need to remove, even though repealing it would take away exactly what she wants.

This whole thing makes me want to bang my head on my desk repeatedly.

Companies moderate today to (1) make users’ experience better and (2) to make advertisers happier that they’re not facing brand risk from having ads appear next to awful content. The companies that do better already achieve that “reputational benefit,” and they can do that kind of moderation because they know Section 230 prevents costly, wasteful, vexatious litigation from getting too far.

If you remove Section 230, that goes away. As discussed above, companies then are much more limited in the kinds of moderation they can do, which means users have a worse experience and advertisers have a worse experience, leading to reputational harm.

Today, companies already try to remove or diminish the power of electoral interference. That’s a giant part of trust & safety teams’ efforts. But they can really only do it safely because of 230.

The attention-grooming model fostered by Section 230 leads to stupendous quantities of poor-quality data. While an AI model can tolerate a significant amount of poor-quality data, there is a limit. It is unrealistic to imagine a society mediated by mostly terrible communication where that same society enjoys unmolested, high-quality AI. A society must seek quality as a whole, as a shared cultural value, in order to maximize the benefits of AI. Now is the best time for the tech business to mature and develop business models based on quality.

I’ve read this paragraph multiple times, and I still don’t know what it’s saying. Section 230 does not lead to an “attention-grooming model.” That’s just how society works. And, then, when she says society must seek quality as a whole, given how many people are online, the only way to do that is with algorithms trying to make some sort of call on what is, and what is not, quality.

That’s how this works.

Does she imagine that without Section 230, algorithms will go away, but good quality content will magically rise up? Because that’s not how any of this actually works.

Again, there’s much more in her written testimony, and none of it makes any sense at all.

Her spoken testimony was just as bad. Rep. Bob Latta asked her about the national security claims (some of which were quoted above) and we got this word salad, none of which has anything to do with Section 230:

I think it’s important to realize that our internet is precisely unique because it’s so open and that makes it uniquely vulnerable to all sorts of cyber attacks. Just this week, we saw an extraordinarily complicated plot that is most likely done by China, Russia or North Korea that could have blown up the internet as we know it. If you want to look up XZ Utils, Google that and you’ll find all kinds of details. They’re still sorting out what the intention was. It’s extraordinarily sophisticated though, so I think that the idea that we have a Chinese company where data on American children is being stored and potentially utilized in China, can be used to influence our children. It can be used in any number of ways no matter what they tell you. So I very much support and applaud the legislation to repeal, not to repeal, but to end TikToks operations in the United States.

The national security implications are extraordinary. Where the data is stored is so important and how it can be used to manipulate and influence us is so important. And I think the next frontier that I’ll conclude with this, for warfare, is in cyberspace. It’s where weak countries have huge advantages. They can pour resources into hackers who could really blow up our infrastructure, our hospitals, our universities. They’re even trying to get, as you know, into the House. This House right here. So I think repealing Section 230 is connected to addressing a host of potential harms

Nothing mentioned in there — from supply chain attacks like xz utils, to a potential TikTok ban, to hackers breaking into hospitals — has anything whatsoever to do with Section 230. She just throws it in at the end as if they’re connected.

She also claimed that Eric Schmidt has come out in favor of “repealing Section 230,” which was news to me. It also appears to be absolutely false. I went and looked, and the only thing I can find is a Digiday article which claims he called for reforms (not a repeal). The article never actually quotes him saying anything related to Section 230 at all, so it’s unclear what (if anything) he actually said. Literally the only quotes from Schmidt are old man stuff about how the kids these days just need to learn how to put down their phones, and then something weird about the fairness doctrine. Not 230.

Later in the hearing, she was asked about the impact on smaller companies (some of which I mentioned above) and again demonstrated a near-total ignorance of how this all works:

There is some concern, it’s sometimes expressed from small businesses that they are going to be the subject of frivolous lawsuits, defamation lawsuits, and they can be sued out of business even though they’ve defamed no one. I’m less concerned about that because if we were to repeal section (c)(1) of Section 230 of those 26 words, I think the First Amendment would govern and we would develop the jurisprudence to deal with small business in a more refined way. I think if anything, small businesses are in a better position to control and oversee what’s on their platforms than these monolithic large companies we have today. So with a bit of caution, I think that could be addressed.

The First Amendment always governs. But Section 230 is the “more refined way” we’ve developed to help protect small businesses. The main function of Section 230 is to get cases that would be long and costly to defend under the First Amendment tossed out much earlier, at the motion to dismiss stage. That is literally Section 230’s main purpose.

If you had to fight it out under the First Amendment, you’re talking about hundreds of thousands of dollars and a much longer case. And that cost is going to lead companies to (1) refuse to host lots of protected content, because it’s not worth the hassle, and (2) be much more open to pulling down any content that anyone complains about.

This is not speculative. There have been studies on this. Weaker intermediary laws always lead to massive overblocking. If Stanger had done her research, or even understood any of this, she would know this.

So why is she the one testifying before Congress?

I’ll just conclude with this banger, which was her final statement to Congress:

I just want to maybe take you back to the first part of your question to explain that, which I thought was a good one, which is that we have a long history of First Amendment jurisprudence in this country that in effect has been stopped by Section 230. In other words, if you review, if you remove (c)(1), that First Amendment jurisprudence will develop to determine when it is crime fire in a crowded theater, whether there’s defamation, whether there’s libel. We believe in free speech in this country, but even the First Amendment has some limits put on it and those could apply to the platforms. We have a strange situation right now if we take that issue of fentanyl that we were discussing earlier, what we have right now is essentially a system where we can go after the users, we can go after the dealers, but we can’t go after the mules. And I think that’s very problematic. We should hold the mules liable. They’re part of the system.

Yeah. So. She actually went to the whole fire in a crowded theater thing. This is the dead-on giveaway that the person speaking has no clue about the First Amendment. That’s dicta from a case decided over 100 years ago that is no longer considered good law, and hasn’t been in decades. Even worse, that dicta came in a case about jailing war protestors.

She also trots out yet another of Ken “Popehat” White’s (an actual First Amendment expert) most annoying tropes about people opining on the First Amendment without understanding it: because the First Amendment has some limits, this new limit must be okay. That’s not how it works. As Ken and others have pointed out, the exceptions to the First Amendment are an established, known, and almost certainly closed set.

The Supreme Court has no interest in expanding that set. It refused to do so for animal crush videos, so it’s not going to magically do it for whatever awful speech you think it should limit.

Anyway, it was a shame that Congress chose to hold a hearing on Section 230 and only bring in witnesses who hate Section 230. Not a single witness who could explain why Section 230 is so important was brought in. But, even worse, they gave one of the three witness spots to someone who spewed word-salad-level nonsense that made no sense at all, was often factually incorrect (in hilariously embarrassing ways), and seemed wholly unaware of how any of the relevant things worked.

Do better, Congress.

Hyundai suspends advertising on Twitter after Nazi material runs with ads


NBC News reports that carmaker Hyundai has paused its ads on Twitter, citing the presence of neo-Nazi material alongside its own posts.

"We have paused our ads on X and are speaking to X directly about brand safety to ensure this issue is addressed," Hyundai said in the statement. 

Read the rest

The post Hyundai suspends advertising on Twitter after Nazi material runs with ads appeared first on Boing Boing.

Congressional Testimony On Section 230 Was So Wrong That It Should Be Struck From The Record

A few months ago, we wondered if Wired had fired its entire fact-checking staff because it published what appeared to be a facts-optional article co-authored by professional consistently wrong Jaron Lanier and an academic I’d not come across before, Allison Stanger. The article suggested that getting rid of Section 230 “could save everything.” Yet the article was so far off-base that it was in the “not even wrong” category of wrongness.

I’m not going to review all the reasons it was wrong. You can go back to my original article for that, though I will note that the argument seemed to suggest that getting rid of Section 230 would both lead to better content moderation and, at the same time, only moderation based on the First Amendment. Both of those points are obviously wrong, but the latter one is incoherent.

Given his long track record of wrongness, I had assumed that much of the article likely came from Lanier. However, I’m going to reassess that in light of Stanger’s recent performance before the House Energy & Commerce Committee. Last week, there was this weird hearing about Section 230, in which the Committee invited three academic critics of Section 230, and not a single person who could counter their arguments and falsehoods. We talked about this hearing a bit in this week’s podcast, with Rebecca MacKinnon from the Wikimedia Foundation.

Stanger was one of the three witnesses. The other two, Mary Anne Franks and Mary Graw Leary, presented some misleading and confused nonsense about Section 230. However, the misleading and confused nonsense about Section 230 at least fits into the normal framework of the debate around Section 230. There is confusion about how (c)(1) and (c)(2) interact, the purpose of Section 230, and (especially) some confusion about CSAM and Section 230 and an apparent unawareness that federal criminal behavior is exempted from Section 230.

But, let’s leave that aside. Because Stanger’s submission was so far off the mark that whoever invited her should be embarrassed. I’ve seen some people testify before Congress without knowing what they’re talking about, but I cannot recall seeing testimony this completely, bafflingly wrong before. Her submitted testimony is wrong in all the ways that the Wired article was wrong and more. There are just blatant factual errors throughout it.

It is impossible to cover all of the nonsense, so we’re just going to pick some gems.

Without Section 230, existing large social media companies would have to adapt. Decentralized Autonomous Organizations, (DAOs) such as BlueSky and Mastodon, would become more attractive. The emergent DAO social media landscape should serve to put further brakes on virality, allowing a more regional social media ecosystem to emerge, thereby creating new demand for local media. In an ideal world, networks of DAOs would comprise a new fediverse (a collection of social networking servers which can communicate with each other, while remaining independently controlled), where users would have greater choice and control over the communities of which they are a part.

So, um. That’s not what DAOs are, professor. You seem to be confusing decentralized social media with decentralized autonomous organizations, which are a wholly different thing. This is kind of like saying “social security benefits” when you mean “social media influencers” because both begin with “social.” They’re not the same thing.

A decentralized social media site is what it says on the tin. It’s a type of social media that isn’t wholly controlled by a single company. Different bits of it can be controlled by others, whether its users or alternative third-party providers. A DAO is an operation, often using mechanisms like cryptocurrency and tokens, to enable a kind of democratic voting, or (possibly) a set of smart contracts, that determine how the loosely defined organization is run. They are not the same.

In theory, a decentralized social media site could be run by a DAO, but I don’t know of any that currently are.

Also, um, decentralized social media can only really exist because of Section 230. “Without Section 230,” you wouldn’t have Bluesky or Mastodon, because they would face ruinous litigation for hosting content that people would sue over. So, no, you would not have either more decentralized social media (which I think is what you meant) or DAOs (which are wholly unrelated). You’d have a lot less, because hosting third-party speech would come with way more liability risk.

Also, there’s nothing inherent to decentralized social media that means you’d “put the brakes on virality.” Mastodon has developed to date in a manner designed to tamp down virality, but Bluesky hasn’t? Nor have other decentralized social media offerings, many of which hope to serve a global conversation where virality is a part of it. And that wouldn’t really change with or without Section 230. Mastodon made that decision because of the types of communities it wanted to foster. And, indeed, its ability to do that is, in part, due to intermediary liability protections like Section 230, that enable the kind of small, more focused community moderation Mastodon embraces already.

It’s really not clear to me that Professor Stanger even knows what Section 230 does.

Non-profits like Wikipedia are concerned that their enterprise could be shut down through gratuitous defamation lawsuits that would bleed them dry until they ceased to exist (such as what happened with Gawker). I am not convinced this is a danger for Wikipedia, since their editing is done by humans who have first amendment rights, and their product is not fodder for virality….

Again, wut? The fact that their editing is “done by humans” has literally no impact on anything here. Why even mention that? Humans get sued for defamation all the time. And, if they’re more likely to get sued for defamation, they’re less likely to even want to edit at all.

And people get mad about their Wikipedia articles all the time, and sometimes they sue over them. Section 230 gets those lawsuits thrown out. Without it, those lawsuits would last longer and be more expensive.

Again, it’s not at all clear if Prof. Stanger even knows what Section 230 is or how it works.

The Facebook Files show that Meta knew that its engagement algorithms had adverse effects on the mental health of teenage girls, yet it has done nothing notable to combat those unintended consequences. Instead, Meta’s lawyers have invoked Section 230 in lawsuits to defend itself against efforts to hold it liable for serious harms

Again, this is just wrong. What the crux of the Facebook Files showed was that Meta was, in fact, doing research to learn about where its algorithms might cause harm in order to try to minimize that harm. However, because of some bad reporting, it now means that companies will be less likely to even do that research, because people like Professor Stanger will misrepresent it, claiming that they did nothing to try to limit the harms. This is just outright false information.

Also, the cases where Meta has invoked Section 230 would be unrelated to the issue being discussed here because 230 is about not being held liable for user content.

The online world brought to life by Section 230 now dehumanizes us by highlighting our own insignificance. Social media and cancel culture make us feel small and vulnerable, where human beings crave feeling large and living lives of meaning, which cannot blossom without a felt sense of personal agency that our laws and institutions are designed to protect. While book publishers today celebrate the creative contributions of their authors, for-profit Internet platforms do not.

I honestly have no idea what’s being said here. “Dehumanizes us by highlighting our own insignificance?” What are you even talking about? People were a lot more “insignificant” pre-internet, when they had no way to speak out. And what does “cancel culture” have to do with literally any of this?

Without Section 230, companies would be liable for the content on their platforms. This would result in an explosion of lawsuits and greater caution in such content moderation, although companies would have to balance such with first amendment rights. Think of all the human jobs that could be generated!

Full employment for tort lawyers! I mean, this is just a modern version of Bastiat’s broken window fallacy. Think of all the economic activity if we just break all the windows in our village!

Again and again, it becomes clear that Stanger has no clue how any of this works. She does not understand Section 230. She does not understand the internet. She does not understand the First Amendment. And she does not understand content moderation. It’s a hell of a thing, considering she is testifying about Section 230 and its impact on social media and the First Amendment.

At a stroke, content moderation for companies would be a vastly simpler proposition. They need only uphold the First Amendment, and the Courts would develop the jurisprudence to help them do that, rather than to put the onus of moderation entirely on companies.

That is… not at all how it would work. They don’t just need to “uphold the First Amendment” (which is not a thing that companies can even do). The First Amendment’s only role is in restricting the government, not companies, from passing laws that infringe on a person’s ability to express themselves.

Instead, as has been detailed repeatedly, companies would face the so-called “moderator’s dilemma.” Because the First Amendment requires distributors to have actual knowledge of content violating the law to be liable, a world without Section 230 would incentivize one of two things, neither of which is “upholding the First Amendment.” They would either let everything go and do as little moderation as possible (so as to avoid the requisite knowledge), or they’d become very aggressive in limiting and removing content to avoid liability (even though this wouldn’t work and they’d still get hit with tons of lawsuits).

We’ve been here before. When government said the American public owned the airwaves, so television broadcasting would be regulated, they put in place regulations that supported the common good. The Internet affects everyone, and our public square is now virtual, so we must put in place measures to ensure that our digital age public dialogue includes everyone. In the television era, the fairness doctrine laid that groundwork. A new lens needs to be developed for the Internet age.

Except, no, that’s just factually wrong. The only reason that the government was able to put regulations on broadcast television was because the government controlled the public spectrum which they licensed to the broadcasters. The Supreme Court made clear in Red Lion that without that, they could not hinder the speech of media companies. So, the idea that you can just apply similar regulations to the internet is just fundamentally clueless. The internet is not publicly owned spectrum licensed to anyone.

While Section 230 perpetuates an illusion that today’s social media companies are common carriers like the phone companies, they are not. Unlike Ma Bell, they curate the content they transmit to users

Again, it appears the Professor is wholly unaware of Section 230 and how it works. The authors of Section 230 made it clear over and over again that they wrote 230 to be the opposite of common carriers. No one who supports Section 230 thinks it makes platforms into common carriers, because it does not. The entire point was to free up companies to choose how to curate content, so as to allow those companies to craft the kinds of communities they wanted. They only people claiming the “illusion” of common carrierness are those who are trying to destroy Section 230.

So there is no “illusion” here, unless you don’t understand what you’re talking about.

The repeal of Section 230 would also be a step in the right direction in addressing what are presently severe power imbalances between government and corporate power in shaping democratic life. It would also shine a spotlight on a globally disturbing fact: the overwhelming majority of global social media is currently in the hands of one man (Mark Zuckerberg), while nearly half the people on earth have a Meta account. How can that be a good thing under any scenario for the free exchange of ideas?

I mean, we agree that it’s bad that Meta is so big. But if you remove Section 230 (as Meta itself has advocated for!), you help Meta get bigger and harm the competition. Meta has a building full of lawyers. They can handle the onslaught of lawsuits that this would bring (as Stanger herself gleefully cheers on). It’s everyone else, the smaller sites, such as the decentralized players (not DAOs) who would get destroyed.

Mastodon admins aren’t going to be able to afford to pay to defend the lawsuits. Bluesky doesn’t have a building full of lawyers. The big winner here would be Meta. The cost to Meta of removing Section 230 is minimal. The cost to everyone trying to eat away at Meta’s marketshare would be massive.

The new speech is governed by the allocation of virality in our virtual public square. People cannot simply speak for themselves, for there is always a mysterious algorithm in the room that has independently set the volume of the speaker’s voice. If one is to be heard, one must speak in part to one’s human audience, in part to the algorithm. It is as if the constitution had required citizens to speak through actors or lawyers who answered to the Dutch East India Company, or some other large remote entity. What power should these intermediaries have? When the very logic of speech must shift in order for people to be heard, is that still free speech? This was not a problem foreseen in the law.

I mean, this is just ahistorical nonsense. Historically, most people had no way to get their message out at all. You could talk to your friends, family, co-workers, and neighbors, and that was about it. If you wanted to reach beyond that small group, you required some large gatekeeper (a publisher, a TV or radio producer, a newspaper) to grant you access, which they refused for the vast majority of people.

The internet flipped all that on its head, allowing anyone to effectively speak to anyone. The reason we have algorithms is not “Section 230” and the algorithms aren’t “setting the volume,” they came in to deal with the simple fact that there’s just too much information, and it was flooding the zone. People wanted to find information that was more relevant to them, and with the amount of content available online, the only way to manage that was with some sort of algorithm.

But, again, the rise of algorithms is not a Section 230 issue, even though Stanger seems to think it is.

Getting rid of the liability shield for all countries operating in the United States would have largely unacknowledged positive implications for national security, as well as the profit margins for US-headquartered companies. Foreign electoral interference is not in the interests of democratic stability, precisely because our enemies benefit from dividing us rather than uniting us. All foreign in origin content could therefore be policed at a higher standard, without violating the first amendment or the privacy rights of US citizens. As the National Security Agency likes to emphasize, the fourth amendment does not apply to foreigners and that has been a driver of surveillance protocols since the birth of the Internet. It is probable that the Supreme Court’s developing first amendment jurisprudence for social media in a post-230 world would embrace the same distinction. At a stroke, the digital fentanyl that TikTok represents in its American version could easily be shut down, and we could through a process of public deliberation leading to new statutory law collectively insist on the same optimization targets for well-being, test scores, and time on the platform that Chinese citizens currently enjoy in the Chinese version of TikTok (Douyin)

Again, this is a word salad that is mostly meaningless.

First of all, none of this has anything to do with Section 230, but rather the First Amendment. And it’s already been noted, clearly, that the First Amendment protects American users of foreign apps.

No one is saying “you can’t ban TikTok because of 230,” they’re saying “you can’t ban TikTok because of the First Amendment.” The Supreme Court isn’t going to magically reinvent long-standing First Amendment doctrine because 230 is repealed. This is nonsense.

And, we were just discussing what utter nonsense it is to claim that TikTok is “digital fentanyl” so I won’t even bother repeating that.

There might also be financial and innovation advantages for American companies with this simple legislative act. Any commercial losses for American companies from additional content moderation burdens would be offset by reputational gains and a rule imposed from without on what constitutes constitutionally acceptable content. Foreign electoral interference through misinformation and manipulation could be shut down as subversive activity directed at the Constitution of the United States, not a particular political party.

This part is particularly frustrating. This is why internet companies already moderate. Stanger’s piece repeatedly seems to complain both about too little moderation (electoral interference! Alex Jones!) and too much moderation (algorithms! dastardly Zuck deciding what I can read!).

She doesn’t even seem to realize that her argument is self-contradictory.

But, here, the supposed “financial and innovation advantages” from American companies being able to get “reputational gains” by stopping “misinformation” already exists. And it only exists because of Section 230. Which Professor Stanger is saying we need to remove to get the very thing it enables, and which would be taken away if it were repealed.

This whole thing makes me want to bang my head on my desk repeatedly.

Companies moderate today to (1) make users’ experience better and (2) to make advertisers happier that they’re not facing brand risk from having ads appear next to awful content. The companies that do better already achieve that “reputational benefit,” and they can do that kind of moderation because they know Section 230 prevents costly, wasteful, vexatious litigation from getting too far.

If you remove Section 230, that goes away. As discussed above, companies then are much more limited in the kinds of moderation they can do, which means users have a worse experience and advertisers have a worse experience, leading to reputational harm.

Today, companies already try to remove or diminish the power of electoral interference. That’s a giant part of trust & safety teams’ efforts. But they can really only do it safely because of 230.

The attention-grooming model fostered by Section 230 leads to stupendous quantities of poor-quality data. While an AI model can tolerate a significant amount of poor-quality data, there is a limit. It is unrealistic to imagine a society mediated by mostly terrible communication where that same society enjoys unmolested, high-quality AI. A society must seek quality as a whole, as a shared cultural value, in order to maximize the benefits of AI. Now is the best time for the tech business to mature and develop business models based on quality.

I’ve read this paragraph multiple times, and I still don’t know what it’s saying. Section 230 does not lead to an “attention-grooming model.” That’s just how society works. And, then, when she says society must seek quality as a whole, given how many people are online, the only way to do that is with algorithms trying to make some sort of call on what is, and what is not, quality.

That’s how this works.

Does she imagine that without Section 230, algorithms will go away, but good quality content will magically rise up? Because that’s not how any of this actually works.

Again, there’s much more in her written testimony, and none of it makes any sense at all.

Her spoken testimony was just as bad. Rep. Bob Latta asked her about the national security claims (some of which were quoted above) and we got this word salad, none of which has anything to do with Section 230:

I think it’s important to realize that our internet is precisely unique because it’s so open and that makes it uniquely vulnerable to all sorts of cyber attacks. Just this week, we saw an extraordinarily complicated plot that is most likely done by China, Russia or North Korea that could have blown up the internet as we know it. If you want to look up XZ Utils, Google that and you’ll find all kinds of details. They’re still sorting out what the intention was. It’s extraordinarily sophisticated though, so I think that the idea that we have a Chinese company where data on American children is being stored and potentially utilized in China, can be used to influence our children. It can be used in any number of ways no matter what they tell you. So I very much support and applaud the legislation to repeal, not to repeal, but to end TikToks operations in the United States.

The national security implications are extraordinary. Where the data is stored is so important and how it can be used to manipulate and influence us is so important. And I think the next frontier that I’ll conclude with this, for warfare, is in cyberspace. It’s where weak countries have huge advantages. They can pour resources into hackers who could really blow up our infrastructure, our hospitals, our universities. They’re even trying to get, as you know, into the House. This House right here. So I think repealing Section 230 is connected to addressing a host of potential harms

Nothing mentioned in there — from supply chain attacks like xz utils, to a potential TikTok ban, to hackers breaking into hospitals — has anything whatsoever to do with Section 230. She just throws it in at the end as if they’re connected.

She also claimed that Eric Schmidt has come out in favor of “repealing Section 230,” which was news to me. It also appears to be absolutely false. I went and looked, and the only thing I can find is a Digiday article which claims he called for reforms (not a repeal). The article never actually quotes him saying anything related to Section 230 at all, so it’s unclear what (if anything) he actually said. Literally the only quotes from Schmidt are old man stuff about how the kids these days just need to learn how to put down their phones, and then something weird about the fairness doctrine. Not 230.

Later in the hearing, she was asked about the impact on smaller companies (some of which I mentioned above) and again demonstrated a near-total ignorance of how any of this works:

There is some concern, it’s sometimes expressed from small businesses that they are going to be the subject of frivolous lawsuits, defamation lawsuits, and they can be sued out of business even though they’ve defamed no one. I’m less concerned about that because if we were to repeal section (c)(1) of Section 230 of those 26 words, I think the First Amendment would govern and we would develop the jurisprudence to deal with small business in a more refined way. I think if anything, small businesses are in a better position to control and oversee what’s on their platforms than these monolithic large companies we have today. So with a bit of caution, I think that could be addressed.

The First Amendment always governs. But Section 230 is the “more refined way” we’ve already developed to protect small businesses. Its main function is to get cases that would be long and costly to defend under the First Amendment tossed out much earlier, at the motion-to-dismiss stage. That is literally the point of Section 230.

If you had to fight it out under the First Amendment, you’re talking about hundreds of thousands of dollars and a much longer case. And that cost is going to lead companies to (1) refuse to host lots of protected content, because it’s not worth the hassle, and (2) be much more open to pulling down any content that anyone complains about.

This is not speculative. There have been studies on it. Weaker intermediary liability protections always lead to massive overblocking. If Stanger had done her research, or even understood any of this, she would know that.

So why is she the one testifying before Congress?

I’ll just conclude with this banger, which was her final statement to Congress:

I just want to maybe take you back to the first part of your question to explain that, which I thought was a good one, which is that we have a long history of First Amendment jurisprudence in this country that in effect has been stopped by Section 230. In other words, if you review, if you remove (c)(1), that First Amendment jurisprudence will develop to determine when it is crime fire in a crowded theater, whether there’s defamation, whether there’s libel. We believe in free speech in this country, but even the First Amendment has some limits put on it and those could apply to the platforms. We have a strange situation right now if we take that issue of fentanyl that we were discussing earlier, what we have right now is essentially a system where we can go after the users, we can go after the dealers, but we can’t go after the mules. And I think that’s very problematic. We should hold the mules liable. They’re part of the system.

Yeah. So. She actually went with the whole “fire in a crowded theater” thing. That’s the dead-on giveaway that the speaker has no clue about the First Amendment. The line is dicta from a century-old case, one that is no longer considered good law and hasn’t been for decades. Even worse, that dicta came in a case about jailing war protestors.

She also trots out yet another of the tropes that Ken “Popehat” White (an actual First Amendment expert) finds most annoying in people who opine on the First Amendment without understanding it: the idea that because the First Amendment has some limits, this new limit must be okay too. That’s not how it works. As Ken and others have pointed out, the exceptions to the First Amendment are an established, known, and almost certainly closed set.

The Supreme Court has no interest in expanding that set. It refused to do so for animal crush videos, so it’s not going to magically do it for whatever awful speech you think it should limit.

Anyway, it was a shame that Congress chose to hold a hearing on Section 230 and only bring in witnesses who hate Section 230. Not a single witness who could explain why Section 230 is so important was invited. Even worse, Congress handed one of the three witness spots to someone spewing word-salad nonsense that made no sense at all, was often factually incorrect (in hilariously embarrassing ways), and betrayed a near-total unawareness of how any of the relevant pieces actually work.

Do better, Congress.

Government Is Snooping on Your Phone

John Stossel holds a cellphone in front of an enlarged smart phone screen | Stossel TV

The government and private companies spy on us.

My former employee, Naomi Brockwell, has become a privacy specialist. She advises people on how to protect their privacy.

In my new video, she tells me I should delete most of the apps on my phone.

I push back. I like that Google knows where I am and can recommend a "restaurant near me." I like that my Shell app lets me buy gas (almost) without getting out of the car.

I don't like that government gathers information about me via my phone, but so far, so what?

Brockwell tells me I'm being dumb because I don't know which government will get that data in the future.

Looking at my phone, she tells me, "You've given location permission, microphone permission. You have so many apps!"

She says I should delete most of them, starting with Google Chrome.

"This is a terrible app for privacy. Google Chrome is notorious for collecting every single thing that they can about you…[and] broadcasting that to thousands of people…auctioning off your eyeballs. It's not just advertisers collecting this information. Thousands of shell companies, shady companies of data brokers also collect it and in turn sell it."

Instead of Chrome, she recommends a browser called Brave. It's just as good, she says, but it doesn't collect all the information that Chrome does. It's slightly faster, too, because it doesn't slow down to load ads.

Then she says, "Delete Google Maps."

"But I need Google Maps!"

"You don't." She replies, "You have an iPhone. You have Apple Maps…. Apple is better when it comes to privacy…. Apple at least tries to anonymize your data."

Instead of Gmail, she recommends more private alternatives, like Proton Mail or Tuta.

"There are many others." She points out, "The difference between them is that every email going into your inbox for Gmail is being analyzed, scanned, it's being added to a profile about you."

But I don't care. Nothing beats Google's convenience. It remembers my credit cards and passwords. It fills things in automatically. I tried the Brave browser but, after a week, switched back to Chrome. I like that Google knows me.

Brockwell says that I could import my credit cards and passwords to Brave and autofill there, too.

"I do understand the trade-off," she adds. "But email is so personal. It's private correspondence about everything in your life. I think we should use companies that don't read our emails. Using those services is also a vote for privacy, giving a market signal that we think privacy is important. That's the only way we're going to get more privacy."

She also warns that even apps like WhatsApp, which I thought were private, aren't as private as we think.

"WhatsApp is end-to-end encrypted and better than standard SMS. But it collects a lot of data about you and shares it with its parent company, Facebook. It's nowhere near as private as an app like Signal."

She notices my Shell app and suggests I delete it.

Opening the app's "privacy nutrition label," something I never bother reading, she points out that I give Shell "your purchase history, your contact information, physical address, email address, your name, phone number, your product interaction, purchase history, search history, user ID, product interaction, crash data, performance data, precise location, coarse location."

The list goes on. No wonder I don't read it.

She says, "The first step before downloading an app, take a look at their permissions, see what information they're collecting."

I'm just not going to bother.

But she did convince me to delete some apps, pointing out that if I want the app later, I can always reinstall it.

"We think that we need an app for every interaction we do with a business. We don't realize what we give up as a result."

"They already have all my data. What's the point of going private now?" I ask.

"Privacy comes down to choice," She replies. "It's not that I want everything that I do to remain private. It's that I deserve to have the right to selectively reveal to the world what I want them to see. Currently, that's not the world."


The post Government Is Snooping on Your Phone appeared first on Reason.com.
