On Sunday, Procreate announced that it will not incorporate generative AI into its popular iPad illustration app. The decision comes in response to an ongoing backlash from some parts of the art community, which has raised concerns about the ethical implications and potential consequences of AI use in creative industries.
"Generative AI is ripping the humanity out of things," Procreate wrote on its website. "Built on a foundation of theft, the technology is steering us toward a barren future."
In a video posted on X, Procreate CEO James Cuda laid out his company's stance, saying, "We’re not going to be introducing any generative AI into our products. I don’t like what’s happening to the industry, and I don’t like what it’s doing to artists."
If good intentions created good laws, there would be no need for congressional debate.
I have no doubt the authors of this bill genuinely want to protect children, but the bill they've written promises to be a Pandora's box of unintended consequences.
The Kids Online Safety Act, known as KOSA, would impose an unprecedented duty of care on internet platforms to mitigate certain harms associated with mental health, such as anxiety, depression, and eating disorders.
While proponents claim that the bill is not designed to regulate content, imposing on internet platforms a duty of care tied to mental health can only lead to one outcome: the stifling of First Amendment–protected speech.
Today's children live in a world far different from the one I grew up in, and I'm the first in line to tell kids to go outside and "touch grass."
With the internet, today's children have the world at their fingertips. That can be a good thing—just about any question can be answered by finding a scholarly article or how-to video with a simple search.
While doctors' and therapists' offices close at night and on weekends, support groups are available 24 hours a day, 7 days a week, for people who share similar concerns or have had the same health problems. People can connect, share information, and help each other more easily than ever before. That is the beauty of technological progress.
But the world can also be an ugly place. Like any other tool, the internet can be misused, and parents must be vigilant in protecting their kids online.
It is perhaps understandable that those in the Senate might seek a government solution to protect children from any harms that may result from spending too much time on the internet. But before we impose a drastic, first-of-its-kind legal duty on online platforms, we should ensure that the positive aspects of the internet are preserved. That means we have to ensure that First Amendment rights are protected and that these platforms are provided with clear rules so that they can comply with the law.
Unfortunately, this bill fails to do that in almost every respect.
As currently written, the bill is far too vague, and many of its key provisions are completely undefined.
The bill effectively empowers the Federal Trade Commission (FTC) to regulate content that might affect mental health, yet KOSA does not explicitly define the term "mental health disorder." Instead, it references the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders…or "the most current successor edition."
Written that way, not only would someone looking at the law not know what the definition is, but even more concerning, the definition could change without any input from Congress whatsoever.
The scope of one of the most expansive pieces of federal tech legislation could drastically change overnight, and Congress may not even realize it until after it has already happened. None of the people's representatives should be comfortable with a definition that effectively delegates Congress's legislative authority to an unaccountable third party.
Second, the bill would impose an unprecedented duty of care on internet platforms to mitigate certain harms, such as anxiety, depression, and eating disorders. But the legislation does not define what is considered harmful to minors, and everyone will have a different belief as to what causes harm, much less how online platforms should go about protecting minors from that harm.
The sponsors of this bill will tell you that they have no desire to regulate content. But the requirement that platforms mitigate undefined harms belies that claim; in effect, the bill regulates online content. Imposing a "duty of care" on online platforms to mitigate harms associated with mental health can only lead to one outcome: the stifling of constitutionally protected speech.
For example, if an online service uses infinite scrolling to promote Shakespeare's works, or algebra problems, or the history of the Roman Empire, would any lawmaker consider that harmful?
I doubt it. And that is because website design does not cause harm. It is content, not design, that this bill will regulate.
The world's most well-known climate activist, Greta Thunberg, famously suffers from climate anxiety. Should platforms stop her from seeing climate-related content because of that?
Under this bill, Greta Thunberg would have been considered a minor, and she could have been prevented from engaging online in the very debates that made her famous.
Anxiety and eating disorders are two of the undefined harms that this bill expects internet platforms to prevent and mitigate. Are those sites going to allow discussion and debate about the climate? Are they even going to allow discussion about a person's story overcoming an eating disorder? No. Instead, they are going to censor themselves, and users, rather than risk liability.
Would pictures of thin models be tolerated, lest it result in eating disorders for people who see them? What about violent images from war? Should we silence discussions about gun rights because it might cause some people anxiety?
What of online discussion of sexuality? Would pro-gay or anti-gay discussion cause anxiety in teenagers?
What about pro-life messaging? Could pro-life discussions cause anxiety in teenage mothers considering abortion?
In truth, this bill opens the door to nearly limitless content regulation, as people can and will argue that almost any piece of content could contribute to some form of mental health disorder.
In addition, financial concerns may cause online forums to eliminate anxiety-inducing content for all users, regardless of age, if the expense of policing teenage users is prohibitive.
This bill does not merely regulate the internet; it threatens to silence important and diverse discussions that are essential to a free society.
And who is empowered to help make these decisions? That task is entrusted to a newly established speech police. This bill would create a Kids Online Safety Council to help the government decide what constitutes harm to minors and what platforms should have to do to address that harm. These are the types of decisions that should be made by parents and families, not unelected bureaucrats serving as a Censorship Committee.
Those are not the only deficiencies of this bill. The bill seeks to protect minors from beer and gambling ads on certain online platforms, such as Facebook or Hulu. But if those same minors watch the Super Bowl or the PGA Tour on TV, they would see those exact same ads.
Does that make any sense? Should we prevent online platforms from showing kids the same content they can and do see on TV every day? Should sports viewership be effectively relegated to the pre-internet age?
And even if it were possible to shield minors from every piece of content that might cause anxiety, depression, or eating disorders, that still would not be enough to comply with KOSA. That is because KOSA requires websites to treat differently individuals that the platform knows or should know are minors.
That means that platforms that earnestly try to comply with the law could be punished because the government thinks they "should" have known a user was a minor.
This bill, then, does not just apply to minors. A should-have-known standard makes KOSA an internet-wide regulation, and the only practical way to comply with such a law is for platforms to verify the age of every user.
So adults and minors alike better get comfortable with providing a form of ID every time they go online. This knowledge standard destroys the notion of internet privacy.
I've raised several questions about this bill. But no one, not even the sponsors of the legislation, can answer those questions honestly, because they do not know the answers. They do not know how overzealous regulators or state attorneys general will enforce the provisions in this bill. They do not know what rules the FTC may come up with to enforce its provisions.
The inability to answer those questions is the result of several vague provisions of this bill. Once enacted into law, those questions will not be answered by the elected representatives in Congress; they will be answered by bureaucrats who are likely to empower themselves at the expense of our First Amendment rights.
There are good reasons to think that the courts will strike down this bill, and they would have a host of grounds for doing so. Vagueness pervades it: the most meaningful terms are undefined, making compliance nearly impossible. Even if we discount the many and obvious First Amendment violations inherent in this bill, the courts will likely find it void for vagueness.
But we should not rely on the courts to save America from this poorly drafted bill. The Senate should have rejected KOSA and forced the sponsors to at least provide greater clarity in their bill. The Senate, however, was dedicated to passing KOSA despite its deficiencies.
KOSA contains too many flaws for any one amendment to fix the legislation entirely. But the Senate should have tackled the most glaring problem with KOSA—that it will silence political, social, and religious speech.
My amendment merely stated that no regulations made under KOSA shall apply to political, social, or religious speech. It was intended to address the legitimate concern that this bill threatens free speech online. If the supporters of this legislation really did want to leave content alone, they would have welcomed and supported my amendment to protect political, social, and religious speech.
But that is not what happened. The sponsors of the bill blocked my amendment from consideration, and the Senate was prevented from taking a vote to protect speech.
That should be a lesson about KOSA. The sponsors did not just silence debate in the Senate. Their bill will silence the American people.
KOSA is a Trojan horse. It purports to protect our children, but it claims a nearly limitless power to regulate speech and would deprive them of the benefits of the internet, which include engaging with like-minded individuals, expressing themselves freely, and participating in debates with others who hold different opinions.
Opposition to this bill is bipartisan, coming from advocates on both the right and the left.
A pro-life organization, Students for Life Action, commented on KOSA, stating, "Once again, a piece of federal legislation with broad powers and vague definitions threatens pro-life speech…those targeted by a weaponized federal government will almost always include pro-life Americans, defending mothers and their children—born and preborn."
Students for Life Action concluded its statement: "Already the pro-life generation faces discrimination, de-platforming, and short and long term bans on social media on the whims of others. Students for Life Action calls for a No vote on KOSA to prevent viewpoint discrimination from becoming federal policy at the FTC."
The ACLU brought more than 300 high school students to Capitol Hill to urge Congress to vote no on KOSA because, to quote the ACLU, "it would give the government the power to decide what content is dangerous to young people, enabling censorship and endangering access to important resources, like gender identity support, mental health materials, and reproductive healthcare."
Government mandates and censorship will not protect children online. The internet may pose new problems, but there is an age-old solution to this issue. Free minds and parental guidance are the best means to protect our children online.
Hannah Neeleman is a mother of eight, a beauty queen, a former Juilliard ballerina, and one of the most popular "momfluencers" on social media. She lives on a Utah ranch with her husband, JetBlue Airways heir Daniel Neeleman, and puts out both copious content and pasture-raised meat under the moniker Ballerina Farm. For years, their photogenic Mormon family has been amassing Instagram and TikTok followers—along with ample scrutiny and scorn from certain sorts of progressive-leaning, extremely online women. And these sorts were served a feast last month in the form of a London Times profile, which posited not-at-all-subtly that Hannah was being controlled and coerced by Daniel.
The profile was a little weird and the responses to it weirder. But they are also emblematic of something that goes way beyond Ballerina Farm: an inability to imagine women having different values, different politics, and different ambitions. And a refusal to accept that women may be happy leading all different sorts of lives.
Trad-Wife Tragedy
Times writer Megan Agnew clearly had an opinion about the Neeleman family's dynamics and framed her article to maximize the chances of readers coming away with the same opinion. That's not a journalistic crime by any means—the best profiles often inject some of the writer's own insight. But, to me, Agnew's insights felt shoehorned, and not entirely convincing. The quotes and anecdotes she wielded could betray a patriarchal arrangement in which Hannah is a not-so-enthusiastic participant. Yet there were lots of ways to read them that didn't support such a conclusion, and that's not to mention all the quotes and anecdotes that Agnew necessarily left out.
But freed from what? Hannah has a life that many dream of, it seems. She may not be a professional ballerina, but she still has a highly successful career and a level of fame she likely never would have earned from ballet. She has a beautiful home, a wealthy husband, and eight healthy children whom she gets to raise in a spectacular setting an hour from where she grew up, in a family that looks a lot like the one she has now (Hannah was one of nine children).
The interpretations of one journalist who spent a few hours with the family and a cornucopia of strangers' speculation aside, signs suggest Neeleman is happily living the life she wants to be living. It is highly weird to act like the fact that she once dreamed of being a pro ballerina means she's unhappy in any other lifestyle or that she didn't have other ambitions, too (especially since she has also talked about how she always wanted a big family).
Could Hannah be secretly miserable? Sure. But so could anyone.
Poor Little Political Wives
Reactions to Hannah Neeleman conjure that classic second-wave feminist trope: false consciousness. Sure, she says she is happy, fulfilled, and in control of her own destiny—but internet feminists know better. Clearly her claims are either an act (perhaps produced under the duress of a manipulative husband) or the result of being raised in a Mormon household. The poor dear can't even see how oppressed she is!
The Ballerina Farm discourse echoes recent reactions regarding Usha Vance, wife of Republican vice presidential candidate J.D. Vance.
Usha and J.D. met at Yale Law School. Usha also has an undergraduate degree from Yale and a master's degree from Cambridge. Until recently, she was a lawyer with one of the country's top law firms. At the Republican National Convention, she appeared confident and excited as she talked about her husband's candidacy and about their life together, which includes three children. Vance has, on numerous occasions, credited Usha for helping drive and shape him.
By all indications, Usha is an intelligent and accomplished woman who backs her husband's political career. Yet Usha, too, received the Hannah Neeleman treatment after her husband joined the Donald Trump ticket.
People began sharing images in which Usha was not smiling or looked sad, as if this were proof that she disapproved of her husband's career, or worse.
The comments about Usha Vance echoed a 2016 election-era refrain: "Free Melania." There were a lot of people then convinced, or at least opining, that Melania Trump wanted no part in her husband's political schemes and was a tragic figure trapped in a loveless and controlling marriage.
I won't pretend to know exactly what's going on between the former president and first lady. But the idea that Melania couldn't leave if she wanted to defies logic. The Melania who is literally trapped is a fiction, invented to further demonize Trump and/or deny that she is culpable in the creation of the life they both lead.
Voting for Harris Is 'in Everyone's Best Interest'
Shades of the same attitude driving this weird anti–fan fic about Usha Vance, Hannah Neeleman, and Melania Trump were detectable during a White Women for Kamala Harris call last week.
During that call, author Glennon Doyle posited that the reason many white women are afraid to publicly support Harris and/or other Democratic candidates is fear of being disliked, chastised, or looked down upon. White women don't want to make neighbors "uncomfortable," and they "desperately need to be approved of and liked," Doyle said.
Meanwhile, Shannon Watts, who organized the call, suggested that many white women vote Republican because they believe "that it is in our best interest to use our privilege and our support systems of white supremacy and the patriarchy to benefit us."
Voting for Harris is really what's "in everyone's best interest," said Watts.
This sort of rhetoric was common when Hillary Clinton was running for president and again after the election, when it came out that a majority of white women voters cast their ballots for Trump. Is there no room for imagining that some women might just be conservatives and/or dislike the Democratic candidate?
Can't Women Be Individuals?
In the construction of victimhood narratives around Hannah Neeleman, Usha Vance, and Melania Trump, there is an element of projection that is pre-political. Maybe it's rooted in jealousy, anxiety, revulsion, or anger. But for whatever reason, some people seemingly want to believe these women are unhappy. Perhaps it helps them get over their jealousy, or feel better about their own life choices, or feel there's still justice in the world—who knows? But it's clearly not based solely on the evidence laid before us.
The other thread underpinning some attitudes toward Melania Trump, Usha Vance, Hannah Neeleman, and any women who won't vote Democrat is a denial of conservative women's agency.
And while this thread has implications for politics, it also seems born of a realm outside of them. It's the inability—displayed here by the left, but also visible across the political spectrum—to imagine people genuinely believing in things different from what you believe.
In the political realm, this manifests as a conviction that support for different candidates and different policies doesn't come down to a million different factors and values and vibes but stupidity, brainwashing, coercion, and cowardice. Men get this treatment sometimes, too, but it's much more commonly aimed at women.
On the left, this manifests as utter disbelief that women like Hannah Neeleman and Usha Vance could be happy co-pilots in the lives they and their husbands are leading. Or as an insistence that the only reason women would oppose Harris is because they're trying to suck up to or benefit from white supremacy and patriarchy. On the right, we sometimes see it manifested as an assertion that female politicians, high-powered working women, feminist activists, etc., only speak out against conservative policies because they're bitter about their own lives.
Both sides do this to the peril of their own persuasive efforts. You won't win people over by telling them, "You may think you're happy, or expressing true convictions, but you're actually just a cog in cultural Marxism or white supremacist patriarchy."
What makes this especially weird coming from the left is that left-leaning women tend to do this under the mantle of feminism.
But it's not actually feminist to paint all women with one brushstroke. Women are not and will never be a monolith—not in their politics, their professional leanings, their preferred relationship styles, or anything else. Women are happy in as many different types of arrangements as men are, and as capable of choosing for themselves. Conversely, not every woman bristles at the kind of things that make some feminists bristle, including having a horde of children or moderating one's career plans to make this possible.
The sooner self-proclaimed feminists can see women as individuals—including sometimes very flawed individuals—the sooner we'll all be seeing women leading more free and full lives, in all their weird and messy and dazzling forms.
A "White Dudes for Harris" Zoom call reportedly raised $4 million in donations for Vice President Kamala Harris' presidential campaign. After the call, the @dudes4Harris account on X was briefly suspended.
Is this election interference?
If we remain in reality, the answer is of course not.
Even if X CEO Elon Musk ordered the account suspended because of its politics, there would be no (legal) wrongdoing here. X is a private platform, and it doesn't have any obligation to be politically neutral. Explicitly suppressing pro-Harris content would be a bad business model, surely, but it would not be illegal. Musk and the platform formerly known as Twitter have no obligation to equally air conservative and progressive views or give equal treatment to Republican and Democratic candidates.
But there's no evidence that X was deliberately trying to thwart Harris organizers. The dudes4Harris account—which has no direct affiliation with the Harris campaign—was suspended after it promoted and held its Zoom call and was back the next day. That's a pretty bad plan if the goal was to stop its influence or fundraising. And there are all sorts of legitimate reasons why X may have suspended the account.
The account's suspension is "not that surprising," writes Techdirt Editor in Chief Mike Masnick (who, it should be noted, is intensely critical of X policies and Musk himself on many issues). "Shouldn't an account suddenly amassing a ton of followers with no clear official connection to the campaign and pushing people to donate maybe ring some internal alarm bells on any trust and safety team? It wouldn't be a surprise if it tripped some guardwires and was locked and/or suspended briefly while the account was reviewed. That's how this stuff works."
If we step out of reality into the partisan hysteria zone, however, then the account's temporary suspension was clearly an attempt by Musk to sway the 2024 election.
"Musk owns this platform, has endorsed [former President Donald] Trump, is deep into white identity grievance, and just shut down the account that was being used to push back against his core ideology and raise money for Trump's opponent. This is election interference, and it's hard to see it differently," posted political consultant Dante Atkins on X.
"X has SUSPENDED the White Dudes for Harris account (@dudes4harris) after it raised more than $4M for Kamala Harris. This is the real election interference!" Brett Meiselas, co-founder of the left-leaning MeidasTouch News, posted.
Versions of these sentiments are now all over X—which has also been accused of nefariously plotting against the KamalaHQ account and photographer Pete Souza. Some have even gone so far as to suggest that Musk is committing election interference merely by sharing misinformation about Harris or President Joe Biden, or by posting pro-Trump information from his personal account.
We're now firmly in "everything I don't like is election interference" territory. And we've been here before. In 2020, when social media platforms temporarily suppressed links to a story about Hunter Biden or suspended some conservative accounts, it was conservatives who cried foul, while many on the left mocked the idea that this was a plot by platforms to shape the election. Now that the proverbial shoe is on the other foot, progressives are making the same arguments that conservatives did back then.
Musk himself is not immune to this exercise in paranoia and confirmation bias. For whatever reason, Google allegedly wouldn't auto-populate search results with "Donald Trump" when Musk typed in "President Donald." So Musk posted a screenshot about this, asking "election interference?"
Again, in reality: no.
As many have pointed out, Google Search does indeed still auto-populate with Trump for them. So whatever was going on here may have simply been a temporary glitch. Or it may have been something specific to things Musk had previously typed into search.
Even if Google deliberately set out not to have Trump's name auto-populate, it wouldn't be election interference. It would be a weird and questionable business decision, not an illegal one. But the idea that the company would risk the backlash just to take so petty a step is silly. Note that Musk's allegation was not that Google was suppressing search results about Trump, just the auto-population of his name. What is the theory of action here—that people who were going to vote for Trump wouldn't after having to actually type out his name into Google Search? That they somehow wouldn't be able to find information about Trump without an auto-populated search term?
"Please. I beg of people: stop it. Stop it with the conspiracy theories," writes Masnick. "Stop it with the nonsense. If you can't find something you want on social media, it's not because a billionaire is trying to influence an election. It might just be because some antifraud system went haywire or something."
Yes. All of that.
But I suspect a lot of people know this and just don't care. Both sides have learned how to weaponize claims of election interference to harness attention, inspire anger, and garner clout.
Just a reminder: Actual election crimes include things like improperly laundering donations, trying to prevent people from voting, threatening people if they don't vote a certain way, providing false information on voter registration forms, voting more than once, or being an elected official who uses your power in a corrupt way to benefit a particular party or candidate. Trying to persuade people for or against certain candidates does not qualify, even if you're really rich or famous and even if your persuasion relies on misinformation.
So if you feel yourself wanting to fling claims of election interference at X, or Google, or Meta, or some other online platform: stop. Calm down. Take a breath, take a walk, whatever. This is a moral panic. Do not be its foot soldier.
More Sex & Tech News
• The Kids Online Safety Act passed the Senate by a vote of 91-3 yesterday. Sens. Rand Paul (R–Ky.), Ron Wyden (D–Ore.), and Mike Lee (R–Utah) were the only ones who voted against it. (See more of this newsletter's coverage of KOSA here, here, and here.)
New York is introducing two new laws designed to better protect kids online.
The first law would limit feeds to followed accounts, turning off automatic suggestions. The second law would limit the data collected around minors.
Both laws are likely to face opposition in the near future, as not everyone agrees with the approach taken by the state of New York.
Social media is notoriously addictive, especially for young users like children and teenagers. While the US government has shown some interest in protecting our youth online, there hasn’t been much federal progress. As a result, several states have stepped up with their own laws, with New York being the latest to introduce legislation.
Today, Governor Kathy Hochul signed two new bills into law. The first, called the Stop Addictive Feeds Exploitation (SAFE) for Kids Act, requires parental consent for “addictive feeds” in apps. Currently, most social media apps automatically suggest content through custom algorithms. Under this new law, minors will only see videos from accounts they follow, unless they have parental approval for automatic suggestions. The law also prevents platforms from sending notifications about suggested posts to minors between midnight and 6 am, unless there is verifiable parental consent. The next step is to create a system to verify a user’s age and parental consent status. Once the rules are finalized, social media companies will have 180 days to integrate the new regulations into their apps. Companies that fail to comply could face fines of $5,000 per violation.
The second bill, the New York Child Data Protection Act, limits the data platforms can collect on minors without consent and restricts the sale of such data. This law is set to take effect next year.
The laws have received a mixed reception, reflecting the political divide. While there is bipartisan agreement on the need for better online protection for children, the methods to achieve it differ. This division is why federal proposals like the Kids Online Safety Act have stalled. Conservatives often oppose proposals requiring age verification that involves real IDs, fearing government tracking and privacy breaches. Liberals, meanwhile, largely worry that such laws could restrict access to important resources for marginalized groups like the LGBTQ+ community, echoing concerns about educational laws and book bans in conservative states.
These new laws are likely to face significant challenges. In fact, they already are. The industry association NetChoice has sued California over a similar law, the Age-Appropriate Design Code, which was ultimately blocked in court. The judge argued that the law could negatively impact data collection across all ages due to implementation difficulties. NetChoice has already criticized New York’s SAFE for Kids Act as unconstitutional, claiming it could “increase children’s exposure to harmful content by requiring websites to order feeds chronologically, prioritizing recent posts about sensitive topics.” It seems like a lawsuit is all but inevitable here too.
It’s uncertain how these issues will unfold in New York courts, but it’s clear that the new laws are in for a tough journey.
I will now admit that Twitter is dead. I have reached the acceptance stage of my grief.
I still refer to X.com as Twitter out of habit, but the spirit of my refusal to call it X.com… despite the obvious ridiculousness of the name… lived in the hope that Elon was running a business and would make sound business decisions, that perhaps he wouldn’t be a complete tool and ruin a good thing just to stroke his own ego and promote his favorite flavors of white nationalist rhetoric… or possibly just to stop that one guy from tracking his private jet.
There is no going back. That particularly racist white toothpaste isn’t going back into the tube.
So that brings me back to another episode of how things are going in the land of things that once were, or would like to be, Twitter. For Twitter is still the ideal, the archetype, the goal for all of them, though the road to getting there looks very different for each.
Elon previously suggested in a post that Instagram was like a strip club… and he’s not wrong on that at least. But suggesting that our family photos belong on his site then turning around and changing the rules to allow porn? All in a day’s work for Chief Hypocrite and King of the Incels Elon Musk!
Yes, X.com is now a strip club by design.
I am told Elon picked this logo so he could wear his jacket with it
X.com does keep booming, at least relative to its chief rivals. Or so it seems. Both Elon and Zuck fudge their numbers, so it is hard to tell. Meanwhile Elon stooge Linda Yaccarino has to keep claiming progress for the site, so she has gone all 1984 on us and started posting about how before Elon the site could only be accessed via Gopher and all messages were in Morse code, so X.com can look good via the false comparison… I mean, when she can stop promoting Tesla. You might wonder who is paying her… but then they aren’t looting Twitter to fund Elon’s AI venture the way they are Tesla.
If I could get maybe 15-20 accounts to move to another service… and halt the flow of people wandering back… I could leave X.com behind. Yes, I have 10x more followers on X.com than on any competitor, and I get more every day. But most of my old followers are inactive accounts and most of the new ones have *N*U*D*E*S**I*N**B*I*O*.
Though I guess with the rule change they can just say “NUDES IN BIO” without trying to dodge the censors. As Kurt Vonnegut repeated so often in Breakfast of Champions, “Wide open beavers”… or whatever it was. I read it back in the 80s, leave me alone.
On my desktop, in Firefox, with uBlock Origin running, and staying strictly in the Following tab, X.com is usable and gets me the info I am looking for. I even see people I follow who have sworn off X.com back and posting, because it is hard to give up the level of engagement and followers. And in that state I can pretend it is still Twitter if I squint my eyes and stay away from replies by Blue Checkmarks.
And then I am sitting on the couch and look at it on my iPad in Elon’s app and… Good Lord, what is going on in there?! Ads for crypto, white nationalism, and Trump have supplanted Cheech & Chong edibles, block no longer blocks people but is just a soft mute that lets them continue to harass you, you can no longer block noxious advertisers at all, and the algorithm pushes all the most noxious content straight at you, with Elon the king of the shit pile.
I mean, followers and engagement are cool and all, and it is fun to watch Liam Nissan troll the Nazis… oh, and Tom Nichols is back… but it has also broken some people. There are a few people I had to unfollow because they clearly felt X.com was reality and had to fight every battle. The block button is there for a reason, people… oh, wait, they broke that, didn’t they?
Anyway, the grand unifying conspiracy theory about X.com right now is that Elon, already an emerald mine racist nepo baby, is going all in on Trump support on the site to woo Trump’s favor in the hope of getting pardons when the time comes… for things like looting Tesla to fund his AI venture. The one thing that is for sure is that the only speech Elon was ever interested in was his own.
In spite of all that, some pundits have declared X.com is still the center of the debate, and it is hard to gainsay their point. Plus… you know… porn. Porn always wins in the end.
BlueSky – As Bad as 2010 Twitter?
Perpetual pedantic grump Tom Nichols, who as noted above has been spotted back on X.com, suggested the other day that BlueSky was as bad as Twitter… but specifically 2010 Twitter, which is one of those very Tom Nichols things where he has an extremely narrow and specific meaning and context in his head that he won’t share, that nobody else could possibly understand, and that becomes a hill he plans to die on. This habit was probably best exemplified when Tom spent several years fighting against calling Trump a Fascist because real Fascism must come from the Fascismo-Romagna region of Italy, otherwise it is merely Sparkling Totalitarianism. But I digress.
Bluesky? I don’t think this is the logo anymore…
I wish BlueSky was as bad as 2010 Twitter, because 2010 Twitter was pretty fun in my memory… a lot more fun than BlueSky. Even Jack Dorsey says BlueSky is making all the same mistakes as early Twitter… we should be so lucky… but it just can’t quite become Twitter.
Also, Jack left the BlueSky board and is also on X.com buying in on whatever Elon is selling because whatever worm was eating RFK Jr.’s brain has apparently afflicted him as well.
Instead BlueSky is where the very serious people have gone to escape the other sites, but where they all can’t stop talking about those other sites. Seriously, I swear if mentioning or posting pictures from X.com was banned, half the posts would disappear and we’d be left with complaints about Threads and Mastodon. Nobody takes Threads seriously on BlueSky and everybody apparently was stridently lectured once too often about some aspect of Fediverse etiquette on Mastodon and left in a huff because… their sarcasm and wit were not up to the challenge? They couldn’t figure out how to block people? They too have feet of clay? Anyway, they seem to be universally upset at not being welcomed by a cheering crowd for having deigned to join.
Still, for the slim thread of content that isn’t complaining about or reacting to content on the other sites, BlueSky is pretty good. It can go very heavy on politics with very little interest in entertainment, so it lacks the diversity of topics that made Twitter great somewhere between 2010 and Elon, but it could get there.
And some people are trying to help get it there… though I am not sure their efforts are all that effective. I call this the “Neil Gaiman Problem.” I like Neil Gaiman. He is interesting and on BlueSky, so I followed him. Neil Gaiman would very much like BlueSky to succeed so is putting in the effort by interacting with his followers. That means I can look at BlueSky and see 47 messages in a row that are Neil Gaiman replying with a bland pleasantry to every person who responded to something he posted. That, I fear, does not make BlueSky very interesting.
Basically, BlueSky could be good at some point, but it is still getting there.
Threads – Ending is better than mending
Happily news-free content since May 2024!
I know this isn’t the Threads logo anymore
Threads is not being taken seriously for good reason. To start with, it is very much Instagram for words, with the same sort of algorithm where you see something in your feed, but if you somehow refresh you’ll never find it again unless you are following the person who posted it and go to their profile. But most of the stuff in your feed is from randos that the algorithm throws at you… and all of it is brands and cat pictures and light, happy fare.
None of it is news, however. The head of Instagram, Adam Mosseri, is on Threads telling people that they are actively suppressing news content, and especially politically focused news content, because it makes people unhappy and distracts from the capitalism and the absolute need to foster desire for luxury goods and expensive vacations. He much prefers content from creatives and you should too! (Also, this might be Zuck trying to duck the election influence issue, since that would cost money he could otherwise be throwing at the Metaverse or AI or whatever he is on about lately.)
Just to make things even more banal, Threads is planning a swipe left/right option for content… Tinder for cat pictures and luxury goods… though I can’t remember which way means what and it likely won’t work correctly in the browser for another year if history is any guide.
But my greatest issues with Threads are that they only have a phone app that scales up badly to my iPad, and that they took the adequate initial web version and forced it to look and behave like the phone app, so it is freaking awful to use now. JFC, these people.
Mastodon – Still the Linux of Social Media
Still the refuge you’re looking for if you want no algorithm and a quiet little silo of people to interact with. Is that social media though? Is there such a thing as anti-social media? Limited social media? Siloed social media?
This can’t possibly be the Mastodon logo, can it?
The reputation it has for being filled with strident rule makers who will lecture you about how you violated their internal belief system with something you did or did not do is overblown, but not entirely undeserved. I find that the block button works… in both directions… so that takes care of you intruding on somebody else’s curated reality.
It is the site where, as a percentage, I interact with more of my actual followers… once we pare down the count from all the multiple follows from people who have changed servers… than on any of the other Twitter pretenders.
But that number seems to be about six. Six people make for a pretty quiet Discord server, much less a social media experience.
Yeah, I follow other people, lots of people, often people I follow on the other sites because I am not alone in spreading my bets in the hope of finding the Twitter replacement that best suits me. That means I see a lot of things on multiple sites… and my followers who do the same see my stuff in multiple locations. You know who you are. I like your stuff here and then over on BlueSky and sometimes again on X.com.
This situation stops at Threads because, as noted, nobody takes Threads seriously. Well, nobody who follows me elsewhere does. Molly Jong-Fast is trying to take Threads seriously for all of us… but it isn’t working. It is cat pictures and luxury goods and stolen memes all the way down.
And it probably says something about Mastodon that in the middle of writing about it I went off on Threads again. It is also dull, in its own special algorithm-free way. If that is what you like, you have found your place.
Spoutible – When One Topic is Enough
As a site Spoutible has some technical issues… I could never stay logged in and the site totally started breaking in Firefox, another victim of the “everybody uses Chrome” mentality of so many developers… so I eventually gave up on it.
One of these must have been the logo at some point, right?
But my persistence there for about 8 months was not rewarded with very much in the way of engagement. There was no room for video games, or entertainment in general, on Spoutible.
Instead it was all political… which wouldn’t be bad, but it was all very much anti-Trump memes. And, while I can very much get behind the sentiment, believing as I do that another term as president would be the end of democracy in the United States, I am not sure that goal is moved forward by participating in an echo chamber. An echo chamber with the right message is still an echo chamber, and I am already on board so don’t need constant reinforcement and reassurance.
Post.news – Ex Post Facto
Post News is dead, having failed to make the cut. It will be remembered as more dull than Mastodon and falling over literally any time Elon sneezed and half a dozen people tried to jump ship.
The logo is in there somewhere I think…
It was not ready for prime time and now it never will be.
Other Outliers
At one point Automattic was trying to promote Tumblr as a possible inheritor of the Twitter crown. I feel like anybody suggesting that had either never used Twitter or never used Tumblr. Also, one follower on Mastodon also follows me on Tumblr, where my posts go automatically because the same people who own WordPress own Tumblr as well… a fact which might point to the third alternative explanation: lack of a grip on reality.
Substack Notes… well, my opinion there hasn’t changed in a year. It sucked then, existing only as a way to promote your Substack, and I suspect it sucks now.
So that gets me through the options and… I feel like the only appropriate response is a standard internet meme.
I too do not know what I expected
I just wanted one platform to win out… one that wasn’t run by a horrible racist. But you don’t always get what you want. So it goes.
"We wanted to file a lawsuit that was specifically focused on free speech and the First Amendment from the creators' perspective, rather than some of the other, business-related concerns in other lawsuits," Brad Polumbo of BASEDPolitics tells me. "We also wanted to emphasize the political speech aspect, rather than other creators who are more in the mold of everyday 'influencers,' and show that right-leaning/non-liberal voices are being impacted by this as well."
Polumbo hopes the lawsuit will "help Republicans and conservatives see why this ban is inconsistent with the free speech values they say they care about."
TikTok Ban: Not Just Bad for Lifestyle Influencers or Leftists
BASEDPolitics is a nonprofit media organization run by Polumbo, Hannah Cox, and Jack Hunter. Its goal is to introduce young people "to the ideas of free market capitalism and individual liberty."
TikTok helps them reach audiences they likely wouldn't reach on other platforms, says Cox. "Both Brad and I have large platforms across social media, but TikTok offers a unique audience that can't be found elsewhere," she tells me. "Most on TikTok loathe Meta and X, so if they weren't on TikTok it's unlikely they'd engage meaningfully elsewhere. Their algorithm is also more open, and it enables us to reach many people who would never encounter us otherwise."
There's a popular perception that TikTok either isn't a place for political speech or is an asset only for left-leaning political speakers. But the BASEDPolitics team hasn't found this to be true at all.
"Anyone who thinks TikTok is all just frivolous content is probably not a user," says Polumbo. "There's substantive conversation happening on there on every issue under the sun, from religion to dating to politics." And while "TikTok is dominated by left-leaning content," it's also "a much more politically diverse ecosystem than many might think."
Their suit focuses not just on how a ban would negatively affect BASEDPolitics but on its larger repercussions for civil liberties.
"We felt the need to stand up as individuals who are using TikTok to effectively fight back against the government and educate others on the principles of free market capitalism, individual rights, and limited government," says Cox, who sees all sorts of "incredible work being done on TikTok—both politically and non politically."
"People are pushing back on war…they're questioning our monetary system, they're highlighting injustices carried out by our government," she says. "Outside of politics, TikTok is now the top search engine for young people. They're getting mental health resources from therapists, DIY help from retired grandpas, nutrition information they can't get from their health insurance and pharmaceutical companies. The list is endless."
Propaganda Is Free Speech
BASEDPolitics is being represented by the Liberty Justice Center. The suit seeks a declaration that the anti-TikTok law—officially known as the Foreign Adversary Controlled Applications Act—is unconstitutional and a block on the U.S. Attorney General enforcing it.
The law makes it illegal for Americans to "access, maintain, or update" apps linked to "foreign adversaries," a category that the measure defines to include TikTok. TikTok will be banned if TikTok parent company ByteDance does not sell it by January 19, 2025. The law also allows the president to declare other apps off limits (or force their sale) if they're based out of any country declared a foreign adversary or if anyone based in these countries owns a fifth or more of the app.
"The Act violates the First Amendment because it bans all speech on TikTok—even though all, or nearly all, of that speech is constitutionally protected," the Liberty Justice Center states in a press release. "The lawsuit also argues that lawmakers' justifications for the ban—national security and protecting Americans from propaganda—cannot justify the infringement on users' First Amendment rights, because there is no evidence that TikTok threatens national security or that a complete ban is necessary to address whatever threat it might pose. Furthermore, the lawsuit argues, the First Amendment does not allow the government to suppress 'propaganda,' which is simply speech."
Cox elaborates on this point in a video about the lawsuit, noting that people act like TikTok is unique because it could be linked to the Chinese Communist Party. Yet "you have tons of state-owned media that is available in the U.S.," points out Cox, citing the BBC and Russia Today as two examples.
In the U.S., we don't ban speech merely because another government—even one we find alarming—might endorse it. So even if some of the more speculative fears about China and TikTok are true, that should be no reason to ban it entirely.
Cox says this sort of thing is more befitting of "communist dystopias" such as North Korea.
There's been some (overhyped) concern about TikTok suppressing content that could offend Chinese authorities. But even if that's true, it wouldn't justify a ban either.
"As First Amendment supporters, we also support the legal right of TikTok as a private platform to ban or restrict whatever kinds of content it wants even if we personally resent their choices or think it's unfair," Polumbo adds.
Larger Anti-Speech and Anti-Tech Trends
"If enacted, this would constitute one of the most egregious acts of censorship in modern American history," Cox and Polumbo write, placing the TikTok ban in the midst of larger anti-speech and anti-tech trends:
In the federal and state governments, both Republicans and Democrats have become increasingly anti-free speech in recent years. We've seen a plethora of bills that have sought to strip Americans and their businesses of their right to free expression, many of them presented as necessary to rein in "Big Tech." The TikTok ban is merely the latest iteration of this trend.
The truth is that government actors who want to preserve and expand their own power have a vital interest in taking over the tech industry. Of course the government has yet to see a thriving free market industry it doesn't want to get its hands on. But social media in particular poses a unique threat to the government—which has for decades been able to control the flow of information and the narrative on political issues via its cozy relationship with many in the mainstream media.
We've seen the Biden Administration seek to lasso social media in a similar fashion numerous times over the past couple of years thanks to the bombshell reports released under both the Twitter Files and the Facebook Files—not to mention the government-wide conspiracy to shadowban information on our own government's funding of the Wuhan lab….
The obvious point is that government officials do not want the American people to be able to freely share information, especially information that makes them look bad.
The bottom line, they suggest, is that "if they can control the flow of information, they can control you."
"Social media poses a unique threat to politicians and the government, and that's because for decades…the government could control the narrative, and they could control the narrative because they mostly control the mainstream media," says Cox in her video. "As social media has grown, they have lost more and more control of the narrative, because they are no longer the gatekeepers, and they don't control the gatekeepers anymore."
"Ultimately the war on Big Tech is a war on free speech and the government desperately trying to regain control of the narrative the [mainstream media] granted them for decades," she tells me.
The BASEDPolitics team also pushes back on the idea that this isn't really a ban because it gives ByteDance the option to sell. "In effect, the legislation is an outright ban on the app, because Bytedance, TikTok's parent company, is likely legally prohibited from selling the TikTok algorithm by China's export control laws," write Cox and Polumbo. "And, TikTok without its algorithm is not really TikTok at all."
• Antitrust warriors come for AI: The Federal Trade Commission is subpoenaing Microsoft over its deal with the artificial intelligence startup Inflection. Meanwhile, the Justice Department "is poised to investigate Nvidia and its leading position in supplying the high-end semiconductors underpinning AI computing," Politico reports.
• "When a new technology arises, it matters greatly whether technocrats align themselves with dynamists or with reactionaries," Virginia Postrel tellsMiller's Book Review. "We were lucky in the 1990s that both political parties included people with positive views of the emerging internet, including people with a dynamist understanding of its potential. The opposite is true today. Reactionaries are in ascendance in both parties, and technocrats are listening to them. Plus there are always businesses seeking to use regulation to hinder their competitors. The result is that instead of regarding AI as an exciting potential tool for enhancing human creativity and fostering prosperity, our public discourse tends to frame it as at best a job-destroyer and at worst the Terminator."
• A federal judge has rejected North Carolina's attempt to mandate that abortion pills must be taken in a doctor's office and that their prescription requires an in-person follow-up visit 72 hours after the medication is taken. The ruling means that women "can again take the medicine mifepristone at home and can obtain the medication from a pharmacy or by mail," WUNC reports.
• "Because 'misinformation' is overwhelmingly identified by focusing on information that contradicts the consensus judgements of experts and elites within society's leading knowledge-generating institutions, the focus on misinformation ignores how such institutions can themselves be deeply dysfunctional and problematic," writes Dan Williams in a very good (and lengthy) post at Conspicuous Cognition. "This includes science, intelligence agencies, mainstream media, and so on."
In a welcome development for people who care about liberty, Australia's government suspended its efforts to censor the planet. The country's officials suffered pushback from X (formerly Twitter) and condemnation by free speech advocates after attempting to block anybody, anywhere from seeing video of an attack at a Sydney church. At least for the moment, they've conceded defeat based, in part, on recognition that X is protected by American law, making censorship efforts unenforceable.
A Censor Throws In the Towel
"I have decided to discontinue the proceedings in the Federal Court against X Corp in relation to the matter of extreme violent material depicting the real-life graphic stabbing of a religious leader at Wakeley in Sydney on 15 April 2024," the office of Australia's eSafety Commissioner, Julie Inman Grant, announced last week. "We now welcome the opportunity for a thorough and independent merits review of my decision to issue a removal notice to X Corp by the Administrative Appeals Tribunal."
The free speech battle stems from the stabbing in April of Bishop Mar Mari Emmanuel and Father Isaac Royel at an Orthodox Christian church by a 16-year-old in what is being treated as an Islamist terrorist incident. Both victims recovered, but Australian officials quickly sought to scrub graphic video footage of the incident from the internet. Most social media platforms complied, including X, which geoblocked access to video of the attack from Australia pending an appeal of the order.
But Australian officials fretted that their countrymen might use virtual private networks (VPNs) to evade the blocks. The only solution, they insisted, was to suppress access to the video for the whole world. X understandably pushed back out of fear of the precedent that would set for the globe's control freaks.
Global Content Battle
"Our concern is that if ANY country is allowed to censor content for ALL countries, which is what the Australian 'eSafety Commissar' is demanding, then what is to stop any country from controlling the entire Internet?" responded X owner Elon Musk.
The Electronic Frontier Foundation (EFF) also argued that "no single country should be able to restrict speech across the entire internet," as did the Foundation for Individual Rights and Expression (FIRE). The organizations jointly sought, and received, intervener status in the case based on "the capacity for many global internet users to be substantially affected."
In short, officials lost control over a tussle they tried to portray as a righteous battle by servants of the people against, in the words of Prime Minister Anthony Albanese, "arrogant billionaire" Elon Musk. Instead, civil libertarians correctly saw it as a battle for free speech against grasping politicians who aren't content to misgovern their own country but reach for control over people outside their borders.
Worse for them, one of their own judges agreed.
"The removal notice would govern (and subject to punitive consequences under Australian law) the activities of a foreign corporation in the United States (where X Corp's corporate decision-making occurs) and every country where its servers are located; and it would likewise govern the relationships between that corporation and its users everywhere in the world," noted Justice Geoffrey Kennett in May as he considered the eSafety commissioner's application to extend an injunction against access to the stabbing video. "The Commissioner, exercising her power under s 109, would be deciding what users of social media services throughout the world were allowed to see on those services."
He added, "most likely, the notice would be ignored or disparaged in other countries."
American Speech Protections Shield the World
This is where the U.S. First Amendment and America's strong protections for free speech come into play to thwart Australian officials' efforts to censor the world.
"There is uncontroversial expert evidence that a court in the US (where X Corp is based) would be highly unlikely to enforce a final injunction of the kind sought by the Commissioner," added Kennett. "Courts rightly hesitate to make orders that cannot be enforced, as it has the potential to bring the administration of justice into disrepute."
Rather than have his government exposed as impotently overreaching to impose its will beyond its borders, Kennett refused to extend the injunction.
Three weeks later, with free speech groups joining the case to argue against eSafety's censorious ambitions, the agency dropped its legal case pending review by the Administrative Appeals Tribunal.
"We are pleased that the Commissioner saw the error in her efforts and dropped the action," responded David Greene and Hudson Hongo for EFF. "Global takedown orders threaten freedom of expression around the world, create conflicting legal obligations, and lead to the lowest common denominator of internet content being available around the world, allowing the least tolerant legal system to determine what we all are able to read and distribute online."
But if the world escaped the grasp of Australia's censors, the country's residents may not be so lucky.
Domestic Censorship Politics
The fight between eSafety and X "isn't actually about the Wakeley church stabbing attacks in April — it's about how much power the government ultimately hands the commissioner once it's finished reviewing the Online Safety Act in October," Ange Lavoipierre wrote for the Australian Broadcasting Corporation.
"The video in dispute in the case against X has been used, in my opinion, as a vehicle for the federal government to push for powers to compel social media companies to enforce rules of misinformation and disinformation on their platforms," agrees Morgan Begg of the free-market Institute of Public Affairs, which opposes intrusive government efforts to regulate online content. "The Federal Court's decision highlights the government's fixation with censorship."
That is, the campaign to force X to suppress video of one crime is largely about domestic political maneuvering for power. But it comes as governments around the world—especially that of the European Union—become increasingly aggressive with their plans to control online speech.
If the battle between Australia's eSafety commissioner and X is any indication, the strongest barrier to international censorship lies in countries—the U.S. in particular—that vigorously protect free speech. From such safe havens, authoritarian officials and their grasping content controls can properly be "ignored or disparaged."
Shooting fireworks out of a helicopter sounds fun. Shooting fireworks out of a helicopter at a Lamborghini sports car sounds really fun, especially if everyone on the helicopter and everyone in the Lamborghini consents. Alex Choi, a YouTube and Instagram vlogger in California, produced a video of him and his crew doing just that. But he forgot to ask one important group for permission: the federal government.
Earlier this week, the feds indicted Choi for "causing the placement of explosive or incendiary device on an aircraft," a crime with a maximum penalty of 10 years in prison. The indictment also revealed that the Federal Aviation Administration (FAA) had revoked the license of Choi's helicopter pilot in January 2024 for flying less than 500 feet from people, failing to display the helicopter's registration number, and creating "a hazard to persons or property" without the necessary FAA waivers.
By all accounts, the only danger was to people directly involved in the video, which has since been removed from Choi's YouTube and Instagram accounts. (Clips of the stunt are still available elsewhere.) Choi and his crew filmed the stunt at El Mirage dry lake bed, an off-roading recreation area miles away from any town. The indictment quotes Choi talking about his "crazy stupid ideas" and one of his crew members saying that the fireworks are "so loud; it's actually terrifying," which only makes the video sound cooler.
The FAA moved very quickly when it caught wind of the stunt. Choi posted the video on the Fourth of July last year. On July 18, an FAA inspector interviewed the person who transported cars for Choi. A few days later, the FAA tracked down the helicopter pilot and a Bureau of Land Management agent went out to the dry lake to photograph Choi's tire tracks. Since the lake bed is federal land, the indictment notes, Choi should have gotten federal permission.
Soon after the FAA interrogations began, Choi texted an associate that the FAA inspector "has a personal issue with my helicopter pilot friend and every time i do a shoot with him, tries to get more information about him so he can go after him," according to the indictment.
The Department of Transportation's Office of Inspector General then decided to charge Choi with a crime. The law against taking an explosive on board an aircraft clearly seems to be aimed at would-be bombers, but the feds argue that it applies to firing explosives out of an aircraft as well.
The case against Choi parallels the case of Austin Haughwout almost a decade ago. In 2015, when consumer drone technology was still in its infancy, the teenage Haughwout filmed himself flying a drone with a pistol attached and firing into the woods. The 14-second video, titled "Flying Gun," caused a national media panic about the danger of armed drones. Haughwout also posted a video of himself roasting meat with a drone-mounted flamethrower.
The FAA subpoenaed Haughwout and his father because the videos showed potentially unsafe piloting of an aircraft. The Haughwout family fought the subpoena in court, arguing that drones are not "aircraft" within the FAA's jurisdiction. (Their lawyer compared the situation to the FAA regulating baseballs, paper airplanes, or birthday balloons.) A district court ruled in favor of the subpoena, and although Haughwout was not charged with an aviation crime, the case became a key precedent for the FAA's ability to regulate drones.
Since then, the FAA has scoured social media for potential drone violations. Earlier this year, a federal court banned Philadelphia YouTuber Michael DiCiurcio from flying drones and fined him $182,000 for violating FAA rules. DiCiurcio had gotten famous for making slapstick videos of himself fighting birds, buzzing fishermen, and crashing into himself with his drone, all while narrating in a thick South Philly accent.
Last year, aviation vlogger Joe Costanza had a friend follow his small Piper Cub airplane down a private runway with a drone. When Costanza posted the video to a Facebook group—and joked that "the pilot knew that the drone was there because I was flying both at the same time"—he was contacted by an FAA inspector. In the end, the FAA did not press any charges, but Costanza took to YouTube to complain about the investigation.
"You know, no matter how stupid the complaint is or how out of the ordinary it is, we have to investigate every single complaint that comes out way," the inspector said, according to Constanza.
Misinformation is not a new problem, but there are plenty of indications that the advent of social media has made things worse. Academic researchers have responded by trying to understand the scope of the problem, identifying the most misinformation-filled social media networks, organized government efforts to spread false information, and even prominent individuals who are the sources of misinformation.
All of that's potentially valuable data. But it skips over another major contribution: average individuals who, for one reason or another, seem inspired to spread misinformation. A study released today looks at a large panel of Twitter accounts that are associated with US-based voters (the work was done back when X was still Twitter). It identifies a small group of misinformation superspreaders, which represent just 0.3 percent of the accounts but are responsible for sharing 80 percent of the links to fake news sites.
While you might expect these to be young, Internet-savvy individuals who automate their sharing, it turns out this population tends to be older, female, and very, very prone to clicking the "retweet" button.
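For a concrete sense of how a concentration figure like "0.3 percent of accounts share 80 percent of the links" gets computed, here is a minimal sketch in Python. To be clear, this is not the study's code: the synthetic data, the column name, and the heavy-tailed distribution are all invented purely for illustration.

```python
# Illustrative only: compute what fraction of accounts is responsible
# for 80% of shared fake-news links, given per-account share counts.
# The data is a synthetic Zipf draw, not the study's actual panel
# (and unlike reality, every synthetic "account" shares at least one link).
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
counts = pd.DataFrame({"fake_links": rng.zipf(2.0, size=100_000)})

# Sort accounts from most to least prolific, then take cumulative shares.
sorted_links = counts["fake_links"].sort_values(ascending=False).to_numpy()
cum_share = sorted_links.cumsum() / sorted_links.sum()

# Smallest number of top accounts whose links cover 80% of the total.
k = int(np.searchsorted(cum_share, 0.80)) + 1
print(f"Top {k / len(sorted_links):.2%} of accounts share 80% of the links")
```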
You've all heard the reports about how the internet, social media, and phones are apparently destroying everyone's well-being and mental health. Hell, there's a best-selling book, and its author is making the rounds basically everywhere, insisting that the internet and phones are literally "rewiring" kids' minds to be depressed. We've pointed out over and over again that the research does not appear to support this finding.
And, really, if the data supported such a finding, you’d think that a new study looking at nearly 2 and a half million people across 168 countries would… maybe… find such an impact?
Instead, the research seems to suggest much more complex relationships, in which, for many people, this ability to connect with others and with information is largely beneficial. For many others, it's basically neutral. And for a small percentage of people, there does appear to be a negative relationship, which we should take seriously. However, it often appears that the negative relationship is one where those who are already dealing with mental health or other struggles turn to the internet when they have nowhere else to go, and may do so in less-than-helpful ways.
The Oxford Internet Institute has just released another new study by Andrew Przybylski and Matti Vuorre, showing that there appears to be a general positive association between internet usage and wellbeing. You can read the full study here, given that it has been published as open access (and under a CC BY 4.0 license). We’ve also embedded it below if you just want to read it there.
As with previous studies done by Vuorre and Przybylski, this one involves looking at pretty massive datasets, rather than very narrow studies of small sample sizes.
We examined whether having (mobile) internet access or actively using the internet predicted eight well-being outcomes from 2006 to 2021 among 2,414,294 individuals across 168 countries. We first queried the extent to which well-being varied as a function of internet connectivity. Then, we examined these associations’ robustness in a multiverse of 33,792 analysis specifications. Of these, 84.9% resulted in positive and statistically significant associations between internet connectivity and well-being. These results indicate that internet access and use predict well-being positively and independently from a set of plausible alternatives.
Now, it’s important to be clear here, as we have been with studies cited for the opposite conclusion: this is a correlational study, and is not suggesting a direct causal relationship between having internet access and wellbeing. But, if (as folks on the other side claim) internet access was truly rewiring brains and making everyone depressed, it’s difficult to see how then we would see these kinds of outcomes.
People like Jonathan Haidt have argued that these kinds of studies obscure the harm done to teens (and especially teenage girls), as a way of dismissing them. However, it's nice to see the researchers here try to tease out possible explanations, to make sure such things weren't hidden in the data:
Because of the large number of predictors, outcomes, subgroups to analyze, and potentially important covariates that might theoretically explain observed associations, we sought out a method of analysis to transparently present all the analytical choices we made and the uncertainty in the resulting analyses. Multiverse analysis (Steegen et al., 2016) was initially proposed to examine and transparently present variability in findings across heterogeneous ways of treating data before modeling them (see also Simonsohn et al., 2020). We therefore conducted a series of multiverse analyses where we repeatedly fitted a similar model to potentially different subgroups of the data using potentially different predictors, outcomes, and covariates.
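If a "multiverse" of 33,792 specifications is hard to picture, here is a deliberately tiny sketch of the general idea: fit the same basic model under every combination of outcome, predictor, and covariate set, then count how the estimates come out. This is not the authors' code; the toy data and variable names are invented, and the real analysis was far richer.

```python
# Toy multiverse analysis: one OLS fit per analysis specification.
# Synthetic data and variable names are invented for illustration.
import itertools
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "internet_use": rng.integers(0, 2, size=n),  # stand-in predictor
    "income": rng.normal(size=n),                # candidate covariate
    "age": rng.integers(15, 80, size=n),         # candidate covariate
})
# Simulate a small positive association so the sketch has something to find.
df["wellbeing"] = 0.1 * df["internet_use"] + rng.normal(size=n)

outcomes = ["wellbeing"]        # the real study used 8 well-being outcomes
predictors = ["internet_use"]   # e.g., access vs. active use
covariates = ["income", "age"]  # the real study varied many more choices

results = []
for outcome, predictor in itertools.product(outcomes, predictors):
    # Every subset of covariates counts as its own specification.
    for k in range(len(covariates) + 1):
        for covs in itertools.combinations(covariates, k):
            formula = f"{outcome} ~ {predictor}"
            if covs:
                formula += " + " + " + ".join(covs)
            fit = smf.ols(formula, data=df).fit()
            results.append({"coef": fit.params[predictor],
                            "p": fit.pvalues[predictor]})

specs = pd.DataFrame(results)
share = ((specs.coef > 0) & (specs.p < 0.05)).mean()
print(f"{share:.1%} of specifications are positive and significant")
```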
That allowed them to explore questions regarding different subgroups. And while they did find one "negative association" among young women, it was not the kind you might have heard about or expected. There was a "negative association" between "community well-being" and internet access:
We did, however, observe a notable group of negative associations between internet use and community well-being. These negative associations were specific to young (15–24-year-old) women’s reports of community well-being. They occurred across the full spectrum of covariate specifications and were thereby not likely driven by a particular model specification. Although not an identified causal relation, this finding is concordant with previous reports of increased cyberbullying (Przybylski & Bowes, 2017) and more negative associations between social media use and depressive symptoms (Kelly et al., 2018; but see Kreski et al., 2021). Further research should investigate whether low community well-being drives engagement with the internet or vice versa.
This took me a moment to understand, but after reading the details, it's showing that (1) if you were a 15- to 24-year-old woman, and (2) if you said in the survey that you really liked where you live, then (3) you were less likely to have accessed the internet over the past seven days. That was the only significant finding of that nature. That same cohort did not show a negative correlation for other areas of well-being, such as fulfillment.
To be even more explicit: the "negative association" was only with young women who answered that they strongly agree with the statement "the city or area where you live is a perfect place for you" and then answered the question "have you used the internet in the past seven days." There were many other questions regarding well-being that didn't have such a negative association. This included things like rating how their life was from "best" to "worst" on a 10-point scale, and whether or not respondents "like what you do every day."
So, what this actually appears to support is the idea that if you are happy with where you live (happy in your community), then you may be less focused on the internet. But for just about every other measure of well-being, internet access is strongly correlated in a positive way. There are a few possible explanations for this, but at the very least it might support the theory that, for those facing both mental health problems and excessive internet usage, the problems stem from outside the internet, leading them to turn to the internet for lack of other places to turn.
The authors are careful to note the limitations of their findings, and recognize that human beings are complex:
Nevertheless, our conclusions are qualified by a number of factors. First, we compared individuals to each other. There are likely myriad other features of the human condition that are associated with both the uptake of internet technologies and well-being in such a manner that they might cause spurious associations or mask true associations. For example, because a certain level of income is required to access the internet and income itself is associated with well-being, any simple association between internet use and well-being should account for potential differences in income levels. While we attempted to adjust for such features by including various covariates in our models, the data and theory to guide model selection were both limited.
Second, while between-person data such as we studied can inform inferences about average causal effects, longitudinal studies that track individuals and their internet use over time would be more informative in understanding the contexts of how and why an individual might be affected by internet technologies and platforms (Rohrer & Murayama, 2021).
Third, while the constructs that we studied represent the general gamut of well-being outcomes that are typically studied in connection to digital media and technology, they do not capture everything, nor are they standard and methodically validated measures otherwise found in the psychological literature. That is, the GWP data that we used represent a uniquely valuable resource in terms of its scope both over time and space. But the measurement quality of its items and scales might not be sufficient to capture the targeted constructs in the detailed manner that we would hope for. It is therefore possible that there are other features of well-being that are differently affected by internet technologies and that our estimates might be noisier than would be found using psychometrically validated instruments. Future work in this area would do well in adopting a set of common validated measures of well-being (Elson et al., 2023).
On the whole, it's great to see more research and more data here, suggesting that, yes, there is a very complex relationship between internet access and well-being. But it should be increasingly difficult to claim that internet access is overall negative and harmful, no matter what the popular media and politicians tell you.
A social media app from China is said to seduce our teenagers in ways that American platforms can only dream of. Gen Z has already wasted half a young lifetime on videos of pranks, makeup tutorials, and babies dubbed to talk like old men. Now computer sorcerers employed by a hostile government allegedly have worse in store. Prohibit this "digital fentanyl," the argument goes, or the Republic may be lost.
And so President Joe Biden signed the Protecting Americans from Foreign Adversary Controlled Applications Act of 2024, which requires the China-based company ByteDance to either spin off TikTok or watch it be banned. Separating the company from the app would supposedly solve the other problem frequently blamed on TikTok: the circle linking U.S. users' personal data to the Chinese Communist Party. The loop has already been cut, TikTok argues, because American users' data are now stored with Oracle in Texas. That's about as believable as those TikTok baby talk vignettes, retorts Congress.
If Congress has got the goods on the Communists, do tell! Those Homeland Security threat assessment color charts from the 2000s are tan, rested, and ready. But slapping a shutdown on a company because of mere rumors—that really is an ugly import from China.
The people pushing for TikTok regulation argue that the app's problems go far beyond the challenges raised when kids burn their brains on Snap, Insta/Facebook, Twitter/X, Pinterest, YouTube/Google, and the rest of the big blue Internet. In The Music Man, Harold Hill swept a placid town into a frenzy with his zippy rendition of the darkness that might lurk in an amusement parlor. Today we're told that TikTok is foreign-owned and addictive, that its algorithms may favor anti-American themes, and that it makes U.S. users sitting ducks for backdoor data heists.
Though the bill outlaws U.S. access to TikTok if ByteDance cannot assign the platform to a non-Chinese enterprise within 9–12 months (which the company says it will not do), prediction markets give the ban only a 24 percent chance of kicking in by May 2025. Those low odds reflect, in part, the high probability that the law will be found unconstitutional. ByteDance has already filed suit. Its case is supported by the fact that First Amendment rights extend to speakers of foreign origin, as U.S. courts have repeatedly explained.
The Qatar-based Al Jazeera bought an entire American cable channel, Current TV—part owner Al Gore pocketed $100 million for the sale in 2013—to bring its slant to 60 million U.S. households. Free speech reigned and the market ruled: Al Jazeera got only a tiny audience share and exited just a few years later.
Writing in The Free Press, Rep. Michael Gallagher (R–Wisc.)—co-sponsor of the TikTok bill—claims that because the Chinese Communist Party allegedly "uses TikTok to push its propaganda and censor views," the United States must move to block the app. This endorsement of the Chinese "governing system" evinces no awareness of the beauty of our own. We can combat propaganda with our free press (including The Free Press). Of greatest help is that the congressman singles out the odious views that the Chinese potentates push: on Tiananmen, Muslims, LGBTQ issues, Tibet, and elsewise.
Our federal jurists will do well to focus on Gallagher's opening salvo versus TikTok: "A growing number of Americans rely on it for their news. Today, TikTok is the top search engine for more than half of Gen Z." This underscores the fact that his new rules are not intended to be "content neutral."
Rather than shouting about potential threats, TikTok's foes should report any actual mendacities or violations of trust. Where criminal—as with illicitly appropriating users' data—such misbehavior should be prosecuted by the authorities. Yet here the National Security mavens have often gone AWOL.
New York Times reporter David Sanger, in The Perfect Weapon (2018), provides spectacular context. Around the summer of 2014, U.S. intelligence found that a large state actor—presumed by officials to be China—had hacked U.S.-based servers and stolen data for 22 million current and former U.S. government employees. More than 4 million of these victims lost highly personal information, including Social Security numbers, medical records, fingerprints, and security background checks. The U.S. database had been left unencrypted. The flaw was so sensational that, when the theft was finally discovered, investigators noticed that the exiting data had (oddly) been encrypted, an upgrade the hackers had conscientiously supplied so as to carry out their burglary with stealth.
Here's the killer: Sanger reports that "the administration never leveled with the 22 million Americans whose data were lost—except by accident." The victims simply got a note that "some of their information might have been lost" and were offered credit-monitoring subscriptions. This was itself a bit of a ruse; the hack was identified as a hostile intelligence operation because the lifted data was not being sold on the Dark Web.
Hence, a vast number of U.S. citizens—including undercover agents—have presumably been compromised by China. This has ended careers, and continues to threaten victims, without compensation or even real disclosure.
The accidental government acknowledgment came in a slip of the tongue by Director of National Intelligence James Clapper: "You kind of have to salute the Chinese for what they did." At a 2016 hearing just weeks later, Sen. John McCain (R–Ariz.) drilled Clapper on the breach, demanding to know why the attack had gone unreported. Clapper's answer? "Because people who live in glass houses shouldn't throw rocks." An outraged McCain could scarcely believe it. "So it's OK for them to steal our secrets that are most important, because we live in a glass house. That is astounding."
While keeping the American public in the dark about real breaches, U.S. officials raise the specter of a potential breach to trample free speech. The TikTok ban is Fool's Gold. The First Amendment is pure genius. Let's keep one of them.
Stuff posted on Reddit is getting incorporated into ChatGPT, Reddit and OpenAI announced on Thursday. The new partnership grants OpenAI access to Reddit’s Data API, giving the generative AI firm real-time access to Reddit posts.
Reddit content will be incorporated into ChatGPT "and new products," Reddit's blog post said. The social media firm claims the partnership will "enable OpenAI’s AI tools to better understand and showcase Reddit content, especially on recent topics." OpenAI will also start advertising on Reddit.
The deal is similar to one that Reddit struck with Google in February that allows the tech giant to make "new ways to display Reddit content" and provide "more efficient ways to train models," Reddit said at the time. Neither Reddit nor OpenAI disclosed the financial terms of their partnership, but Reddit's partnership with Google was reportedly worth $60 million.
Mobile phones are currently banned in all Australian state schools and many Catholic and independent schools around the country. This is part of a global trend over more than a decade to restrict phone use in schools.
But previous research has shown there is little evidence on whether the bans actually achieve their aims, such as improving academic performance, protecting students' wellbeing, and reducing cyberbullying.
Many places that restricted phones in schools before Australia did so have since reversed their decisions. For example, several school districts in Canada implemented outright bans, then revoked them because they were too hard to maintain. They now allow teachers to make decisions that suit their own classrooms.
A ban was similarly revoked in New York City, partly because bans made it harder for parents to stay in contact with their children.
What does recent research say about phone bans in schools?
Our study
We conducted a “scoping review” of all published and unpublished global evidence for and against banning mobile phones in schools.
Our review, which is pending publication, aims to shed light on whether mobile phones in schools impact academic achievement (including paying attention and distraction), students’ mental health and wellbeing, and the incidence of cyberbullying.
A scoping review is done when researchers know there aren’t many studies on a particular topic. This means researchers cast a very inclusive net, to gather as much evidence as possible.
Our team screened 1,317 articles and reports as well as dissertations from master's and PhD students. We identified 22 studies that examined schools before and after phone bans. There was a mix of study types. Some looked at multiple schools and jurisdictions, some looked at a small number of schools, some collected quantitative data, others sought qualitative views.
In a sign of just how little research there is on this topic, 12 of the studies we identified were done by master's and doctoral students. This means they are not peer-reviewed but were done by research students under the supervision of an academic in the field.
But in a sign of how fresh this evidence is, almost half the studies we identified were published or completed since 2020.
The studies looked at schools in Bermuda, China, the Czech Republic, Ghana, Malawi, Norway, South Africa, Spain, Sweden, Thailand, the United Kingdom and the United States. None of them looked at schools in Australia.
Academic achievement
Our research found four studies that identified a slight improvement in academic achievement when phones were banned in schools. However, two of these studies found this improvement only applied to disadvantaged or low-achieving students.
Some studies compared schools with partial bans against schools with complete bans. This is a problem because it muddies the comparison: without a true no-ban control group, it is hard to say what the bans themselves achieved.
But three studies found no differences in academic achievement, whether there were mobile phone bans or not. Two of these studies used very large samples. This master's thesis looked at 30% of all schools in Norway. Another study used a nationwide cohort in Sweden. This means we can be reasonably confident in these results.
Mental health and wellbeing
Two studies in our review, including this doctoral thesis, reported mobile phone bans had positive effects on students’ mental health. However, both studies used teachers’ and parents’ perceptions of students’ wellbeing (the students were not asked themselves).
Two other studies showed no differences in psychological wellbeing following mobile phone bans. However, three studies reported more harm to students’ mental health and wellbeing when they were subjected to phone bans.
The students reported they felt more anxious without being able to use their phone. This was especially evident in one doctoral thesis carried out when students were returning to school after the pandemic, having been very reliant on their devices during lockdown.
So the evidence for banning mobile phones for the mental health and wellbeing of students is inconclusive, and based only on anecdotes or perceptions rather than the recorded incidence of mental illness.
Bullying and cyberbullying
Four studies reported a small reduction in bullying in schools following phone bans, especially among older students. However, the studies did not specify whether or not they were talking about cyberbullying.
Teachers in two other studies, including this doctoral thesis, reported they believed having mobile phones in schools increased cyberbullying.
But two other studies showed the number of incidents of online victimisation and harassment was greater in schools with mobile phone bans compared with those without bans. These studies didn't collect data on whether the online harassment was happening inside or outside school hours.
The authors suggested this might be because students saw the phone bans as punitive, which made the school climate less egalitarian and less positive. Other research has linked a positive school climate with fewer incidents of bullying.
There is no research evidence that students do or don’t use other devices to bully each other if there are phone bans. But it is of course possible for students to use laptops, tablets, smartwatches or library computers to conduct cyberbullying.
Even if phone bans were effective, they would not address the bulk of school bullying. A 2019 Australian study found 99% of students who were cyberbullied were also bullied face-to-face.
What does this tell us?
Overall, our study suggests the evidence for banning mobile phones in schools is weak and inconclusive.
As Australian education academic Neil Selwyn argued in 2021, the impetus for mobile phone bans says more about MPs responding to community concerns than about research evidence.
Politicians should leave this decision to individual schools, which have direct experience of the pros or cons of a ban in their particular community. For example, a community in remote Queensland could have different needs and priorities from a school in central Brisbane.
Mobile phones are an integral part of our lives. We need to be teaching children about appropriate use of phones, rather than simply banning them. This will help students learn how to use their phones safely and responsibly at school, at home and beyond.
Earlier this week, tech writer and Google Ventures general partner M.G. Siegler received a late-night notification from Meta informing him that his accounts on Instagram, Threads, and Facebook had been suspended for a horrifying alleged violation that he refuses to even name.
Following revelations about the extent of the federal government's pressure on social media companies to suppress dissenting opinions, the feds broke up with Meta, X (formerly Twitter), and YouTube. Cybersecurity experts now frequently complain about the lack of coordination between the government and the platforms, warning that social media users are vulnerable to misinformation about elections, foreign interference, and other woes.
But the platforms might be receiving late-night "you up?" texts from federal agents once again. Senate Intelligence Committee Chair Mark Warner (D–Va.) told reporters on Monday that communication between the federal government and social media sites is back on, according to Nextgov and The Federalist.
In fact, Warner said these communications had resumed in the midst of oral arguments for Murthy v. Missouri, the Supreme Court case that will decide whether the FBI, the Centers for Disease Control and Prevention (CDC), and the Biden White House had violated the First Amendment when they pushed social media sites to remove disfavored content. The justices seemed at least somewhat skeptical, viewing the government's actions as mere attempts at persuasion rather than coercion. That skepticism has apparently given the feds the green light, with Warner acknowledging that "there seemed to be a lot of sympathy that the government ought to have at least voluntary communications" with the platforms.
Whether social media companies ever viewed these communications as "voluntary" is an open question. For instance, when then–White House Communications Director Kate Bedingfield suggested tinkering with Section 230—the federal law that protects online platforms from some liability—in order to punish Facebook, CEO Mark Zuckerberg might have wondered whether he had much of a choice but to comply.
In any case, it seems clear that federal agencies will continue to interact with social media companies in ways that trouble many libertarians—until and unless they are explicitly forbidden from doing so.
This Week on Free Media
The Spectator's Amber Athey is back to discuss waning liberal anxiety about Donald Trump's potential return to power, Jen Psaki's advice for President Joe Biden's comms team, and South Dakota Gov. Kristi Noem's doggone media tour.
Worth Watching
Now this is podracing: It's the 25th anniversary of Star Wars: Episode I — The Phantom Menace, and the much-maligned first prequel film has returned to theaters. This is as good a time as any for me to reiterate my once-controversial, now increasingly accepted opinion that the Star Wars prequels are OK. (It's truly heretical to say that they are better than the original films; that is my view, though I won't try to defend it here.) They are certainly way, way better than the new films, which are dull, joyless, and derivative.
The best thing about the prequels is Palpatine's manipulations, and those only come into full focus later on. Phantom Menace is thus the least appealing of the three, as it's the one most obviously aimed at children. But there's nothing wrong with that; I was 9 years old when I first saw the film, and like virtually every other kid at that time, I thought Darth Maul's appearance and climactic duel with Obi-Wan Kenobi and Qui-Gon Jinn were pretty much the coolest thing I'd ever watched. And it still holds up!
Since Instagram is all about visuals, the plain background of your DMs might seem bland. The social media app lets you change the theme to spice things up. You can choose from various patterns and colors that match your aesthetic and customize the appearance of your messages based on who you're chatting with. This guide shows you how to change your Instagram theme on a budget Android phone, a flagship device, or an iPhone.
Apparently, the world needs even more terrible bills that let ignorant senators grandstand to the media about how they're "protecting the kids online." There's nothing more serious to work on than that. The latest bill comes from Senators Brian Schatz and Ted Cruz (with assists from Senators Chris Murphy, Katie Britt, Peter Welch, Ted Budd, John Fetterman, Angus King, and Mark Warner). This one is called the "Kids Off Social Media Act" (KOSMA), and it's an unconstitutional mess built on a long list of debunked and faulty premises.
It's especially disappointing to see this from Schatz. A few years back, I know his staffers would regularly reach out to smart people on tech policy issues to try to understand the potential pitfalls of the regulations he was pushing. Either he's no longer doing this, or he is deliberately ignoring their expert advice. I don't know which one would be worse.
The crux of the bill is pretty straightforward: it would be an outright ban on social media accounts for anyone under the age of 13. As many people will recognize, we kinda already have a “soft” version of that because of COPPA, which puts much stricter rules on sites directed at those under 13. Because most sites don’t want to deal with those stricter rules, they officially limit account creation to those over the age of 13.
In practice, this has been a giant mess. Years and years ago, Danah Boyd pointed this out, talking about how the "age 13" bit is a disaster for kids, parents, and educators. Her research showed that all this generally did was teach kids that "it's okay to lie," because parents wanted their kids to use social media tools to communicate with grandparents. Making that "soft" ban a hard ban is going to create a much bigger mess and prevent all sorts of useful and important communications (which, yeah, is a 1st Amendment issue).
Schatz’s reasons put forth for the bill are just… wrong.
No age demographic is more affected by the ongoing mental health crisis in the United States than kids, especially young girls. The Centers for Disease Control and Prevention’s Youth Risk Behavior Survey found that 57 percent of high school girls and 29 percent of high school boys felt persistently sad or hopeless in 2021, with 22 percent of all high school students—and nearly a third of high school girls—reporting they had seriously considered attempting suicide in the preceding year.
Gosh. What was happening in 2021 with kids that might have made them feel hopeless? Did Schatz and crew simply forget that most kids were under lockdown and physically isolated from friends for much of 2021? And that there were plenty of other stresses, including millions of people (some of them family members) dying? Noooooo. Must be social media!
Studies have shown a strong relationship between social media use and poor mental health, especially among children.
Note the careful word choice here: “strong relationship.” They won’t say a causal relationship because studies have not shown that. Indeed, as the leading researcher in the space has noted, there continues to be no real evidence of any causal relationship. The relationship appears to work the other way: kids who are dealing with poor mental health and who are desperate for help turn to the internet and social media because they’re not getting help elsewhere.
Maybe offer a bill that helps kids get access to more resources that help them with their mental health, rather than taking away the one place they feel comfortable going? Maybe?
From 2019 to 2021, overall screen use among teens and tweens (ages 8 to 12) increased by 17 percent, with tweens using screens for five hours and 33 minutes per day and teens using screens for eight hours and 39 minutes.
I mean, come on Schatz. Are you trolling everyone? Again, look at those dates. WHY DO YOU THINK that screen time might have increased 17% for kids from 2019 to 2021? COULD IT POSSIBLY BE that most kids had to do school via computers and devices at home, because there was a deadly pandemic making the rounds?
Maybe?
Did Schatz forget that? I recognize that lots of folks would like to forget the pandemic lockdowns, but this seems like a weird way to manifest that.
I mean, what a weird choice of dates to choose. I’m honestly kind of shocked that the increase was only 17%.
Also, note that the data presented here isn’t about an increase in social media use. It could very well be that the 17% increase was Zoom classes.
Based on the clear and growing evidence, the U.S. Surgeon General issued an advisory last year, calling for new policies to set and enforce age minimums and highlighting the importance of limiting the use of features, like algorithms, that attempt to maximize time, attention, and engagement.
Wait. You mean the same Surgeon General’s report that denied any causal link between social media and mental health (which you falsely claim has been proved) and noted just how useful and important social media is to many young people?
From that report, which Schatz misrepresents:
Social media can provide benefits for some youth by providing positive community and connection with others who share identities, abilities, and interests. It can provide access to important information and create a space for self-expression. The ability to form and maintain friendships online and develop social connections are among the positive effects of social media use for youth. These relationships can afford opportunities to have positive interactions with more diverse peer groups than are available to them offline and can provide important social support to youth. The buffering effects against stress that online social support from peers may provide can be especially important for youth who are often marginalized, including racial, ethnic, and sexual and gender minorities. For example, studies have shown that social media may support the mental health and well-being of lesbian, gay, bisexual, asexual, transgender, queer, intersex and other youths by enabling peer connection, identity development and management, and social support. Seven out of ten adolescent girls of color report encountering positive or identity-affirming content related to race across social media platforms. A majority of adolescents report that social media helps them feel more accepted (58%), like they have people who can support them through tough times (67%), like they have a place to show their creative side (71%), and more connected to what's going on in their friends' lives (80%). In addition, research suggests that social media-based and other digitally-based mental health interventions may also be helpful for some children and adolescents by promoting help-seeking behaviors and serving as a gateway to initiating mental health care.
Did Schatz’s staffers just, you know, skip over that part of the report or nah?
The bill also says that companies need to not allow algorithmic targeting of content to anyone under 17. This is also based on a widely believed myth that algorithmic content is somehow problematic. No studies have legitimately shown that to be true of current algorithms. Indeed, a recent study showed that removing algorithmic targeting leads to people being exposed to more disinformation.
Is this bill designed to force more disinformation on kids? Why would that be a good idea?
Yes, some algorithms can be problematic! About a decade ago, algorithms that tried to optimize solely for “engagement” definitely created some bad outcomes. But it’s been a decade since most such algorithms have been designed that way. On most social media platforms, the algorithms are designed in other ways, taking into account a variety of different factors, because they know that optimizing just on engagement leads to bad outcomes.
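To make that distinction concrete, here is a purely hypothetical sketch of the difference between a ranker that scores on predicted engagement alone and one that blends in other signals. Every signal name and weight below is invented for illustration; real platform rankers are proprietary and vastly more complex.

```python
# Hypothetical feed-ranking scores; all signals and weights are invented.

def engagement_only_score(p_engage: float) -> float:
    # The decade-old approach: rank purely on predicted engagement.
    return p_engage

def blended_score(p_engage: float, p_report: float,
                  source_quality: float, recency_hours: float) -> float:
    # A multi-signal score: engagement still matters, but content likely
    # to be reported is penalized, quality sources get a boost, and
    # older posts decay slightly.
    return (1.0 * p_engage
            - 5.0 * p_report
            + 0.5 * source_quality
            - 0.01 * recency_hours)

# A rage-bait post may "win" on engagement alone but lose once the
# other signals are weighed in.
print(engagement_only_score(0.9), blended_score(0.9, 0.3, 0.1, 2))
print(engagement_only_score(0.6), blended_score(0.6, 0.01, 0.9, 2))
```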
Then the bill tacks on Cruz’s bill to require schools to block social media. There’s an amusing bit when reading the text of that part of the law. It says that you have to block social media on “federally funded networks and devices” but also notes that it does not prohibit “a teacher from using a social media platform in the classroom for educational purposes.”
But… how are they going to access those if the school is required by law to block access to such sites? Most schools are going to do a blanket ban, and teachers are going to be left to do what? Show kids useful YouTube science videos on their phones? Or maybe some schools will implement a special teacher code that lets them bypass the block. And by the end of the first week of school, half the kids will likely know that code.
What are we even doing here?
Schatz has a separate page hyping up the bill, and it’s even dumber than the first one above. It repeats some of the points above, though this time linking to Jonathan Haidt, whose work has been trashed left, right, and center by actual experts in this field. And then it gets even dumber:
Big Tech knows it’s complicit – but refuses to do anything about it…. Moreover, the platforms know about their central role in turbocharging the youth mental health crisis. According to Meta’s own internal study, “thirty-two percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse.” It concluded, “teens blame Instagram for increases in the rate of anxiety and depression.”
This is not just misleading, it’s practically fraudulent misrepresentation. The study Schatz is citing is one that was revealed by Frances Haugen. As we’ve discussed, it was done because Meta was trying to understand how to do better. Indeed, the whole point of that study was to see how teens felt about using social media in 12 different categories. Meta found that most boys felt neutral or better about themselves in all 12 categories. For girls, it was 11 out of 12. It was only in one category, body image, where the split was more pronounced. 32% of girls said that it made them feel worse. Basically the same percentage said it had no impact, or that it made them feel better.
Also, look at that slide’s title. The whole point of this study was to figure out if they were making kids feel worse in order to look into how to stop doing that. And now, because grandstanders like Schatz are falsely claiming that this proves they were “complicit” and “refuse to do anything about it,” no social media company will ever do this kind of research again.
Because, rather than proactively looking to see if they’re creating any problems that they need to try to fix, Schatz and crew are saying “simply researching this is proof that you’re complicit and refuse to act.”
Statements like this basically ensure that social media companies stick their heads in the sand, rather than try to figure out where harm might be caused and take steps to stop that harm.
Why would Schatz want to do that?
That page then also falsely claims that the bill does not require age verification. This is a silly two-step politicians pull every time they pass a bill like this. Does it directly mandate age verification? No. But by making the penalties for failing to keep kids off social media so serious and costly, it will obviously drive companies to adopt stronger age verification measures that are inherently dangerous and an attack on privacy.
Perhaps Schatz doesn’t understand this, but it’s been widely discussed by many of the experts his staff used to talk to. So, really, he has no excuse.
The FAQ also claims that the bill will pass constitutional muster, while at the same time admitting that they know there will be lawsuits challenging it:
Yes. As, for example, First Amendment expert Neil Richards explains, “[i]nstead of censoring the protected expression present on these platforms, the act takes aim at the procedures and permissions that determine the time, place and manner of speech for underage consumers.” The Supreme Court has long held that the government has the right to regulate products to protect children, including by, for instance, restricting the sale of obscene content to minors. As Richards explains: “[i]n the same way a crowded bar or nightclub is no place for a child on their own”—or in the way every state in the country requires parental consent if it allows a minor to get a tattoo—“this rule would set a reasonable minimum age and maturity limitation for social media customers.”
While we expect legal challenges to any bill aimed at regulating social media companies, we are confident that this content-neutral bill will pass constitutional muster given the government interests at play.
There are many reasons why this is garbage under the law, but rather than breaking them all down (we’ll wait for judges to explain it in detail), I’ll just point out that the major tell is in the law itself. The definition of a “social media platform” in the bill comes with a long list of exceptions the law does not cover. It includes a few “moral panics of yesteryear” that gullible politicians tried to ban, only for courts to find those bans violated the First Amendment.
It explicitly carves out video games and content that is professionally produced, rather than user-generated.
Remember the moral panics about video games and TV destroying kids’ minds? Yeah. So this child protection bill is quick to say “but we’re not banning that kind of content!” Because whoever drafted the bill recognized that the Supreme Court has already made it clear that politicians can’t do that for video games or TV.
So, instead, they have to pretend that social media content is somehow on a whole different level.
But it’s not. It’s still the government restricting access to content. They’re going to pretend that there’s something unique and different about social media, and that they’re not banning the “content” but rather the “place” and “manner” of accessing that content. Except that’s laughable on its face.
You can see that in the quote above, where Schatz does the fun dance of first saying “it’s okay to ban obscene content to minors” and then pretending that’s the same as restricting access to a bar (it’s not). One is about the content; the other is about a physical place. Social media is all about the content, and it’s not obscene content (which is already an exception to the First Amendment).
And the “parental consent” for tattoos… I mean, what the fuck? Literally four questions above where that answer appears in the FAQ, Schatz insists that his bill has nothing to do with parental consent. And then he tries to defend it by claiming it’s no different from parental consent laws?
The FAQ also claims this:
This bill does not prevent LGBTQ+ youth from accessing relevant resources online and we have worked closely with LGBTQ+ groups while crafting this legislation to ensure that this bill will not negatively impact that community.
I mean, it’s good you talked to some experts, but I note that most of the LGBTQ+ groups I’m aware of are not on your list of “groups supporting the bill” on the very same page. That absence stands out.
And, again, the Surgeon General’s report that you misleadingly cited elsewhere highlights how helpful social media can be to many LGBTQ+ youth. You can’t just say “nah, it won’t harm them” without explaining why all those benefits that have been shown in multiple studies, including the Surgeon General’s report, somehow don’t get impacted.
There’s a lot more, but this is just a terrible bill that would create a mess. And, I’m already hearing from folks in DC that Schatz is trying to get this bill added to the latest Christmas tree of a bill to reauthorize the FAA.
It would be nice if we had politicians looking to deal with the actual challenges facing kids these days, including the lack of mental health support for those who really need it. Instead, we get unconstitutional grandstanding nonsense bills like this.
Everyone associated with this bill should feel ashamed.
The idea of voice-based social networks isn’t new. Clubhouse boomed during the Covid era and floundered soon after. However, Clubhouse’s moment of fame led to several copycats and even spurred the addition of voice to existing platforms like Twitter.
On the face of it, Airchat (Play Store) is yet another take on the old formula. So, why has it gotten so much attention recently?
What is Airchat?
In 2023, Naval Ravikant, the co-founder of AngelList and early-stage investor in companies like Uber and Twitter, introduced the first iteration of Airchat. While I never got around to using it, a quick look around suggests that the original ambition leaned toward a hybrid of Clubhouse and Instagram. It bombed. But Airchat is back, and this time, the approach is decidedly different.
Airchat trades text for voice, but its Twitter and Threads inspirations are obvious.
I managed to score an invite to Airchat earlier this week and have spent a fair amount of time on the network. The newly resurrected app still trades text for voice. Yes, you read that right. You cannot input any text, and all user interactions are via voice. The app is still in beta, and the entire vibe is more early days of Twitter than the current hell hole that X has become. Airchat’s fresh inspirations are pretty obvious, too. Launching the app drops you straight into a home page that looks like a cross between Threads and Twitter. From there, things get interesting quickly.
The app will immediately start playing back the first conversation in the feed alongside replies to the chat. It’s almost like overhearing a water cooler chat. The built-in AI is quick to transcribe voice chats and is astonishingly good at it. In fact, you can see it making corrections in real time. Tagging other users works by calling out their handles after you’ve voiced out your message. And no, there is no text input at all. It’s voice or nothing.
A small user base and a voice-only model make Airchat an intimate network
I’ve observed that the voice-first approach adds a certain intimacy to the conversations I’m having on Airchat. It’s a stark change from the relative anonymity of platforms like Twitter, where your virtual and actual identities need not be connected at all. And I say this as someone who has only now switched to a real-world profile pic on Twitter.
The small user base and voice-based interactions give the app a tight-knit community feel.
The incredibly small and community-like user base also fuels the casual intimacy of conversations on Airchat. It’s hard to gauge the exact number, but based on my experience, the active user base appears to be in the low thousands at best. The app is constantly being updated, and a subreddit-style community feature was launched just the other day. The fitness community, likely to be one of the most popular, currently has just over 1800 users, further signaling how small the user base is on the invite-only app.
However, where Airchat differs from other apps is in the level of engagement. I’ve come across deeply insightful conversations, and adding an element of voice to the mix makes it seem like you’re talking to an actual person instead of a talking head or internet guru.
Of course, there are restrictions. Voice posts have now been limited to 5 minutes, ensuring Airchat doesn’t get misused as a podcasting platform. It probably also helps get people off their virtual soapbox. Elsewhere, the latest update has added video support. I haven’t tried that yet, nor have I seen anyone else give it a whirl.
Is Airchat worth trying out now?
I can safely say that I’ve enjoyed using Airchat quite a bit, but despite that, I have my reservations about its future. A voice-based app faces one fundamental issue: privacy. Clubhouse gained steam when everyone was stuck at home and could talk freely. That’s not the case today. Sure, you can turn down the volume on your phone and read up on the latest conversations on the platform. But unlike Twitter, Instagram, or Reddit, you can’t actively participate in discussions without voice. This rules out sneaking in a conversation between meetings or even in a public space where talking aloud might not be acceptable. Late-night doomscrolling on Twitter without waking up your partner just isn’t a possibility here.
Much as I'm enjoying my time with Airchat, it's hard to see it scale and compete against the existing juggernauts.
Moreover, I don’t see how this limitation can be addressed without changing the app’s fundamental ethos. And there’s no way the limitation won’t restrict the audience and scale. That said, I see nothing wrong with creating an app positioned towards a more niche and engaged audience. Airchat probably won’t be the Amazon of social media, but Etsy has a thriving user base, too.
My last four days with Airchat have been interesting, to say the least. While it’s too early to tell if it’s going to give serious competition to Big Social Media just yet, the new incarnation of Airchat might have enough staying power to become its own niche player.
Can anyone sign up for Airchat?
No, an existing member needs to invite you to the platform.
How many people can I invite to Airchat?
Every user can invite five additional members to Airchat.
Does Airchat notify users when you take a screenshot?
No, Airchat does not notify users when you take a screenshot.
Can you see who views your Airchat profile?
No, you cannot see who viewed your Airchat profile.
Does Airchat have a website?
No, Airchat can only be accessed via the Android and iOS apps.
Who has the most followers on Airchat?
It’s hard to gauge who has the most followers on the platform, but the founder, Naval, likely has the most. At the time of publishing, Naval’s profile reported 12,000 followers.
Legislation that would force ByteDance to divest its ownership stake in TikTok to remain in the US has passed in the House of Representatives again.
The legislation is now headed to the Senate, included in an aid package for Ukraine and Israel.
The Senate is expected to vote on the bill next week, with President Joe Biden expected to sign it when it reaches his desk.
TikTok is now one step closer to facing a ban in the US if parent company ByteDance chooses not to divest its ownership stake. The bill could become law within the next few days.
Despite massive lobbying efforts to keep TikTok in the US, the House of Representatives passed legislation today that would ban the app in the country, according to NBC News. The bill passed after 360 representatives voted in favor of the ban, with 58 saying no.
Now that the bill has passed the House once again, it will need approval from the Senate and the President to become law. It will arrive on the Senate floor next week as part of a crucial aid package for Ukraine and Israel. The measure is expected to pass, with President Joe Biden expected to sign it as soon as it reaches his desk.
This ban measure stems from bipartisan concerns about TikTok’s connection to China. There’s a fear that China could use the app to spread propaganda to the US audience. In addition, there are concerns about the massive amounts of data being collected on the millions of American users.
If the bill becomes law, ByteDance will have two options: sell TikTok or end its presence in the US. It’s likely ByteDance will exhaust all of its options before considering divestiture.
Is TikTok's time finally up? On Saturday, the House of Representatives passed a measure that would require a change in the app's ownership or ban it if that doesn't happen.
Called the Protecting Americans from Foreign Adversary Controlled Applications Act, it's essentially the same divestiture-or-ban bill I wrote about in this newsletter back in March, now tucked into a larger bill (H.R. 8038, the insanely named 21st Century Peace through Strength Act) that deals with everything from fentanyl trafficking to Russian sanctions, Iranian petroleum, Hamas, and boatloads of foreign aid.
The most talked-about part of the Protecting Americans from Foreign Adversary Controlled Applications Act would ban TikTok unless it completely breaks ties with its Chinese parent company, ByteDance, within 270 days.
But the bill goes far beyond TikTok, and could be used to justify a ban on all sorts of popular apps tied to China, Russia, Iran, or any other country that gets deemed a foreign adversary.
Specifically, the bill makes it illegal "to distribute, maintain, or update (or enable the distribution, maintenance, or updating of) a foreign adversary controlled application." And the bill's definition of "foreign adversary controlled application" is really broad.
It specifically defines TikTok, ByteDance, and subsidiaries or successors thereof as foreign adversary controlled applications.
The definition would also apply to an array of websites, apps, and "augmented or immersive technology" (with a focus on large social media entities), if they are headquartered in, principally based in, or organized under the laws of a foreign adversary country or if any person or entity with at least a 20 percent stake is based there.
And it would grant the president broad power to determine who meets this definition, opening the measure up to all sorts of potential abuse.
There are multiple ways in which this legislation likely violates the Constitution.
The most obvious constitutional problem is the First Amendment. The bill suppresses the free speech rights of Americans who post to TikTok and of those who consume TikTok content.
It may also amount to a bill of attainder—a law punishing a specific person or entity, without a trial—and those are unconstitutional.
And it may also violate the 5th Amendment, as Sen. Rand Paul (R–Ky.) noted in a Reason article last week.
Paul thinks the Supreme Court "will ultimately rule it unconstitutional because it would violate the First Amendment rights of over 100 million Americans who use TikTok to express themselves," and "rule that the forced sale violates the Fifth Amendment. Under the Constitution, the government cannot take your property without accusing and convicting you of a crime—in short, without due process. Since Americans are part of TikTok's ownership, they will eventually get their day in court."
Paul's point brings up an important—and often overlooked—factor in all of this: No one has produced evidence of any specific legal infractions committed by TikTok, let alone proven such offenses took place. There's a ton of speculation about what TikTok could be doing, but that's it. A lot of people seem sure that TikTok is a tool of the Chinese Communist Party and you're a fool if you think otherwise. And maybe it is! But that still doesn't mean we can simply sanction the company with no due process, as Paul points out.
Speculation about what the app’s ties to China mean may be a good reason for certain people to approach TikTok with caution. But it cannot justify legal action against TikTok.
More Sex & Tech News
• The coddling of the American parent: "Jonathan Haidt's new book…blames youth mental health issues on social media in a way that's easy, wrong, and dangerous," Mike Masnick writes in The Daily Beast.
Banning TikTok for real this time: On Saturday, the House passed bills that will send large sums of aid to Israel ($26 billion), Ukraine ($60 billion), and Taiwan ($8 billion), as well as a long-gestating measure to force the divestiture of the video app TikTok.
Now the legislation will need to be approved by the Senate and signed into law by President Joe Biden.
The TikTok ban will probably be challenged. "This is an unprecedented deal worked out between the Republican Speaker and President Biden," declared Michael Beckerman, TikTok's head of public policy, in a memo to the company's American staff. "At the stage that the bill is signed, we will move to the courts for a legal challenge."
China's internet regulator/censor, the Cyberspace Administration, has taken note of the movement on the TikTok bill, which would either ban the Chinese-owned company from operating in the U.S. or force sale of the app to an American owner within a tight timeframe. Forcing divestiture presents a few problems, namely that the proprietary algorithm and source code would likely fail to convey with the purchase, making the app…practically useless.
Not to be outdone by American lawmakers, China's government on Friday ordered that the Meta-owned WhatsApp and Threads be pulled from Apple's app store over "national security concerns" (of course). "A person briefed on the situation said the Chinese government had found content on WhatsApp and Threads about China's president, Xi Jinping, that was inflammatory and violated the country's cybersecurity laws," reportsThe New York Times. WhatsApp is used minimally compared to WeChat (owned by Chinese company Tencent). But for Apple—which anticipated this to some degree, and already started shifting its supply chain overseas after having been quite conciliatory to the Chinese Communist Party for many years—to be caught in the crosshairs is a harbinger of more to come.
This type of justification can always be found if one looks hard enough—and China's censors certainly do. But beware the coming internet wars, and the use of the American TikTok ban as justification for all manner of crackdowns.
Free and open internet? "A Russian opposition blogger, Aleksandr Gorbunov, posted on social media last month that Russia could use the move to shut down services like YouTube," reports The New York Times' David McCabe. "I don't think the obvious thing needs to be stated out loud, which is that when Russia blocks YouTube, they'll justify it with precisely this decision of the United States," said Gorbunov.
Xi's regime in China and Vladimir Putin's regime in Russia, of course, feel quite comfortable taking whatever cheap shots they can at U.S. lawmakers; if they want to crack down on internet freedoms, they can and will, no excuse necessary. But the TikTok bill is certainly escalatory, and it undermines America's longstanding rhetorical commitment to a free and open internet—or the internet as a "global free-trade zone," in the words of former President Bill Clinton.
Scenes from New York: Today is my birthday! And on Saturday, I went out with friends (including a grand total of three babies, who were shockingly well-behaved) to eat crab in Chinatown. After that we went to an event in a basement on East Broadway where the books attempted to teach my toddler that rules are for breaking! Marginally better than Ibram X. Kendi's children's books, but not by much.
QUICK HITS
New York just passed the Local Journalism Sustainability Act, which sets aside $30 million annually to incentivize hiring new local journalists. "The late addition to the $237 billion budget allows eligible outlets to receive a 50 percent refundable credit for the first $50,000 of a journalist's salary, up to a total of $300,000 per outlet," reports Politico. I think it would be fun to troll the legislators by being one of the beneficiaries of this program and then choosing to be the most aggressive muckraker that ever was, scavenging through their records, making them rue the day they were born, etc.
Tubal ligation and vasectomy trends since Dobbs. Will that Supreme Court decision, which led to abortion being returned to the states (and many states choosing to institute crackdowns), end up actually leading to a lower fertility rate?
Children in elementary schools all over Poland have been freed from the shackles of homework.
Protests at Columbia prompted an Orthodox rabbi on campus to send this message to students:
In response to "horrific" scenes of antisemitic harassment at and around campus, the Orthodox Rabbi at Columbia/Barnard sent a WhatsApp message to more than 290+ Jewish students this morning recommending that they go home until it's safe again for them on campus: pic.twitter.com/uqAntEICLv
The Cass review—a four-year review of the evidence on child gender transitions that has led the U.K.'s National Health Service to substantially alter its guidance—isn't important enough for Scientific American to cover, apparently:
Scientific American doesn't cover the Cass Review -- "cass report" and "cass review" net zero Google hits -- but instead, the week after its release, it publishes an interview with an activist who believes kids should have full medical automony and that interpreting scientific… https://t.co/C7C19zxYKT
The censors who abound in Congress will likely vote to ban TikTok or force a change in ownership. It will likely soon be law. I think the Supreme Court will ultimately rule it unconstitutional, because it would violate the First Amendment rights of over 100 million Americans who use TikTok to express themselves.
In addition, I believe the Court will rule that the forced sale violates the Fifth Amendment. Under the Constitution, the government cannot take your property without accusing and convicting you of a crime—in short, without due process. Since Americans are part of TikTok's ownership, they will eventually get their day in court.
The Court could also conclude that naming and forcing the sale of a specific company amounts to a bill of attainder, legislation that targets a single entity.
These are three significant constitutional arguments against Congress' forced sale/ban legislation. In fact, three different federal courts have already invalidated legislative and executive attempts to ban TikTok.
If the damage to one company weren't enough, there is a very real danger this ham-fisted assault on TikTok may actually give the government the power to force the sale of other companies.
Take, for example, Apple. As The New York Times reported in 2021, "In response to a 2017 Chinese law, Apple agreed to move its Chinese customers' data to China and onto computers owned and run by a Chinese state-owned company."
Sound familiar? The legislators who want to censor and/or ban TikTok point to this same law to argue that TikTok could (someday) be commanded to turn over American users' data to the Chinese government.
Note that more careful speakers don't allege that this has happened, but rather that it might. The banners of TikTok don't want to be troubled by anything inconvenient like proving in a court of law that this is occurring. No, the allegation is enough for them to believe they have the right to force the sale of or ban TikTok.
But back to Apple. It's not theoretical that it might turn over data to the Chinese Communist government. It already has (albeit Chinese users' information). Nevertheless, it could be argued that Apple, by its actions, falls under the TikTok bill's language forcing the sale of an entity under the influence of a foreign adversary.
(Now, of course, I think such legislation is absurdly wrong and would never want it applied to Apple, but I worry the language is vague enough to apply to many entities.)
As The New York Times explains: "Chinese government workers physically control and operate the data center. Apple agreed to store the digital keys that unlock its Chinese customers' information in those data centers. And Apple abandoned the encryption technology it uses in other data centers after China wouldn't allow it."
This sounds exactly like what the TikTok censors describe in their bill, except so far as we know, only Americans who live in China might be affected by Apple's adherence to China's law. TikTok actually has spent a billion dollars agreeing to house all American data with Oracle in Texas.
Are there other companies that might be affected by the TikTok ban? Commentary by Kash Patel in The Washington Times argues that Temu, an online marketplace operated by a Chinese company, is even worse than TikTok and should be banned. He makes the argument that Temu, in contrast with TikTok, "does not employ any data security personnel in the United States."
And what of the global publishing enterprise Springer Nature? It has admitted that it censors its scientific articles at the request of the Chinese Communist government. Will the TikTok bill force its sale as well?
Before Congress rushes to begin banning and punishing every international company that does business in China, perhaps it should pause, take a breath, and ponder the ramifications of rapid legislative isolationism with regard to China.
The impulse to populism is giving birth to the abandonment of international trade. I fear, in the hysteria of the moment, that ending trade between China and the U.S. will not only cost American consumers dearly but ultimately lead to more tension and perhaps even war.
No one in Congress has more strongly condemned the historical famines and genocides of Communist China. I wrote a book, The Case Against Socialism, describing the horrors and inevitability of state-sponsored violence in the pursuit of complete socialism. I just recently wrote another book called Deception, condemning Communist China for covering up the Wuhan lab origins of COVID-19.
And yet, even with those searing critiques, I believe the isolationism of the China hysterics is a mistake and will not end well if Congress insists on going down this path.
The Supreme Court is currently considering two cases in which social media firms challenge the constitutionality of Texas and Florida laws requiring them to host content the platforms would prefer to exclude. The issue before the Court is whether these laws violate the Free Speech Clause of the First Amendment. But, in a recent Reason article, Ethan Blevins of the Pacific Legal Foundation—one of the nation's leading public interest law firms litigating takings cases—argues they also violate the Takings Clause of the Fifth Amendment:
While pundits and lawyers cross swords over free speech on social media, a quieter yet critically important principle is being ignored: property rights. In addition to violating the First Amendment, the rush to force social media platforms to host content violates the Fifth Amendment as well—in particular, the Takings Clause.
The Takings Clause says that government shall not take private property "for public use, without just compensation." While many are familiar with the clause's importance when the government wants to seize land through eminent domain, courts have also applied this right as a limit on the ability to overregulate property. For example, if a beach town requires the owners of oceanfront properties to let the public walk across their yards to get to the beach, this would require compensation, because the regulation effectively takes the property owner's right to exclude, a cornerstone of ownership.
Likewise, the Takings Clause shields social media platforms from regulations requiring they host content or users they want to exclude. These platforms have as much right to eject unwelcome digital interlopers as homeowners do to stop the government from using their yard as a public right of way—unless they are given just compensation. If states intend to force social media apps to host users and content against their wishes, they will have to pay for it….
Both state and federal laws already treat online platforms as property. All states criminalize unauthorized access to computer systems, often expressly framing these crimes as trespass….
Laws that mandate online platforms to accept certain content or users effectively invade private property. And the courts have established that when the government grants third parties access to private property without the owner's consent, that requires compensation. The federal government had to pay a private marina owner in Hawaii before it could be compelled to allow public boating access. Similarly, the Supreme Court ruled just a few years ago that California had to compensate employers after it forced them to let union representatives access their property.
I very much agree, and previously made a similar argument here:
The Takings Clause bars government from taking "private property" without paying "just compensation." In its 2021 ruling in Cedar Point Nursery v. Hassid, the Supreme Court ruled (correctly, in my view) that even a temporary government-mandated "physical occupation" or invasion of private property counts as a per se taking….
The Florida and Texas social media laws are also blatant attacks on the right to exclude. No one doubts that the Twitter site and its various features are Twitter's private property. And the whole point of the Florida and Texas laws is to force Twitter and other social media firms to grant access to users and content the firms would prefer to exclude, particularly various right-wing users. Just as the plaintiffs in Cedar Point wanted to bar union organizers from their land, so Twitter wishes to bar some content it finds abhorrent (or that might offend or annoy other users)….
To be sure, there are obvious differences between virtual property, such as a website, and more conventional physical property, like that involved in the Cedar Point case. But the Takings Clause nonetheless applies to both. If Texas decided to seize the Twitter site, bar current users, and instead fill it with content praising the state government's policies, that would pretty obviously be a taking, much like if California decided to seize the Cedar Point tree nursery's land. In the same way, requiring Twitter to host unwanted content qualifies as an occupation of its property, no less than requiring a landowner to give access to unwanted entrants…
One could argue that forcing a website owner to host unwanted users isn't really a "physical occupation," because the property is virtual in nature. But websites, including the big social media firms, use physical server space. Other things equal, a site with more user-generated content requires more such space than one with less. Even aside from the connection to physical infrastructure, it seems to me that occupation of virtual "real estate" is analogous to occupation of land. Both are valuable forms of private property from which the owner generally has a right to exclude.
Who is Katherine Maher, and what does she really believe? The embattled NPR CEO had the opportunity on Wednesday to set the record straight regarding her views on intellectual diversity, "white silence," and whether Hillary Clinton (of all people) committed nonbinary erasure when she used the phrase "boys and girls."
Unfortunately, during a recent appearance at the Carnegie Endowment for International Peace to discuss the journalism industry's war on disinformation, she repeatedly declined to give straight answers—instead offering up little more than platitudes about workplace best practices. I attended the event and submitted questions that the organizers effectively ignored.
That's a shame, because Maher's views certainly require clarity—especially now that longtime editor Uri Berliner has resigned from NPR and called out the publicly funded radio channel's CEO. In his parting statement, Berliner slammed Maher, saying that her "divisive views confirm the very problems" that he wrote about in his much-discussed article for Bari Weiss' Free Press.
Berliner's tell-all mostly took aim at specific examples of NPR being led astray by its deference to progressive shibboleths: the Hunter Biden laptop, COVID-19, etc. He implored his new boss—Maher's tenure as CEO had only begun about four weeks ago—to correct NPR's lack of viewpoint diversity. That's probably a tall order, since Maher had once tweeted that ideological diversity is "often a dog whistle for anti-feminist, anti-POC stories."
That Silicon Valley v Russia thread was pretty funny — until it got onto ideological diversity. In case it's not evident, in these parts that's often a dog whistle for anti-feminist, anti-POC stories about meritocracy. Maybe's not what the author meant. But idk, maybe it is?
Indeed, Maher's past tweets would be hard to distinguish from satire if one randomly stumbled across them. Her earnest, uncompromising wokeness—land acknowledgments, condemnations of Western holidays, and so on—sounds like they were written by parody accounts such as The Babylon Bee or Titania McGrath. In her 2022 TED Talk, she faulted Wikipedia, where she worked at the time, for being a Eurocentric written reference that fails to take into account the oral histories of other peoples. More seriously, she seems to view the First Amendment as an inconvenient barrier for tackling "bad information" and "influence peddlers" online.
But interestingly, she did not reiterate any of these views during her appearance at the Carnegie Endowment on Wednesday. On the contrary, she gave entirely nonspecific answers about diversity in the newsroom. In fact, she barely said anything concrete about the subject of the discussion: disinformation.
When asked by event organizer Jon Bateman, a Carnegie senior fellow, to address the Berliner controversy, she said that she had never met him and was not responsible for the editorial policies of the newsroom.
"The newsroom is entirely independent," she said. "My responsibility is to ensure that we have the resources to do this work. We have a mandate to serve all Americans."
She repeated these lines over and over again. When asked more specifically about whether she thinks NPR is succeeding or failing at making different viewpoints welcome, she pointed to the audience and said that her mission was to expand the outlet's reach.
"Are we growing our audiences?" she asked. "That is so much more representative of how we are doing our job, because I am not in the newsroom."
Many of the people who are in the newsroom clearly had it out for Berliner. In a letter to Maher, signed by 50 NPR staffers, they called on her to make use of NPR's "DEI accountability committee" to silence internal criticism. Does Maher believe that a diversity, equity, and inclusion task force should vigorously root out heresy?
At the event, Maher did not directly take audience questions. Instead, audience members were asked to write out their questions and submit them via QR code. I asked her whether she stood by her previous tweet that maligned the concept of ideological diversity, as well as the other tweets that had recently made the news. Frustratingly, she offered no further clarity on these subjects.
This Week on Free Media
In the latest episode of our new media criticism show for Reason, Amber Duke and I discussed the Berliner situation in detail. We also reacted to a Bill Maher monologue on problems with liberal governance, tackled MSNBC's contempt for laundry-related liberty, and chided Sen. Tom Cotton (R–Ark.) for encouraging drivers to throw in-the-way protesters off bridges.
This Week on Rising
Briahna Joy Gray and I argued about the Berliner situation—and much else—on Rising this week. Watch below.
Worth Watching (Follow-Up)
I have finally finished Netflix's 3 Body Problem, which went off the rails a bit in its last few episodes. I still highly recommend the fifth episode, "Judgment Day," for including one of the most haunting television sequences of the year thus far.
But I have questions about the aliens. (Spoilers to follow.)
In 3 Body Problem, a group of scientists must prepare Earth for war against the San Ti, an advanced alien race that will arrive in 400 years. The San Ti have sent advanced technology to Earth that allows them to closely monitor humans and co-opt technology—screens, phones, presumably weapons systems—for their own use. We are led to believe that the San Ti want to kill humans because unlike them, we are liars. Eccentric oil CEO Mike Evans (Jonathan Pryce), a human fifth columnist who communicates with the San Ti, appears to doom our species when he tells the aliens the story of Little Red Riding Hood. The San Ti are so offended by the Big Bad Wolf's deceptions that they decide earthlings can't ever be trusted, and should instead be destroyed. "We cannot coexist with liars," says the San Ti's emissary. "We are afraid of you."
The scene in which Evans realizes what he has done makes for gripping television but… I'm sorry, it's nonsensical. Clearly the San Ti already understand deception, misdirection, and the difference between a made-up story and what's really happening. After all, they were the ones who equipped Evans and his collaborators with the virtual reality video game technology they use to recruit more members. The game does not literally depict the fate of the San Ti's home world; it uses metaphor, exaggeration, and human imagery to convey San Ti history. It doesn't make any sense that they would be utterly flummoxed by the Big Bad Wolf.
Then, in the season finale, the San Ti use trickery to taunt the human leader of the resistance. They are the liars, but no one ever calls them out on this.
Senator Maggie Hassan wrote to Meta and other platforms asking what they're doing to protect girls after The New York Times found some parents posting suggestive images of their daughters online.
In the past few months, we’ve seen a deepfake robocall of Joe Biden encouraging New Hampshire voters to “save your vote for the November election” and a fake endorsement of Donald Trump from Taylor Swift. It’s clear that 2024 will mark the first “AI election” in United States history.
With many advocates calling for safeguards against AI’s potential harms to our democracy, Meta (the parent company of Facebook and Instagram) proudly announced last month that it will label AI-generated content that was created using the most popular generative AI tools. The company said it’s “building industry-leading tools that can identify invisible markers at scale—specifically, the ‘AI generated’ information in the C2PA and IPTC technical standards.”
Unfortunately, this approach will not let social media companies solve the problem of deepfakes this year. Indeed, this new effort will do very little to tackle the problem of AI-generated material polluting the election environment.
The most obvious weakness is that Meta’s system will work only if the bad actors creating deepfakes use tools that already put watermarks—that is, hidden or visible information about the origin of digital content—into their images. Most unsecured “open-source” generative AI tools don’t produce watermarks at all. (We use the term unsecured and put “open-source” in quotes to denote that many such tools don’t meet traditional definitions of open-source software, but still pose a threat because their underlying code or model weights have been made publicly available.) If new versions of these unsecured tools are released that do contain watermarks, the old tools will still be available and able to produce watermark-free content, including personalized and highly persuasive disinformation and nonconsensual deepfake pornography.
We are also concerned that bad actors can easily circumvent Meta’s labeling regimen even if they are using the AI tools that Meta says will be covered, which include products from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock. Given that it takes about 2 seconds to remove a watermark from an image produced using the current C2PA watermarking standard that these companies have implemented, Meta’s promise to label AI-generated images falls flat.
When the authors uploaded an image they’d generated to a website that checks for watermarks, the site correctly stated that it was a synthetic image generated by an OpenAI tool. (Image: IEEE Spectrum)
We know this because we were able to easily remove the watermarks Meta claims it will detect—and neither of us is an engineer. Nor did we have to write a single line of code or install any software.
First, we generated an image with OpenAI’s DALL-E 3. Then, to see if the watermark worked, we uploaded the image to the C2PA content credentials verification website. A simple and elegant interface showed us that this image was indeed made with OpenAI’s DALL-E 3. How did we then remove the watermark? By taking a screenshot. When we uploaded the screenshot to the same verification website, the verification site found no evidence that the image had been generated by AI. The same process worked when we made an image with Meta’s AI image generator and took a screenshot of it—and uploaded it to a website that detects the IPTC metadata that contains Meta’s AI “watermark.”
However, when the authors took a screenshot of the image and uploaded that screenshot to the same verification site, the site found no watermark and therefore no evidence that the image was AI generated. (Image: IEEE Spectrum)
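You can see why this works with a few lines of code. Here’s a minimal sketch in Python using the Pillow imaging library (the filenames are hypothetical, and this is a rough illustration of the principle, not of any specific C2PA implementation): provenance metadata travels in the file container alongside the pixels, so anything that copies only the pixels, as a screenshot does, yields a file with no provenance at all.

```python
# Minimal sketch: metadata-based provenance doesn't survive a pixel copy.
# Assumes Pillow is installed; "dalle3_output.png" is a hypothetical file.
from PIL import Image

original = Image.open("dalle3_output.png")
print(original.info)  # container-level metadata (text chunks, XMP, etc.) lives here

# Simulate a screenshot: build a new image from the raw pixel values only.
pixels_only = Image.new(original.mode, original.size)
pixels_only.putdata(list(original.getdata()))
pixels_only.save("screenshot.png")  # written without the original metadata

print(Image.open("screenshot.png").info)  # the provenance metadata is gone
```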
Is there a better way to identify AI-generated content?
Meta’s announcement states that it’s “working hard to develop classifiers that can help...to automatically detect AI-generated content, even if the content lacks invisible markers.” It’s nice that the company is working on it, but until it succeeds and shares this technology with the entire industry, we will be stuck wondering whether anything we see or hear online is real.
For a more immediate solution, the industry could adopt maximally indelible watermarks—meaning watermarks that are as difficult to remove as possible.
Today’s imperfect watermarks typically attach information to a file in the form of metadata. For maximally indelible watermarks to offer an improvement, they need to hide information imperceptibly in the actual pixels of images, the waveforms of audio (Google DeepMind claims to have done this with its proprietary SynthID watermark) or through slightly modified word frequency patterns in AI-generated text. We use the term “maximally” to acknowledge that there may never be a perfectly indelible watermark. This is not a problem just with watermarks though. The celebrated security expert Bruce Schneier notes that “computer security is not a solvable problem…. Security has always been an arms race, and always will be.”
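To see what “hiding information imperceptibly in the actual pixels” means, here’s a toy least-significant-bit sketch in Python with Pillow. To be clear, this naive scheme is nowhere near maximally indelible (a screenshot or JPEG re-compression destroys it, which is exactly the weakness that schemes like SynthID are engineered against); it only illustrates the difference between marking pixels and tagging metadata.

```python
# Toy LSB watermark: hides one bit per pixel in the red channel.
# Illustrative only; real pixel-domain watermarks are designed to
# survive screenshots, crops, and re-encoding. This one is not.
from PIL import Image

def embed_bits(img: Image.Image, bits: str) -> Image.Image:
    out = img.convert("RGB")  # returns a copy we can safely modify
    px = out.load()
    for i, bit in enumerate(bits):
        x, y = i % out.width, i // out.width
        r, g, b = px[x, y]
        px[x, y] = ((r & ~1) | int(bit), g, b)  # overwrite the red LSB
    return out

def extract_bits(img: Image.Image, n: int) -> str:
    rgb = img.convert("RGB")
    px = rgb.load()
    return "".join(str(px[i % rgb.width, i // rgb.width][0] & 1) for i in range(n))

marked = embed_bits(Image.new("RGB", (64, 64), "white"), "1011")
print(extract_bits(marked, 4))  # prints "1011"
```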
In metaphorical terms, it’s instructive to consider automobile safety. No car manufacturer has ever produced a car that cannot crash. Yet that hasn’t stopped regulators from implementing comprehensive safety standards that require seatbelts, airbags, and backup cameras on cars. If we waited for safety technologies to be perfected before requiring implementation of the best available options, we would be much worse off in many domains.
There’s increasing political momentum to tackle deepfakes. Fifteen of the biggest AI companies—including almost every one mentioned in this article—signed on to the White House Voluntary AI Commitments last year, which included pledges to “develop robust mechanisms, including provenance and/or watermarking systems for audio or visual content” and to “develop tools or APIs to determine if a particular piece of content was created with their system.” Unfortunately, the White House did not set any timeline for the voluntary commitments.
Then, in October, the White House, in its AI Executive Order, defined AI watermarking as “the act of embedding information, which is typically difficult to remove, into outputs created by AI—including into outputs such as photos, videos, audio clips, or text—for the purposes of verifying the authenticity of the output or the identity or characteristics of its provenance, modifications, or conveyance.”
Next, at the Munich Security Conference on 16 February, a group of 20 tech companies (half of which had previously signed the voluntary commitments) signed onto a new “Tech Accord to Combat Deceptive Use of AI in 2024 Elections.” Without making any concrete commitments or providing any timelines, the accord offers a vague intention to implement some form of watermarking or content-provenance efforts. Although a standard is not specified, the accord lists both C2PA and SynthID as examples of technologies that could be adopted.
Could regulations help?
We’ve seen examples of robust pushback against deepfakes. Following the AI-generated Biden robocalls, the New Hampshire Department of Justice launched an investigation in coordination with state and federal partners, including a bipartisan task force made up of all 50 state attorneys general and the Federal Communications Commission. Meanwhile, in early February the FCC clarified that calls using voice-generation AI will be considered artificial and subject to restrictions under existing laws regulating robocalls.
Unfortunately, we don’t have laws to force action by either AI developers or social media companies. Congress and the states should mandate that all generative AI products embed maximally indelible watermarks in their image, audio, video, and text content using state-of-the-art technology. They should also address risks from unsecured “open-source” systems that can either have their watermarking functionality disabled or be used to remove watermarks from other content. Furthermore, any company that makes a generative AI tool should be encouraged to release a detector that can identify, with the highest accuracy possible, any content it produces. This proposal shouldn’t be controversial, as its rough outlines have already been agreed to by the signers of the voluntary commitments and the recent elections accord.
Standards organizations like C2PA, the National Institute of Standards and Technology, and the International Organization for Standardization should also move faster to build consensus and release standards for maximally indelible watermarks and content labeling in preparation for laws requiring these technologies. Google, as C2PA’s newest steering committee member, should also quickly move to open up its seemingly best-in-class SynthID watermarking technology to all members for testing.
Misinformation and voter deception are nothing new in elections. But AI is accelerating existing threats to our already fragile democracy. Congress must also consider what steps it can take to protect our elections more generally from those who are seeking to undermine them. That should include some basic steps, such as passing the Deceptive Practices and Voter Intimidation Act, which would make it illegal to knowingly lie to voters about the time, place, and manner of elections with the intent of preventing them from voting in the period before a federal election.
Congress has been woefully slow to take up comprehensive democracy reform in the face of recent shocks. The potential amplification of these shocks through abuse of AI ought to be enough to finally get lawmakers to act.
With Facebook being one of the oldest social media platforms around, we are all bound to find some old posts that embarrass us. Therefore, you might want to turn over a new leaf and remove all your posts, especially when applying for a new job. Here’s how to delete all your Facebook posts without deleting your account, allowing you to start fresh. Additionally, before deleting these posts, you may be interested in downloading them using a third-party app.
How to delete all your Facebook posts on desktop
If you’re using a desktop computer, open your Facebook profile page and click the three-dot button on the right-hand side to open menu options.
From there, click on “Activity log.”
On the next page, navigate to Your Activity Across Facebook > Posts > Your Posts in the left-hand menu. You can specify the types of posts you want to see, such as photos, videos, or posts from other apps.
Select All and then Recycle Bin to delete all your posts. Please note that if you have a long history of using Facebook, you may need to scroll down multiple times for all your posts to load.
Alternatively, you can choose Activity You’re Tagged In and then click All followed by Remove Tags to remove any embarrassing posts in which your friends or family have tagged you from your profile.
How to solve the “Technical Error” message when deleting posts on Facebook
If you encounter a technical error using the Activity Log method, there is another way to delete Facebook posts via the Manage Posts function.
From your profile page, click on the Manage Posts button.
Click Select All then Next.
Tick the Delete Posts option.
Click Done.
How to delete all your Facebook posts on mobile
The process is pretty much the same using the mobile app.
First, navigate to your profile and tap the three-dot icon on the left.
From there, select Activity Log from your Profile Settings.
On the next page, select Your Activity Across Facebook to sort through the things you’ve already posted, such as your own posts on your own timeline, your posts on other people’s timelines, check-ins, and other hidden posts.
Tap All and then the Recycle Bin button at the bottom right of the screen to delete all posts or tags.
Deleting all your Facebook posts is as simple as that. It’s almost like you are a whole new person.
FAQs
What is Archive on Facebook?
Archiving a post is a way to hide it from your profile while keeping its likes and comments. In other words, only you will be able to see a post you have archived. You can archive photos by clicking on the Archive button instead of the Recycle Bin button on the Manage Posts page.
Can I delete a shared album on Facebook?
Yes, photos in an album you created and shared with others can be archived or deleted. However, for albums other people created and shared with you as a contributor, your only option is to leave the album. Any photos you’ve shared in that album will remain there unless you delete them from your account.
Can I delete posts I am tagged in on Facebook?
No, posts can only be archived or deleted by the person who created them. However, you can remove the tag, so the post doesn’t appear on your profile.
X alleges that the Center for Countering Digital Hate cost it millions by showing that hate speech was spreading on the platform. In a hearing Thursday, a federal judge sounded skeptical of those claims.
Earlier this week, the official Runescape Twitter/X account got banned. It’s since returned, but the reason behind its temporary ban was that someone at the social media company thought the account was created by an eight-year-old kid, which would be a violation of Twitter’s rules.
If you use X (formerly known as Twitter), you've seen the "potentially sensitive content" warnings on some posts. It's not surprising on a platform used worldwide, where anyone can post anonymously from a budget mobile phone. X lets you choose what you want to see on the platform, whether it's adult content or nudity.
Pastor Brian Houston — famous former head of the Hillsong Christian megachurch in Australia — is the victim of a Twitter hack!
Here are the facts:
On February 20, 2024 at 11:41 pm, a devious hacker broke into Pastor Houston's account and posted, "Ladies and girls kissing"
Sixteen minutes later, Houston was made aware of the breach and he posted, "I think my twitter may have been hacked"
While some uncharitable folks are saying the kindly Pastor mistakenly thought he was typing "Ladies and girls kissing" into a web browser with the safe search filter disabled, that is not the case.