
Court Quickly Dismisses Copyright Suit Against Comedy Central Over Non-Protectable Elements

By: Dark Helmet
June 11, 2024 at 04:58

The idea/expression dichotomy strikes again! There is a misconception among some as to how copyright works, specifically in terms of what is protected under copyright and what is not. This has been distilled down to the aforementioned dichotomy, where general ideas do not enjoy the protection of copyright, whereas specific expressions do. So, an anthropomorphic mouse-hero that wears a cape and can fly is an idea that is not protectable, while the specific depiction of Mighty Mouse (just to age myself) is protectable.

I don’t expect every member of the public, or even every content creator out there, to know this sort of thing. I do, however, expect the lawyers they contact in an attempt to file loser lawsuits to know it. Sadly, these suits make their way into the courts far too often and are nearly as often dismissed on summary judgment. Such seems to be the case with Daniel Kassel’s lawsuit against Comedy Central over two works that feature a talking manatee that has life problems and a human girlfriend.

The work at the center of the lawsuit is Happily Everglades After, a storyboard animatic posted to YouTube by author Daniel Kassel. The work, originally titled Jukebox Manatee, tells the story of a manatee protagonist with a laid-back attitude and a human girlfriend who, in the words of the complaint, “suffers misfortunes as a commentary on life and its travails with irony and black humor.” The author finalized the work, after “brainstorming” with fellow students at the Pratt Institute, in 2018.

But the author did not take a laid-back approach to the release of Loafy, a one-season adult animated comedy premiering on the Comedy Central digital platforms in August 2020. The show, created by former Saturday Night Live cast member Bobby Moynihan and made by production company Cartuna, is (according to its publicity materials) “a semi-improvised animated series about a weed-dealing manatee who spends his days getting high and lazing around his tank at the Central Park Zoo.” Kevin Smith, Jason Mewes, Gina Gershon, and Tom Green, among others, provide voiceovers for the series.

As the court noted in its ruling in favor of the defense on summary judgment, some of those specific elements and many, many more are at the heart of why this failed as a matter of copyright infringement. I imagine that part of the reason this action was brought in the first place is that a couple of people who worked on Loafy at Cartuna were classmates of Kassel earlier in life. Perhaps he thought that created some link or served as further evidence of infringement. In fact, Comedy Central didn’t even bother arguing that it or the production company lacked access to Kassel’s work.

But as the court went on to say, and I paraphrase: so fucking what? The characters in Kassel’s complaint are not described in protectable elements, the court ruled. A talking manatee with a human girlfriend that talks about his life? That’s an idea, not a specific expression. Hell, Kassel’s work is a four-minute cartoon short, whereas Loafy is an episodic series. The two are substantially different in themes and specific expression in a number of ways, up to and including the very nature and length of each work.

And even if the author’s manatee character were unique enough to be protectable, the court found, the “total concept and feel of the two works” was not substantially similar in any event. The original animated short was a four-minute work drawn in stop-motion animation, set in the Everglades, featuring a manatee character who “appears optimistic despite being run over and physically scarred by humans in a boat.” The Comedy Central work was an eight-part animated series, set in a dilapidated zoo near Central Park, featuring “a foul-mouthed and crude drug dealer” whose humor was “designed to leave the viewer laughing.” These and other differences in concept and feel made the author’s claim implausible even on the pleadings.

Kassel brought two other claims, one for unfair competition and another for deceptive acts and practices. The court hand-waved away both. The unfair competition claim failed because it was, again, about ideas and concepts, rather than a specific “tangible good.” The deceptive practices claim failed as well, essentially because it just reiterated the claim of copyright infringement.

And so this lawsuit goes in the waste bin, wasting the time of everyone involved and, presumably, some money that Kassel would have spent on his lawyers. Lawyers who really should either be giving their client better advice on claims like this, or refusing to take such loser cases on.


Seventh Circuit Shrugs, Says The Odor Of Legal Weed Can Justify A Warrantless Vehicle Search

By: Tim Cushing
June 11, 2024 at 00:29

“Odor of marijuana” remains — even in an era of widespread legalization — a favorite method of justifying warrantless searches. It’s an odor, so it can’t be caught on cameras, which are becoming far more prevalent, whether they’re mounted to cop cars, pinned to officers’ chests, or carried by passersby.

Any claim an odor was detected pits the officer’s word against the criminal defendant’s. Even though this is a nation where innocence is supposed to be presumed, the reality of the criminal justice system is that everyone from the cops to the court to the jury tends to view people merely accused of crimes as guilty.

But this equation changed a bit as states and cities continued to legalize weed possession. Once that happened, the claim that the “odor” of marijuana had been “detected” only meant the cops had managed to detect the odor of a legal substance. The same goes for their dogs. Drug dogs are considered the pièce de résistance in warrantless roadside searches — an odor “detected” by a four-legged police officer that’s completely incapable of being cross-examined during a jury trial.

As legalization spreads, courts have responded. There have been a handful of decisions handed down that clearly indicate what the future holds: cops and dog cops that smell weed where weed is legal don’t have much legal footing when it comes to warrantless searches. Observing something legal has never been — and will never be — justification for a search, much less reasonable suspicion to extend a stop.

The present has yet to arrive in the Seventh Circuit. Detecting the odor of a legal substance is still considered to be a permission slip for a warrantless search. And that’s only because there’s one weird stipulation in the law governing legal marijuana possession in Illinois.

In this case, a traffic stop led to the “detection” of the odor of marijuana. That led to the driver fleeing the traffic stop and dropping a gun he was carrying. And that led to felon-in-possession charges for Prentiss Jackson, who has just seen his motion to suppress this evidence rejected by the Seventh Circuit Appeals Court.

Here’s how this all started, as recounted in the appeals court decision [PDF]:

The officer smelled the odor of unburnt marijuana emanating from the car. He knew the odor came from inside the car, as he had not smelled it before he approached the vehicle. During their conversation about the license and registration, the officer told Jackson he smelled “a little bit of weed” and asked if Jackson and the passenger had been smoking. Jackson said he had, but that was earlier in the day, and he had not smoked inside the car.

Through the officer’s training, he knew the most common signs of impairment for driving under the influence were the odor of marijuana or alcohol and speech issues. He was also taught to look for traffic violations. Concerned that Jackson might be driving under the influence because of the head and taillight violation, the odor of marijuana, and Jackson’s admission that he had smoked earlier, the officer asked Jackson whether he was “safe to drive home.” Jackson said he was. His speech was not slurred during the interaction, and his responses were appropriate.

Now, I’m not a federal judge. (And probably shouldn’t be one, for several reasons.) But I think I would have immediately called bullshit here. According to the officer’s own statements, his “training” led him to believe things like unburnt marijuana and unlit headlights/taillights are indicators of “driving under the influence.” I would have asked the officer to dig deep into the reserves of his “training” to explain these assertions. The only one that fits is Jackson’s admission he had smoked “earlier.” And, even with that admission, Jackson cleared the impairment test.

The officer, however, insisted he had probable cause to engage in a warrantless search of the car, based exclusively on his detection of the odor of “unburnt” marijuana. The officer told Jackson he was going to cite him for weed possession (not for the amount, but for how it was stored in the car). He also told the passenger he would make an arrest if Jackson did not “agree” to a “probable cause search.”

Jackson moved to the back of his car as ordered by the officer. Shortly before the patdown began, Jackson fled, dropping a handgun he was not legally allowed to possess.

Jackson challenged the search in his motion to suppress, arguing that with marijuana legalized, an officer’s assertion that he detected the odor of a (legal) drug means nothing in terms of probable cause for a warrantless search. The lower court rejected Jackson’s argument. The Seventh Circuit Appeals Court agrees with the trial court.

First, the court says marijuana, while legal in Illinois, is still illegal under federal law. And the suspicion a federal law has been broken (even if it can’t be enforced locally) is still enough to justify further questions and further exploration of a car.

Furthermore, state requirements for transporting legal marijuana in personal vehicles were not met by Jackson’s baggies of personal use weed.

[T]he [Illinois] Vehicle Code […] clearly states that when cannabis is transported in a private vehicle, the cannabis must be stored in a sealed, odor-proof container—in other words, the cannabis should be undetectable by smell by a police officer.

That’s a really weird stipulation. It basically tells residents that in order to legally transport drugs they must act like drug smugglers. And, while I haven’t seen a case raising this issue yet, one can be sure people have been criminally charged for following the law, because officers believe efforts to keep them from detecting drugs are, at the very least, reasonable suspicion to extend a stop or, better yet, probable cause to engage in a warrantless search.

And this is likely why that particular stipulation (which I haven’t seen in other places where weed is legal) was included in this law: it doesn’t remove one of the handiest excuses to perform a warrantless search — the “odor of marijuana.”

The smell of unburnt marijuana outside a sealed container independently supplied probable cause and thus supported the direction for Jackson to step out of the car for the search.

That’s pretty handy… at least for cops. It allows them to “detect” the odor of a legal substance in order to treat it as contraband. And they need to do little more than claim in court they smelled it — something that’s impossible to disprove. Illinois has managed to do the seemingly impossible: it has legalized a substance while allowing law enforcement officers to treat it as illegal. That’s quite the trick. And because of that, it’s still perfectly legal to pretend legal substances are contraband when it comes to traffic stops in Illinois.


Elon Sued His Critics, But Reporters Keep Exposing How He’s Monetizing Hate

By: Mike Masnick
June 10, 2024 at 21:44

There’s a type of marginally frustrating reporting where a reporter searches social media for [insert bad thing], finds some examples of said [bad thing], and writes a story about “This Platform Allows [Bad Thing]” followed by lots of public commentary about how the platforms don’t care/don’t do enough, etc. etc.

Let me let you in on a little secret: there are more [bad things] on the internet than you can reasonably think of. If you come up with a big enough list of [bad things] to block, people will just come up with more [bad things] you haven’t thought of. People are creative that way.

These stories are a mixed bag. They are accurate but not particularly enlightening. In our latest Ctrl-Alt-Speech, former Twitter Head of Trust & Safety Yoel Roth and I discussed these kinds of stories a little bit. He noted companies should do more internal red teaming, even if only to prevent such negative PR hits, rather than as an actual trust & safety strategy.

However, I’m reporting on the latest from NBC because it’s about ExTwitter allowing ads on hateful hashtags like #whitepower, #whitepride, and #unitethewhite.

Elon Musk’s social media app X has been placing advertisements in the search results for at least 20 hashtags used to promote racist and antisemitic extremism, including #whitepower, according to a review of the platform. 

NBC News found the advertisements by searching various hashtags used to promote racism and antisemitism, and by browsing X accounts that often post racial or religious hatred. The hashtags vary from obvious slogans such as #whitepride and #unitethewhite to more fringe and coded words such as #groyper (a movement of online white nationalists) and #kalergi (a debunked theory alleging a conspiracy to eliminate white people from Europe).

Elon could make a reasonable response: that while this looks bad, the reality is that it is simply impossible to figure out every possible awful hashtag and prevent ads from running against them.

It’s easy to see a few hashtags and say “gosh, that’s awful, how could that happen,” without realizing that millions of hashtags are used every day. Even if ExTwitter came up with a blocklist of “bad” hashtags, some would still get through and eventually some reporter would find it and report on it.
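To make the scale problem concrete, here’s a minimal, purely hypothetical sketch in Python of why exact-match blocklists always leak. The hashtag list and the function name are made up for illustration; nothing here reflects ExTwitter’s actual ad systems.

# Hypothetical illustration: a naive exact-match hashtag blocklist.
# The list below is an example, not anyone's real moderation config.
BLOCKED_HASHTAGS = {"#whitepower", "#whitepride", "#unitethewhite"}

def ads_allowed(hashtag: str) -> bool:
    """Return True if ads may run against this hashtag under a naive blocklist."""
    return hashtag.lower() not in BLOCKED_HASHTAGS

print(ads_allowed("#whitepower"))   # False: known-bad tags get caught
print(ads_allowed("#groyper"))      # True: coded terms slip through
print(ads_allowed("#kalergi"))      # True: ditto
print(ads_allowed("#wh1tepower"))   # True: a trivial misspelling defeats exact matching

The point isn’t that any platform uses something this crude; it’s that any list-based approach, however long the list, is playing whack-a-mole against an open-ended vocabulary.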

But neither Elon nor ExTwitter ever gives that response, as it would involve admitting the truth about how content moderation works. Musk and his supporters have long denied this truth as part of their willful misunderstanding of trust & safety work.

In this case, it’s still noteworthy, given that Elon has publicly promised that no “negative/hate tweets” will be monetized.

[Image: screenshot of Musk’s post promising that negative/hate tweets will not be monetized]

Even worse, when organizations like the Center for Countering Digital Hate and Media Matters for America pointed out similar failures to live up to that policy, Musk sued both of those organizations. This now means that whenever anyone else reports on such things, it’s worth calling it out, because the clear intent of Musk suing CCDH and MMfA was to scare off more reporting.

That said, suing small non-profits with limited resources is one thing, but taking on NBC (where ExTwitter’s “official” CEO used to work) is another. NBC had called out similar failings months ago and ExTwitter didn’t sue then. So, either Musk is learning, or someone at the company realizes NBC might be tougher to sue.

Some of this style of reporting is a bit silly and show-offy, but if Elon promises no such ads and sues those who point out it’s still happening, no one should be surprised that more reporters call this out and highlight Musk’s failures.


New Jersey Governor Signs Bill That Will Make It Much More Difficult To Obtain Public Records

By: Tim Cushing
June 10, 2024 at 19:55

Very few governments and government agencies value the transparency and accountability that robust open records laws create. It took an act of Congress to even establish a presumptive right of access to government records. And all across the United States, state governments are always trying to find some way to limit access without getting hit with an injunction from courts that seem far more respectful of this right than the governments and agencies obliged to conform with statutory requirements.

Not for nothing is it pretty much de rigueur to engage in litigation to obtain records from entities legally required to hand them over. New Jersey is the latest state to help itself to more opacity while placing more obligations on the public — you know, the people who pay their salaries. While there have been a few moves towards the positive side of this equation over the past decade, legislators and Governor Phil Murphy have decided the public only deserves to know what the government feels like telling it.

As Matt Friedman reports for Politico, the new normal in New Jersey is discouraging people from suing after their records requests have been blown off by state agencies. This isn’t anything state residents want. This is the governor protecting the government from the people it’s supposed to be serving.

The problematic law doesn’t dial back any obligations to respond to requests. Instead, it’s a bit more nefarious. It assumes government entities will fail to comply with their statutory obligations, but passes that cost on to the people directly by making it far more expensive to force records out of agencies’ hands.

Here’s the impetus:

The push for the bill has largely come from lobbyists for county and local governments, who say records custodians are burdened by commercial and unreasonable requests by a small number of people.

And here’s the outcome:

Most controversially, the legislation would end the current practice of mandatory “fee shifting,” in which governments pay the “reasonable” legal costs for any requester who successfully challenges a records denial in court. It would instead leave it up to a judge, who would only be required to award the legal costs to the plaintiff if they determine the denial was made in “bad faith,” “unreasonably,” or the government agency “knowingly or willfully” violated the law.

That places the burden of litigation almost entirely on records requesters. If they decide to initiate litigation to obtain what the law says the state must turn over, they’re now faced with the possibility of not being able to recover their litigation costs even if a judge rules a government agency must turn over the requested records. All the government needs to demonstrate (and a judge needs to trust its narrative) is that any failure to provide records wasn’t a “knowing” violation of the law. This is the government seeing all the litigation non-compliant agencies generate and somehow arriving at the conclusion that it just must be too easy to sue the government for refusing to uphold its end of the public records bargain.

And that’s not all. The law also places a presumptive fee burden on requesters, requiring them to demonstrate (to agencies already unwilling to comply with requests) that the requested fees are “unreasonable.” More specificity is also demanded of requesters, which is insane because requesters in some cases can’t possibly know the specifics of the records they’re requesting and will likely only have those specifics if the government agency actually hands over the records.

Bizarrely, it also bars requesters from sharing any photo or video received via a public records request if it contains “any indecent or graphic images of the subject’s intimate part” without getting direct permission from the person captured in the recording or photo. And that makes it pretty easy for the government to bury photos and recordings it doesn’t want to have shared by refusing to redact or blur any footage/photos containing an “intimate part.”

That means things like a violent arrest of a person suffering a mental health crisis could be buried just because (as happens frequently in cases like these) the person being violently subdued by cops is underclothed or naked. If nothing else, it passes on the expense of redacting footage to those receiving the recordings, rather than placing that obligation on those releasing records that might violate the stipulations of the revised public records law.

The gist of the law — and definitely the gist of the governor’s statement [PDF] in support of his own signature on said law — is that the government is the real victim here. It’s being steadily crushed under the litigious heel of requesters who sue when the government violates the law or refuses to hand over records it’s obligated to hand over, or who just make what the government considers to be too many records requests.

After a thorough examination of the provisions of the bill, I am persuaded that the changes, viewed comprehensively, are relatively modest.

Hmmm. Except that no one but government entities seeking greater opacity (or at least angling for a lower obligation for responses) is in favor of this law. Anyone actually engaged in transparency and accountability efforts doesn’t see this as a “modest” revision of the state’s public records law, much less as a win for the general public. This is the government doing what it does with its greatest enthusiasm: protecting itself from the people it’s supposed to be serving.


Daily Deal: The Raspberry Pi And Arduino Bootcamp Bundle

By: Gretchen Heckmann
June 10, 2024 at 19:51

The Raspberry Pi and Arduino Bootcamp Bundle has 5 courses to help you dive into the world of hands-on programming. Courses cover Arduino, Raspberry Pi, and ROS2. It’s on sale for $30.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.


Judge Experiments With ChatGPT, And It’s Not As Crazy As It Sounds

By: Mike Masnick
June 10, 2024 at 18:32

Would you freak out if you found out a judge was asking ChatGPT a question to help decide a case? Would you think that it was absurd and a problem? Well, one appeals court judge felt the same way… until he started exploring the issue in one of the most thoughtful explorations of LLMs I’ve seen (while also being one of the most amusing concurrences I’ve seen).

I recognize that the use of generative AI tools in lots of places raises a lot of controversy, though I think the biggest complaint comes from the ridiculously bad and poorly thought-out uses of the technology (usually involving over-relying on the tech when it is not at all reliable).

Back in April, I wrote about how I use LLMs at Techdirt, not to replace anyone or to do any writing, but as a brainstorming tool or a soundboard for ideas. I continue to find it useful in that manner, mainly as an additional tool (beyond my existing editors) to push me to really think through the arguments I’m making and how I’m making them.

So I found it somewhat interesting to see Judge Kevin Newsom, of the 11th Circuit, recently issue a concurrence in a case, solely for the point of explaining how he used generative AI tools in thinking about the case, and how courts might want to think (carefully!) about using the tech in the future.

The case itself isn’t all that interesting. It’s a dispute over whether an insurance provider is required under its agreement to cover a trampoline injury case after the landscaper who installed the trampoline was sued. The lower court and the appeals court both say that the insurance agreement doesn’t cover this particular scenario, and therefore, the insurance company has no duty to defend the landscaper.

But Newsom’s concurrence is about his use of generative AI, which he openly admits may be controversial, and begs for people to consider his entire argument:

I concur in the Court’s judgment and join its opinion in full. I write separately (and I’ll confess this is a little unusual) simply to pull back the curtain on the process by which I thought through one of the issues in this case—and using my own experience here as backdrop, to make a modest proposal regarding courts’ interpretations of the words and phrases used in legal instruments.

Here’s the proposal, which I suspect many will reflexively condemn as heresy, but which I promise to unpack if given the chance: Those, like me, who believe that “ordinary meaning” is the foundational rule for the evaluation of legal texts should consider—consider—whether and how AI-powered large language models like OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude might—might—inform the interpretive analysis. There, having thought the unthinkable, I’ve said the unsayable.

Now let me explain myself.

As Judge Newsom notes, a part of the case involved determining what the common understanding of the term “landscaping” meant, as it was not clearly defined in the contract. He also says that, due to a quirk of Alabama law, the final disposition of the case didn’t actually depend on this definitional issue, in part because of the landscaper’s insurance application, where he denied doing any work on recreational equipment.

But that allows Newsom the chance to explore how AI might be useful here, in a case where it wasn’t necessary. And that allows him to be somewhat more informal than you might expect from a judge (though, of course, we all have our favorite examples of judges letting their hair down a bit in opinions).

Importantly, though, that off-ramp wasn’t always obviously available to us—or at least as I saw things, to me. Accordingly, I spent hours and hours (and hours) laboring over the question whether Snell’s trampoline-installation project qualified as “landscaping” as that term is ordinarily understood. And it was midway along that journey that I had the disconcerting thought that underlies this separate writing: Is it absurd to think that ChatGPT might be able to shed some light on what the term “landscaping” means? Initially, I answered my own question in the affirmative: Yes, Kevin, that is positively absurd. But the longer and more deeply I considered it, the less absurd it seemed.

I kind of appreciate the thoroughness with which he admits that there are good reasons to think he’s absurd here — he even thought it himself! — before explaining how he changed his mind.

He admits that he did “the usual” thing when courts try to determine the ordinary meaning of a word, which often involves… looking up what the dictionary or other such reference materials say. So he did a run-through of dictionaries and looked at their definitions of “landscaping.” But he noted that it didn’t really help all that much in determining if the trampoline was landscaping.

Then, he also looked at the pictures associated with the case:

After languishing in definitional purgatory for a while, I decided to look at the case from a different perspective—and I do mean look. The record contains a series of photographs of Snell’s trampoline-related project. Here’s one, which shows his prep work—in particular, the empty sand pit and the below-ground retaining wall that reinforced its borders:

[Photo from the record: the empty sand pit and the below-ground retaining wall reinforcing its borders]

And another, which depicts the finished product, including both the polypropylene mat (the fun part) and the decorative wooden “cap”:

[Photo from the record: the finished in-ground trampoline, with the polypropylene mat and the decorative wooden cap]

I’m not particularly proud of it, but I’ll confess that the photos affected the way I thought about the case. Nothing in them really struck me as particularly “landscaping”-y. The problem, of course, was that I couldn’t articulate why. And visceral, gut-instinct decisionmaking has always given me the willies—I definitely didn’t want to be that guy. So in a way, I felt like I was back to square one.

I swear, this is the “bloggiest” Appeals Court concurrence I’ve ever read. And it only gets more bloggy:

And that’s when things got weird. Perhaps in a fit of frustration, and most definitely on what can only be described as a lark, I said to one of my clerks, “I wonder what ChatGPT thinks about all this.” So he ran a query: “What is the ordinary meaning of ‘landscaping’?” Here’s what ChatGPT said in response:

“Landscaping” refers to the process of altering the visible features of an area of land, typically a yard, garden or outdoor space, for aesthetic or practical purposes. This can include activities such as planting trees, shrubs, flowers, or grass, as well as installing paths, fences, water features, and other elements to enhance the appearance and functionality of the outdoor space.

Interesting, for two reasons. First, ChatGPT’s explanation seemed more sensible than I had thought it might—and definitely less nutty than I had feared. Second, it squared with my own impression—informed by my own experience writing, reading, speaking, and listening to American English in the real world—that ordinary people might well use the word “landscaping” (1) to include more than just botanical and other natural improvements and (2) to cover both aesthetic and functional objectives. In fact, several of the examples that ChatGPT flagged—“paths, fences, [and] water features”—jibed with the sorts of things that had sprung to mind when I first started thinking about the case.

But, don’t worry. He didn’t just go with that because it confirmed his priors. He (rightly) recognized that’s not how any of this should work. Again, this reads like a blog post, not a judicial concurrence, but that’s what makes it fun.

Suffice it to say, my interest was piqued. But I definitely didn’t want to fall into the trap of embracing ChatGPT’s definition just because it aligned with my priors. (Bad.) So, in what might have been a mistake—more on that later—we went ahead and asked it the ultimate question: “Is installing an in-ground trampoline ‘landscaping’”? ChatGPT responded as follows:

Yes, installing an in-ground trampoline can be considered a part of landscaping. Landscaping involves altering the visible features of an outdoor area for aesthetic or practical purposes, and adding an in-ground trampoline would modify the appearance and function of the space. It’s a deliberate change to the outdoor environment, often aimed at enhancing the overall landscape and usability of the area.

For good measure, I posed the same questions to Google’s Bard (since replaced by Gemini). The precise details aren’t particularly important, but the upshot is that both models’ answers indicated that the trampoline-related work Snell had performed—the excavation of the pit, the construction of the retaining wall, the installation of the mat, and the addition of the decorative wooden cap—just might be landscaping.

Apparently, it was around this point that he realized the aforementioned “off-ramp” created by Alabama law was available, such that this didn’t matter. But he was intrigued that his experiments here had moved him out of the “that’s absurd” category and into “huh, this might be useful… somehow?” territory.

So, he then uses more of the concurrence to explore the pros and cons. I won’t repost all of it, but the strongest argument in favor of considering this is that if the goal is to understand the “common” way in which a word or phrase is used, LLMs trained on the grand corpus of human knowledge might actually provide a better take on the common usage and understanding of such words and phrases.

The ordinary-meaning rule’s foundation in the common speech of common people matters here because LLMs are quite literally “taught” using data that aim to reflect and capture how individuals use language in their everyday lives. Specifically, the models train on a mind-bogglingly enormous amount of raw data taken from the internet—GPT-3.5 Turbo, for example, trained on between 400 and 500 billion words—and at least as I understand LLM design, those data run the gamut from the highest-minded to the lowest, from Hemmingway novels and Ph.D. dissertations to gossip rags and comment threads. Because they cast their nets so widely, LLMs can provide useful statistical predictions about how, in the main, ordinary people ordinarily use words and phrases in ordinary life. So, for instance, and as relevant here, LLMs can be expected to offer meaningful insight into the ordinary meaning of the term “landscaping” because the internet data on which they train contain so many uses of that term, from so many different sources—e.g., professional webpages, DIY sites, news stories, advertisements, government records, blog posts, and general online chatter about the topic.

He’s quick to admit that there are potential problems with this. There are questions about what LLMs are trained on and how representative that training data might be. There might also be other questions about usage changes over time, for example. There are plenty of reasons why these results shouldn’t be automatically relied on.

But as I noted in my own explanation of how I’m using LLMs, the key point is to use them as a way to help you think through issues, not to rely on them as some sort of godlike answer machine. And Judge Newsom seems to recognize that. At the very least, it’s possible that an LLM might give you better (or, at the very least, different) insight into “common usage” of a word or phrase than a dictionary editor.

So far as I can tell, researchers powering the AI revolution have created, and are continuing to develop, increasingly sophisticated ways to convert language (and I’m not making this up) into math that computers can “understand.”… The combination of the massive datasets used for training and this cutting-edge “mathematization” of language enables LLMs to absorb and assess the use of terminology in context and empowers them to detect language patterns at a granular level. So, for instance, modern LLMs can easily discern the difference—and distinguish—between the flying-mammal “bat” that uses echolocation and may or may not be living in your attic, on the one hand, and the wooden “bat” that Shohei Otani uses to hit dingers, on the other. See id. And that, as I understand it, is just the tip of the iceberg. LLM predictions about how we use words and phrases have gotten so sophisticated that they can (for better or worse) produce full-blown conversations, write essays and computer code, draft emails to co-workers, etc. And as anyone who has used them can attest, modern LLMs’ results are often sensible—so sensible, in fact, that they can border on the creepy. Now let’s be clear, LLMs aren’t perfect—and again, we’ll discuss their shortcomings in due course. But let’s be equally clear about what they are: high-octane language-prediction machines capable of probabilistically mapping, among other things, how ordinary people use words and phrases in context.

And, he points out, dictionaries may be very good at proffering definitions, but they are still influenced by the team that puts together that dictionary:

First, although we tend to take dictionaries for granted, as if delivered by a prophet, the precise details of their construction aren’t always self-evident. Who exactly compiles them, and by what criteria do the compilers choose and order the definitions within any given entry? To be sure, we’re not totally in the dark; the online version of Merriam-Webster’s, for instance, provides a useful primer explaining “[h]ow . . . a word get[s] into” that dictionary. It describes a process by which human editors spend a couple of hours a day “reading a cross section of published material” and looking for new words, usages, and spellings, which they then mark for inclusion (along with surrounding context) in a “searchable text database” that totals “more than 70 million words drawn from a great variety of sources”—followed, as I understand things, by a step in which a “definer” consults the available evidence and exercises his or her judgment to “decide[] . . . the best course of action by reading through the citations and using the evidence in them to adjust entries or create new ones.”

Such explainers aside, Justice Scalia and Bryan Garner famously warned against “an uncritical approach to dictionaries.” Antonin Scalia & Bryan A. Garner, A Note on the Use of Dictionaries, 16 Green Bag 2d 419, 420 (2013). They highlighted as risks, for instance, that a volume could “have been hastily put together by two editors on short notice, and very much on the cheap,” and that without “consult[ing] the prefatory material” one might not be able “to understand the principles on which the dictionary [was] assembled” or the “ordering of [the] senses” of a particular term.

Judge Newsom wants you to know that he is not trying to slag the dictionaries here (nor to overly praise LLMs). He’s just pointing out some realities about both:

To be clear, I’m neither a nihilist nor a conspiracy theorist, but I do think that we textualists need to acknowledge (and guard against the fact) that dictionary definitions present a few known unknowns…. And while I certainly appreciate that we also lack perfect knowledge about the training data used by cutting-edge LLMs, many of which are proprietary in nature, see supra notes 6 & 8, I think it’s fair to say that we do know both (1) what LLMs are learning from—namely, tons and tons of internet data—and (2) one of the things that makes LLMs so useful—namely, their ability to accurately predict how normal people use language in their everyday lives.

[….]

Anyway, I don’t mean to paint either too grim a picture of our current, dictionary-centric practice—my own opinions are chock full of dictionary definitions, I hope to good effect—or too rosy a picture of the LLMs’ potentiality. My point is simply that I don’t think using LLMs entails any more opacity or involves any more discretion than is already inherent in interpretive practices that we currently take for granted—and in fact, that on both scores it might actually involve less.

And, of course, he has another long section on all the reasons to remain worried about LLMs in this context. He’s not a blind optimist, and he’s not one of those lawyers we’ve written about too often who just ChatGPT’d their way to useful and totally fake citations. He knows they hallucinate. But, he points out, if “hallucinating” is misrepresenting things, lawyers already do that themselves:

LLMs can “hallucinate.” First, the elephant in the room: What about LLMs’ now-infamous “hallucinations”? Put simply, an LLM “hallucinates” when, in response to a user’s query, it generates facts that, well, just aren’t true—or at least not quite true. See, e.g., Arbel & Hoffman, supra, at 48–50. Remember the lawyer who got caught using ChatGPT to draft a brief when it ad-libbed case citations—which is to say cited precedents that didn’t exist? See, e.g., Benjamin Weiser, Here’s What Happens When Your Lawyer Uses ChatGPT, N.Y. Times (May 29, 2023). To me, this is among the most serious objections to using LLMs in the search for ordinary meaning. Even so, I don’t think it’s a conversation-stopper. For one thing, LLM technology is improving at breakneck speed, and there’s every reason to believe that hallucinations will become fewer and farther between. Moreover, hallucinations would seem to be most worrisome when asking a specific question that has a specific answer—less so, it seems to me, when more generally seeking the “ordinary meaning” of some word or phrase. Finally, let’s shoot straight: Flesh-and-blood lawyers hallucinate too. Sometimes, their hallucinations are good-faith mistakes. But all too often, I’m afraid, they’re quite intentional—in their zeal, attorneys sometimes shade facts, finesse (and even omit altogether) adverse authorities, etc. So at worst, the “hallucination” problem counsels against blind-faith reliance on LLM outputs—in exactly the same way that no conscientious judge would blind-faith rely on a lawyer’s representations.

He also goes deep on some other downsides, including some we already discussed regarding what data the LLMs are trained on. If it’s only online speech, does that leave out speech that is common offline? Does it leave out communities who have less access to the internet? Basically, it’s part of the well-known “alignment problem” in generative AI: some level of bias is simply unavoidable. But that doesn’t mean you just shrug and accept things unquestioned.

He even considers that lawyers might try to shop around for different AIs that agree with them the most or, worse, try to “poison” an LLM to get it to agree with a preferred understanding. But, he notes, that seems unlikely to be all that effective.

There’s also this fun bit about the dystopian threat of “robo lawyers,” which I especially appreciate given that we once created a game, called HAL of Justice, for a legal academic conference that involved turning everyone involved into futuristic AI judges handling court cases.

Would the consideration of LLM outputs in interpreting legal texts inevitably put us on some dystopian path toward “robo judges” algorithmically resolving human disputes? I don’t think so. As Chief Justice Roberts recently observed, the law will always require “gray area[]” decisionmaking that entails the “application of human judgment.” Chief Justice John G. Roberts, Jr., 2023 Year-End Report on the Federal Judiciary 6 (Dec. 31, 2023). And I hope it’s clear by this point that I am not—not, not, not—suggesting that any judge should ever query an LLM concerning the ordinary meaning of some word (say, “landscaping”) and then mechanistically apply it to her facts and render judgment. My only proposal—and, again, I think it’s a pretty modest one—is that we consider whether LLMs might provide additional datapoints to be used alongside dictionaries, canons, and syntactical context in the assessment of terms’ ordinary meaning. That’s all; that’s it.

And with that, he closes with an interesting provocation. If you’ve come around to his idea that we should be considering this form of algorithmically-assisted brainstorming, what are the key things we should think about? He highlights that prompt construction will matter a lot. How do you create the “right” prompt? Should you try multiple prompts? Should you use multiple LLMs? Should there be some indication of how “confident” an LLM is in any particular answer? And, as noted earlier, how do you handle words whose meanings change over time, if the standard should be the meaning at the time the relevant legal text was written?
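For what it’s worth, here’s a rough sketch in Python of what the “multiple prompts” suggestion might look like in practice, using the OpenAI client library. The prompt wordings and the model name are my own assumptions for illustration; this is not anything from the concurrence itself, and the same questions could just as easily be put to Gemini or Claude, as Newsom did with Bard.

# Illustrative sketch only: asking an LLM several phrasings of the same
# "ordinary meaning" question, along the lines Judge Newsom describes.
# Requires the `openai` package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

PROMPTS = [
    "What is the ordinary meaning of 'landscaping'?",
    "How do ordinary people typically use the word 'landscaping'?",
    "Would installing an in-ground trampoline normally be described as 'landscaping'?",
]

for prompt in PROMPTS:
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; any chat-capable model would do
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"Q: {prompt}")
    print(f"A: {response.choices[0].message.content}")
    print()

Comparing the answers across phrasings (and across models) is exactly the sort of cross-checking Newsom flags as an open question: more datapoints to sit alongside dictionaries and canons, not a machine that renders judgment.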

And he closes in the most blog-like fashion imaginable.

Just my two cents.

I find this whole discussion fascinating. As I highlighted in my own post about how we use LLMs for brainstorming, I recognize that some people hate the idea outright, while others are too utopian about “AI in everything” without thinking through the potential downsides. It’s nice for some to recognize that there is a reasonable middle path: that they have utility in certain, specific scenarios, if used properly, and not relied on as a final arbiter of anything.

Also, it’s just kind of fun to read through this quite thoughtful exploration of the topic and how Judge Newsom is considering these issues (fwiw, Newsom has been the author of opinions we’ve agreed with strongly, as well as ones we’ve disagreed with strongly, so it’s not as though I feel one way or the other about this based on his jurisprudence — it’s just a really interesting discussion).

I also appreciate that, unlike so many conversations on tech like generative AI these days, he’s not taking the extremist approach of it being “all good” or “all bad,” and is actually willing to explore the tradeoffs, nuances, and open questions related to the issues. It would be nice if the world saw more of that, just in general.


Former Politico Owner Launches New Journalism Finishing School To Try And Fix All The ‘Wokeness’

By: Karl Bode
June 10, 2024 at 14:25

I’ve noted more than a few times that the primary problem with U.S. journalism is the fact that most major media outlets are owned by out of touch billionaire brunchlords who genuinely don’t understand the modern media environment, can’t see their own gender, race, or class biases, and often have absolutely no earthly fucking idea what they’re actually doing.

You can see this very clearly at places like Politico, where political coverage often takes a feckless “both sides” approach to factual reality. This “view from nowhere,” as NYU journalism professor Jay Rosen dubbed it, presents everything from fascist insurrection to climate change as a perfectly symmetrical debate between two valid sides.

It’s a timidity and aversion to the actual truth, born from a fear of upsetting sources, advertisers, event sponsors, and those in power. And, because this kind of journalism is incapable of clearly calling out fascism and bigotry, it’s being broadly abused and exploited by authoritarians.

Mathias Döpfner, CEO of current Politico owner Axel Springer, has repeatedly demonstrated that he has no idea this broken media paradigm even exists, much less any inkling of how to fix it. The same appears to go for former Politico owner Robert Allbritton, who recently started a new journalism finishing school profiled last week in the Washington Post.

The underlying organization is called the Allbritton Journalism Institute. AJI in turn runs what they’re calling a “teaching hospital for journalism” dubbed NOTUS, or News of the United States. Journalists I know tell me NOTUS is doing some good work, though WAPO kind of buries a claim that the nonprofit may have been launched as a way for Allbritton to get back into media without violating his noncompete.

But the Washington Post report on the project raises a few red flags, beginning with a claim by Allbritton that he created the project because there apparently just aren’t any good journalists out there to hire:

“Allbritton believes there’s a dearth of good reporter candidates out there, and a need for the real-world training once provided by places like Politico, which he sold in 2021.

“Talking to a ton of senior-to-mid-level execs in media, the constant refrain is: ‘I can’t find great people,’” he said in an interview. “It’s really hard to hire good people.”

There’s actually an over-abundance of quality journalists out there thanks to record layoffs caused, in large part, by rampant misspending and incompetence among the brunchlord set (see the Vice and The Messenger shitshows as just the latest examples). I’ve lost track of the number of phenomenally talented editors and journalists I know who have been shitcanned in recent years.

Later in the article, Allbritton starts to explain in detail precisely what he thinks is wrong with most modern journalists, and while he couches his language a bit, it starts to become pretty clear that his primary concern is all the damn wokeness:

“There was definitely a kind of a woke kind of shift that took place within newsrooms,” he said. “I wouldn’t say it’s radical. It’s not. But there’s some social-warrior believers in there. I’m not sure they use their voices, but definitely believers. And there’s nothing wrong with that. It’s good to have an opinion. But it does make it a little harder to get to the truth if you’re coming in there with either a liberal bias or a conservative bias.”

There’s some nice tap dancing there, but the use of “woke” and “social-warrior believers” (SJWs) as pejoratives indicates that the real thing Allbritton doesn’t like about modern journalism is all the damn class, gender, and race awareness. Allbritton claims he wants a journalism finishing school that reflects “a variety of political ideologies,” and yet here’s the kind of folks WAPO says are getting fellowships:

“This year’s cohort of fellows is diverse and includes students from backgrounds that are underrepresented in the news industry, including a fellow who attended a Christian college and one who served as an artillery officer in the U.S. Army for seven years.”

Yes, Christianity and the military, two woefully underrepresented segments of American society.

So, look, any money being doled out to journalism is good. And I’m sure the NOTUS fellows are doing some decent work. I hope they’re being paid a living wage with high-end luxuries like health care and time off, and I also hope that this entire effort isn’t shuttered in six months after it’s revealed that Allbritton blew the entire budget on outsized compensation and lavish brand parties a la Sports Illustrated.

Here’s the thing that annoys me.

There’s an ocean of problems with journalism, but the idea that there’s just too damn much woke progressivism is utter delusion. U.S. journalism generally tilts center right on the political spectrum. It’s generally corporatist and pro-business to a comical degree. Often it’s too feckless to meaningfully critique wealth and power. Routinely it traffics in engagement-chasing distraction, not knowledge.

The tech press, in particular, operates basically as an extension of tech company marketing departments. There certainly is occasionally good journalism (ProPublica’s exploration of the Supreme Court, Reuters’ investigations into Tesla), but by and large the folks in charge of major U.S. media institutions are building a badly automated ad-engagement ouroboros for which the truth is a pretty distant fucking afterthought.

This all directly reflects the bias and interests of an out of touch billionaire extraction class that’s either blind to its own biases, or desperate to pretend they don’t exist. In the wake of Black Lives Matter and COVID there were some fleeting recommendations to the Ivy League establishment media that we could perhaps take a slightly more well-rounded, inclusive approach to journalism.

In response, the trust fund lords in charge of these establishment outlets lost their fucking minds, started crying incessantly about young journalists “needing safe spaces,” and decided to double down on all their worst impulses, having learned less than nothing along the way. Reading Allbritton’s lamentations about the wokes you don’t really get the sense he learned much of anything from the voyage either.

There’s a reason U.S. journalism is falling apart. There’s a reason journalists are increasingly crafting their own newsletters or building smaller, journalist-owned news outlets. And (unless you’re a Matt Taibbi type looking to exploit the modern right’s bottomless victimization complex) it has very little to do with diabolical wokeism, or the handful of progressive voices begging for journalism that doesn’t suck.


Funniest/Most Insightful Comments Of The Week At Techdirt

By: Leigh Beadon
June 9, 2024 at 21:00

This week, our first place winner on the insightful side is Stephen T. Stone with a comment about Trump threatening ProPublica, and our point that he remains “exhibit A” for why anti-SLAPP laws are needed:

The funny thing is, this statement could apply to a lot of situations: SLAPPs, campaign finance fraud, attempting to overthrow the government…

In second place, it’s MrWilson with some thoughts on banning kids from social media:

Banning kids from social media is like home schooling kids. Sure, it’ll limit their access to some of the genuinely awful stuff out there, but it will also prevent them from seeing all the good stuff out there, developing positive social experiences and relationships outside of their bubble, and it will mean their parents and siblings and neighbors are their only social outlet or influence, which can be pretty shitty if your parents are authoritarian fundamentalists who think children are somewhere between prisoners or slaves.

Once again, the bad shit in social media is due to the nature of people, not to the nature of social media. And you can’t legislate people to not be shitty people.

For editor’s choice on the insightful side, we start out with one more comment about Trump’s threats, this time from an anonymous commenter reminding everyone that we knew it was coming:

“One of the things I’m going to do if I win, and I hope we do and we’re certainly leading. I’m going to open up our libel laws so when they write purposely negative and horrible and false articles, we can sue them and win lots of money. We’re going to open up those libel laws.”

Even back in 2016 he was broadcasting his plans to not let anyone tell the truth about him.

Next, it’s PaulT with a comment about the EU court saying there’s no right to online anonymity and equating things like downloading with serious crimes:

Sadly, we’ve been arguing this for a long time. A copied file does not necessarily mean a lost sale, and in the spirit of mixtapes and movies recorded from TV could actually mean increased sales from some people.

The best option is giving people access to what they wish to access, at a reasonable price level. But, between confusing licencing agreements, regional restrictions and regular price hikes, I’m hearing more and more people just saying “f**k it, I’m pirating again”. When Netflix first came to Europe and when Spotify started, my selling point for people was saying that it was easier than piracy. I can definitely say that removing peoples’ privacy will not cause them to do anything other than work out how to use security features, and that peoples’ lives will be ruined unnecessarily by concentrating on people who copy files vs. people who refuse to give people what they will pay for.

“Piracy” is a good excuse, but it’s usually a cover for other things. That was true in the days of mixtapes, VHS tape trading and floppy disk trades, and it’s true now.

Over on the funny side, our first place winner is an anonymous reply to Stephen Stone’s first place insightful comment:

Trump’s next job

Interviewer: Are there any accomplishments from your last job that you’re particularly proud of?

Candidate: I’m responsible for ten new rules in their employee handbook.

Interviewer: That’s great! You wrote them?

Candidate: That’s not what I said.

In second place, it’s Section_230 with a response to a commenter spouting something weird about Nazis:

Speed running Godwin’s law are we?

For editor’s choice on the funny side, we’ve got a pair of comments about Trump campaigning on TikTok. First, it’s Pixelation with a change of heart on the reasons for banning the app:

With Trump using it, TikTok is definitely a national security threat.

Finally, it’s an anonymous prognostication:

Some Gen Alpha kid is gonna give him a stroke and we’re gonna owe them for the rest of our lives.

That’s all for this week, folks!

This Week In Techdirt History: June 2nd – 8th

June 8, 2024 at 21:57

Five Years Ago

This week in 2019, the FCC was remaining in denial about the lack of broadband competition, while we asked why all the antitrust attention was focused on Big Tech but not Big Telecom. Officials in Germany were pushing for encryption backdoors while Facebook was considering going ahead and undermining its own encryption regardless, and the EU Court of Justice was suggesting that maybe the entire internet should be censored and filtered. The targets of Devin Nunes’s cow lawsuits were fighting back, and some drama at YouTube once again demonstrated the impossibility of content moderation.

Ten Years Ago

This week in 2014, a failed patent troll was hit with legal fees and the Supreme Court issued two more smackdowns of the CAFC, while Malibu Media was trying to get more ammo against its targets and the EU Court of Justice ruled that just viewing stuff online isn’t copyright infringement. The EFF argued in court that the NSA knowingly and illegally destroyed evidence, while the UK government was trying (and failing) to hide details of GCHQ fiber line taps, while courts in both countries were holding secret trials related to terrorism. Also, we hit the one year anniversary of the very first Snowden revelation, and noted that while much had changed since then, it wasn’t enough.

Fifteen Years Ago

This week in 2009, the Supreme Court agreed to take on the Bilski case about whether software and business models could be patented. The RIAA’s voluntary program for ISPs was not exactly a hit, ASCAP was looking to get some of that sweet video game money, and JD Salinger infamously sued the author of an unauthorized sequel to Catcher in the Rye. Apple proved the EFF’s point about arbitrary app store rejections by rejecting the EFF’s RSS reader, and Creative Commons was still facing some problems due to the blurry line between commercial and non-commercial. Also, Barbra Streisand decided to publish an entire book about the Malibu home that she once rather famously wanted to keep secret.

Oral-B Takes ‘Alexa’ Feature Away From Its Toothbrush Base 4 Years After Selling Them

June 8, 2024 at 04:39

Here we are again, with yet another in our series of posts describing how in these here modern times you simply don’t actually own the things you’ve bought. This sort of thing takes many forms, of course. Sometimes the digital media you “bought” gets disappeared by a platform after a licensing deal runs out. Sometimes the hardware you bought turns into a relatively expensive brick because the company you bought it from decides to stop supporting those devices entirely. And, as Sony made famous with its PlayStation 3, sometimes a company simply decides, on a whim, to disappear a feature that was a selling point of the product.

Well, that last and oldest example appears to be the most analogous to what Oral-B just did to customers of some of its toothbrushes, which came with a charging base that you could connect to an Amazon Alexa.

That’s what’s happening to some who bought into Oral-B toothbrushes with Amazon Alexa built in. Oral-B released the Guide for $230 in August 2020 but bricked the ability to set up or reconfigure Alexa on the product this February. As of this writing, the Guide is still available through a third-party Amazon seller.

The Guide toothbrush’s charging base was able to connect to the Internet and work like an Alexa speaker that you could speak to and from which Alexa could respond. Owners could “ask to play music, hear the news, check weather, control smart home devices, and even order more brush heads by saying, ‘Alexa, order Oral-B brush head replacements,’” per Procter & Gamble’s 2020 announcement.

And then, in February of this year, Oral-B simply took that feature away. Where there once was an app you could use to connect the Guide base to your Alexa, that feature is no longer available. For those who had it previously set up with their Alexa, the base will work right up until the point that it drops its internet connection, after which it will no longer connect.

And if you thought refunds would be a thing here, it appears that’s not the case.

That’s a problem for Patrick Hubley, who learned that Oral-B discontinued Connect when his base inadvertently disconnected from the Wi-Fi and he tried using Connect to fix it. He told Ars Technica that when he tries using the Alexa wake word now, the speaker says, “I’m having trouble connecting to the Internet. For help, go to your device’s companion app.”

Hubley attempted but failed to get a refund or replacement brush through Oral-B’s support avenues. He says he will no longer buy Oral-B or Alexa products.

“I only purchased this toothbrush from Amazon because that was the only way to get the water-resistant Alexa speaker that I wanted for the bathroom. … I’m ready to be done with Alexa and Oral-B both.”

This is all starting to sound like the Spotify Car Thing story I linked to in the opener. If history is a guide, perhaps a good bout of public outrage from buyers of the Guide will spur Oral-B to offer refunds for a product it retroactively decided to make less useful after purchase.

But either way, there really should be some sort of consumer right protecting against purchased products suddenly losing features long after the sale. In the meantime, I’ll just have to go back to singing in the shower, I suppose.
