
Why Libertarians Hate Kamala Harris' Economic Platform

Kamala Harris and Katherine Mangu-Ward | Lex Villena; Josh Brown/ZUMAPRESS/Newscom

In this week's The Reason Roundtable, editors Peter Suderman, Katherine Mangu-Ward, and Nick Gillespie welcome special guest Ben Dreyfuss onto the pod ahead of this week's Democratic National Convention in Chicago to talk about Kamala Harris' truly terrible economic policy proposals.

02:48—Dreyfuss' YIMBY conversion thanks to Reason

13:20—Harris drops some lousy economic policy ideas.

32:37—The DNC begins.

44:25—Weekly Listener Question

53:33—Tariffs are timeless.

1:03:32—This week's cultural recommendations

Mentioned in this podcast:

"Kamala Harris' Dishonest and Stupid Price Control Proposal," by J.D. Tuccille

"DNC Readies for Protesters," by Liz Wolfe

"Harris' Economic Illiteracy," by Liz Wolfe

"Harris Joins the FTC's Food Fight Against Kroger-Albertsons Merger," by C. Jarrett Dieterle

"The times demand serious economic ideas. Harris supplies gimmicks." by the Washington Post editorial board

The price tag of @KamalaHarris's big, bold economic plan? According to penny pinchers at @BudgetHawks, a mere $1.7 to $2 trillion over the next decade. Given that gross debt is $35 trillion, maybe it's time to tap the brakes a bit? https://t.co/qA5wFJleLw pic.twitter.com/q80MwxoRD9

— Nick Gillespie (@nickgillespie) August 16, 2024

"When your opponent calls you 'communist,' maybe don't propose price controls?" Catherine Rampell

"How did Doug Emhoff hear Biden was out? After taking a SoulCycle class in WeHo. Without his phone," by Kevin Rector

"Database Nation: The Upside of 'Zero Privacy,'" by Declan Mccullagh

"Alien: Romulus Is a Slick, Empty Franchise Pastiche," by Peter Suderman

The Calm Down Substack by Ben Dreyfuss

https://x.com/nickgillespie/status/1824430467191312525

"Sing for Change Obama"


Send your questions to roundtable@reason.com. Be sure to include your social media handle and the correct pronunciation of your name.

Today's sponsors:

  • Lumen is the world's first handheld metabolic coach. It's a device that measures your metabolism through your breath. On the app, it lets you know if you're burning fat or carbs, and it gives you tailored guidance to improve your nutrition, workouts, sleep, and even stress management. All you have to do is breathe into your Lumen first thing in the morning, and you'll know what's going on with your metabolism, whether you're burning mostly fats or carbs. Then, Lumen gives you a personalized nutrition plan for that day based on your measurements. You can also breathe into it before and after workouts and meals, so you know exactly what's going on in your body in real time, and Lumen will give you tips to keep you on top of your health game. Your metabolism is your body's engine—it's how your body turns the food you eat into fuel that keeps you going. Because your metabolism is at the center of everything your body does, optimal metabolic health translates to a bunch of benefits, including easier weight management, improved energy levels, better fitness results, better sleep, etc. Lumen gives you recommendations to improve your metabolic health. It can also track your cycle as well as the onset of menopause, and adjust your recommendations to keep your metabolism healthy through hormonal shifts, so you can keep up your energy and stave off cravings. So, if you want to take the next step in improving your health, go to lumen.me/ROUNDTABLE to get 15 percent off your Lumen.
  • Qualia Senolytic: Have you heard about senolytics yet? It's a class of ingredients discovered less than 10 years ago, and it's being called the biggest discovery of our time for promoting healthy aging and enhancing your physical prime. Your goals in your career and beyond require productivity. But let's be honest: The aging process is not our friend when it comes to endless energy and productivity. As we age, everyone accumulates "senescent" cells in their body. Senescent cells cause symptoms of aging, such as aches and discomfort, slow workout recoveries, and sluggish mental and physical energy associated with that "middle age" feeling. Also known as "Zombie Cells," they are old and worn out and no longer serve a useful function for our health, but they take up space and nutrients from our healthy cells. Much like pruning the yellowing and dead leaves off a plant, Qualia Senolytic removes those worn-out senescent cells to allow the rest of your cells to thrive. Take it just two days a month. The formula is non-GMO, vegan, and gluten-free, and the ingredients are meant to complement one another, factoring in the combined effect of all ingredients together. Resist aging at the cellular level and try Qualia Senolytic. Go to Qualialife.com/ROUNDTABLE for up to 50 percent off, and use code ROUNDTABLE at checkout for an additional 15 percent off. For your convenience, Qualia Senolytic is also available at select GNC locations near you.

Audio production by Ian Keyser; assistant production by Hunt Beaty.

Music: "Angeline," by The Brothers Steve

The post Why Libertarians Hate Kamala Harris' Economic Platform appeared first on Reason.com.


Partisan Border Wars

Migrants seeking asylum line up at U.S.-Mexico border | Qian Weizhong/VCG/Newscom

In this week's The Reason Roundtable, editors Matt Welch, Katherine Mangu-Ward, Nick Gillespie, and Peter Suderman scrutinize President Joe Biden's executive order updating asylum restrictions at the U.S.-Mexico border in response to illegal border crossings.

01:32—Biden's new asylum restrictions

21:38—The prosecution of political opponents: former President Donald Trump, Hunter Biden, and Steve Bannon

33:25—Weekly Listener Question

39:56—No one is reading The Washington Post

48:09—This week's cultural recommendations

Mentioned in this podcast:

"Biden Announces Sweeping Asylum Restrictions at U.S.-Mexico Border" by Fiona Harrigan

"Biden's New Asylum Policy is Both Harmful and Illegal" by Ilya Somin

"Travel Ban, Redux" by Josh Blackman

"Immigration Fueled America's Stunning Cricket Upset Over Pakistan" by Eric Boehm

"Libertarian Candidate Chase Oliver Wants To Bring Back 'Ellis Island Style' Immigration Processing" by Fiona Harrigan

"Donald Trump and Hunter Biden Face the Illogical Consequences of an Arbitrary Gun Law" by Jacob Sullum

"Hunter Biden's Trial Highlights a Widely Flouted, Haphazardly Enforced, and Constitutionally Dubious Gun Law" by Jacob Sullum

"Hunter Biden's Multiplying Charges Exemplify a Profound Threat to Trial by Jury" by Jacob Sullum

"The Conviction Effect" by Liz Wolfe

"Laurence Tribe Bizarrely Claims Trump Won the 2016 Election by Falsifying Business Records in 2017" by Jacob Sullum

"A Jumble of Legal Theories Failed To Give Trump 'Fair Notice' of the New York Charges Against Him" by Jacob Sullum

"Does Donald Trump's Conviction in New York Make Us Banana Republicans?" by J.D. Tuccille

"The Myth of the Federal Private Nondelegation Doctrine, Part 1" by Sasha Volokh

"Federal Court Condemns Congress for Giving Unconstitutional Regulatory Powers to Amtrak" by Damon Root

"Make Amtrak Safer and Privatize It" by Ira Stoll

"Biden Threatens To Veto GOP Spending Bill That Would 'Cut' Amtrak Funding to Double Pre-Pandemic Levels" by Christian Britschgi

"This Company Is Running a High-Speed Train in Florida—Without Subsidies" by Natalie Dowzicky

"Do Not Under Any Circumstances Nationalize Greyhound" by Christian Britschgi

"With Ride or Die, the Bad Boys Movies Become Referendums on Masculinity" by Peter Suderman

"D.C. Water Spent Nearly $4,000 On Its Wendy the Water Drop Mascot" by Christian Britschgi

Upcoming Reason Events:

Reason Speakeasy: Corey DeAngelis on June 11 in New York City

Send your questions to roundtable@reason.com. Be sure to include your social media handle and the correct pronunciation of your name.

Today's sponsor:

  • We all carry around different stressors—big and small. When we keep them bottled up, it can start to affect us negatively. Therapy is a safe space to get things off your chest—and to figure out how to work through whatever's weighing you down. If you're thinking of starting therapy, give BetterHelp a try. It's entirely online. Designed to be convenient, flexible, and suited to your schedule. Just fill out a brief questionnaire to get matched with a licensed therapist, and switch therapists any time for no additional charge. Get it off your chest, with BetterHelp. Visit BetterHelp.com/roundtable today to get 10 percent off your first month.

Audio production by Justin Zuckerman and John Carter

Assistant production by Luke Allen and Hunt Beaty

Music: "Angeline" by The Brothers Steve

The post Partisan Border Wars appeared first on Reason.com.

💾


Stephen Wolfram on the Powerful Unpredictability of AI

May 19, 2024, 12:00 p.m.
Joanna Andreasson/DALL-E4

Stephen Wolfram is, strictly speaking, a high school and college dropout: He left both Eton and Oxford early, citing boredom. At 20, he received his doctorate in theoretical physics from Caltech and then joined the faculty in 1979. But he eventually moved away from academia, focusing instead on building a series of popular, powerful, and often eponymous research tools: Mathematica, WolframAlpha, and the Wolfram Language. He self-published a 1,200-page work called A New Kind of Science arguing that nature runs on ultrasimple computational rules. The book enjoyed surprising popular acclaim.

Wolfram's work on computational thinking forms the basis of intelligent assistants, such as Siri. In an April conversation with Reason's Katherine Mangu-Ward, he offered a candid assessment of what he hopes and fears from artificial intelligence, and the complicated relationship between humans and their technology.

Reason: Are we too panicked about the rise of AI or are we not panicked enough?

Wolfram: Depends who "we" is. I interact with lots of people and it ranges from people who are convinced that AIs are going to eat us all to people who say AIs are really stupid and won't be able to do anything interesting. It's a pretty broad range.

Throughout human history, the one thing that's progressively changed is the development of technology. And technology is often about automating things that we used to have to do ourselves. I think the great thing technology has done is provide this taller and taller platform of what becomes possible for us to do. And I think the AI moment that we're in right now is one where that platform just got ratcheted up a bit.

You recently wrote an essay asking, "Can AI Solve Science?" What does it mean to solve science?

One of the things that we've come to expect is, science will predict what will happen. So can AI jump ahead and figure out what will happen, or are we stuck with this irreducible computation that has to be done where we can't expect to jump ahead and predict what will happen?

AI, as currently conceived, typically means neural networks that have been trained from data about what humans do. Then the idea is, take those training examples and extrapolate from those in a way that is similar to the way that humans would extrapolate.

Now can you turn that on science and say, "Predict what's going to happen next, just like you can predict what the next word should be in a piece of text"? And the answer is, well, no, not really.

One of the things we've learned from the large language models [LLMs] is that language is easier to predict than we thought. Scientific problems run right into this phenomenon I call computational irreducibility—to know what's going to happen, you have to explicitly run the rules.
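
Wolfram's own elementary cellular automata are the best-known illustration of computational irreducibility. Below is a minimal Python sketch of his Rule 30, chosen here for illustration (the interview names no specific example): no known shortcut predicts its center column, so the only way to learn what step n looks like is to explicitly run all n steps.

```python
# Rule 30, a canonical example of computational irreducibility: there is no
# known formula that jumps ahead, so predicting step n means running steps
# 1 through n. (Illustrative sketch; not code from the interview.)
WIDTH, STEPS = 63, 30
row = [0] * WIDTH
row[WIDTH // 2] = 1  # start from a single "on" cell in the middle

for _ in range(STEPS):
    print("".join("#" if cell else "." for cell in row))
    # Rule 30 update: new cell = left XOR (center OR right), cyclic edges.
    row = [row[i - 1] ^ (row[i] | row[(i + 1) % WIDTH]) for i in range(WIDTH)]
```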

Language is something we humans have created and use. The physical world, by contrast, was just delivered to us; it's not something that we humans invented. And it turns out that neural nets work well on things that we humans invented. They don't work very well on things that are just sort of wheeled in from the outside world.

Probably the reason that they work well on things that we humans invented is that their actual structure and operation is similar to the structure and operation of our brains. It's asking a brainlike thing to do brainlike things. So yes, it works, but there's no guarantee that brainlike things can understand the natural world.

That sounds very simple, very straightforward. And that explanation is not going to stop entire disciplines from throwing themselves at that wall for a little while. This feels like it's going to make the crisis in scientific research worse before it gets better. Is that too pessimistic?

It used to be the case that if you saw a big, long document, you knew that effort had to be put into producing it. That suddenly became not the case. They could have just pressed a button and got a machine to generate those words.

So now what does it mean to do a valid piece of academic work? My own view is that what can be most built upon is something that is formalized.

For example, mathematics provides a formalized area where you describe something in precise definitions. It becomes a brick that people can expect to build on.

If you write an academic paper, it's just a bunch of words. Who knows whether there's a brick there that people can build on?

In the past we've had no way to look at some student working through a problem and say, "Hey, here's where you went wrong," except for a human doing that. The LLMs seem to be able to do some of that. That's an interesting inversion of the problem. Yes, you can generate these things with an LLM, but you can also have an LLM understand what was happening.

We are actually trying to build an AI tutor—a system that can do personalized tutoring using LLMs. It's a hard problem. The first things you try work for the two-minute demo and then fall over horribly. It's actually quite difficult.

What becomes possible is you can have the [LLM] couch every math problem in terms of the particular thing you are interested in—cooking or gardening or baseball—which is nice. It's sort of a new level of human interface.

So I think that's a positive piece of what becomes possible. But the key thing to understand is the idea that an essay means somebody committed to write an essay is no longer a thing.

We're going to have to let that go.

Right. I think the thing to realize about AIs for language is that what they provide is kind of a linguistic user interface. A typical use case might be you are trying to write some report for some regulatory filing. You've got five points you want to make, but you need to file a document.

So you make those five points. You feed it to the LLM. The LLM puffs out this whole document. You send it in. The agency that's reading it has their own LLM, and they're asking their LLM, "Find out the two things we want to know from this big regulatory filing." And it condenses it down to that.

So essentially what's happened is you've used natural language as a sort of transport layer that allows you to interface one system to another.
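
As a concrete sketch of that round trip (a hypothetical illustration assuming the OpenAI Python SDK; the model name, prompts, and five points are invented, not anything specified in the conversation):

```python
# "Natural language as transport layer": one side puffs five points into a
# full document, the other side's LLM condenses it back down. The model,
# prompts, and data below are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

points = [
    "Quarterly emissions were 12 tons of CO2.",
    "That is down 8 percent from last quarter.",
    "Two diesel generators were replaced with electric ones.",
    "One monitoring sensor failed in March and was replaced.",
    "A further 5 percent reduction is expected next quarter.",
]

def ask(prompt: str) -> str:
    """One chat-completion call—the 'fluffy layer' in each direction."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Filer's side: expand the five points into a formal document.
filing = ask(
    "Write a formal regulatory filing making exactly these points:\n"
    + "\n".join(f"- {p}" for p in points)
)

# Agency's side: condense the document down to the two things it wants.
print(ask(
    "From the filing below, extract only (1) total emissions and "
    "(2) the change from last quarter:\n\n" + filing
))
```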

I have this deeply libertarian desire to say, "Could we skip the elaborate regulatory filing, and they could just tell the five things directly to the regulators?"

Well, also it's just convenient that you've got these two systems that are very different trying to talk to each other. Making those things match up is difficult, but if you have this layer of fluffy stuff in the middle, that is our natural language, it's actually easier to get these systems to talk to each other.

I've been pointing out that maybe 400 years ago was sort of a heyday of political philosophy and people inventing ideas about democracy and all those kinds of things. And I think that now there is a need and an opportunity for a repeat of that kind of thinking, because the world has changed.

As we think about AIs that end up having responsibilities in the world, how do we deal with that? I think it's an interesting moment when there should be a bunch of thinking going on about this. There is much less thinking than I think there should be.

An interesting thought experiment is what you might call the promptocracy model of government. One approach is everybody writes a little essay about how they want the world to be, and you feed all those essays into an AI. Then every time you want to make a decision, you just ask the AI, based on all these essays it has read from all these people, "What should we do?"

One thing to realize is that, in a sense, the operation of government is an attempt to make something like a machine. If you put an AI in place of the human-operated machine, I'm not sure how different it actually is, but you have these other possibilities.
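
A toy sketch of the promptocracy idea itself (again assuming the OpenAI SDK; the essays, model, and question are invented for illustration):

```python
# Toy "promptocracy": citizens' essays become standing context, and each
# policy decision is posed against all of them at once. Every name and
# string here is hypothetical.
from openai import OpenAI

client = OpenAI()

essays = {
    "alice": "I want dense, walkable cities with cheap public transit.",
    "bob": "I want low taxes and as few rules on builders as possible.",
    "carol": "I want historic neighborhoods protected from redevelopment.",
}

def decide(question: str) -> str:
    corpus = "\n\n".join(f"Essay from {name}:\n{text}"
                         for name, text in essays.items())
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Decide policy questions based only on these "
                        "citizen essays:\n\n" + corpus},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(decide("Should the old mill district be rezoned for apartments?"))
```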

The robot tutor and the government machine sound like stuff from the Isaac Asimov stories of my youth. That sounds both tempting and so dangerous when you think about how people have a way of bringing their baggage into their technology. Is there a way for us to work around that?

The point to realize is the technology itself has nothing. What we're doing with AI is kind of an amplified version of what we humans have.

The thing to realize is that the raw computational system can do many, many things, most of which we humans do not care about. So as we try and corral it to do things that we care about, we necessarily are pulling it in human directions.

What do you see as the role of competition in resolving some of these concerns? Does the intra-AI competition out there curb any ethical concerns, perhaps in the way that competition in a market might constrain behavior in some ways?

Interesting question. I do think that the society of AIs is more stable than the one AI that rules them all. At a superficial level it prevents certain kinds of totally crazy things from happening, but the reason that there are many LLMs is because once you know ChatGPT is possible, then it becomes not that difficult at some level. You see a lot of both companies and countries stepping up to say, "We'll spend the money. We'll build a thing like this." It's interesting what the improvement curve is going to look like from here. My own guess is that it goes in steps.

How are we going to screw this up? And by "we," I mean maybe people with power, maybe just general human tendencies, and by "this," I mean making productive use of AI.

The first thing to realize is AIs will be suggesting all kinds of things that one might do just as a GPS gives one directions for what one might do. And many people will just follow those suggestions. But one of the features it has is you can't predict everything about what it will do. And sometimes it will do things that aren't things we thought we wanted.

The alternative is to tie it down to the point where it will only do the things we want it to do and it will only do things we can predict it will do. And that will mean it can't do very much.

We arguably do the same thing with human beings already, right? We have lots of rules about what we don't let people do, and sometimes we probably suppress possible innovation on the part of those people.

Yes, that's true. It happens in science. It's a "be careful what you wish for" situation because you say, "I want lots of people to be doing this kind of science because it's really cool and things can be discovered." But as soon as lots of people are doing it, it ends up getting this institutional structure that makes it hard for new things to happen.

Is there a way to short circuit that? Or should we even want to?

I don't know. I've thought about this for basic science for a long time. Individual people can come up with original ideas. By the time it's institutionalized, that's much harder. Having said that: As the infrastructure of the world, which involves huge numbers of people, builds up, you suddenly get to this point where you can see some new creative thing to do, and you couldn't get there if it was just one person beavering away for decades. You need that collective effort to raise the whole platform.

This interview has been condensed and edited for style and clarity.

The post Stephen Wolfram on the Powerful Unpredictability of AI appeared first on Reason.com.


Review: Klara and the Sun Tackles AI Regulation

May 10, 2024, 12:00 p.m.
Klara and the Sun/Knopf
Joanna Andreasson/DALL-E4

The literal and figurative search for enlightenment by a solar-powered "Artificial Friend" drives the plot of Klara and the Sun, a 2021 novel by Kazuo Ishiguro. Purchased to serve as a companion to a fragile and isolated genetically augmented child, the robot Klara's autonomy and potential are limited by strict constraints on AI.

Klara's primitive, spontaneous sun worship and deep loyalty to her charge govern her choices in ways she only barely understands.

Over the course of the novel, it becomes clear that Klara is not alone—her humans are equally hemmed in by state, society, and their own fallibility.

The book's beautiful prose floats effortlessly over heavy questions of free will, epistemology, and faith.

The post Review: Klara and the Sun Tackles AI Regulation appeared first on Reason.com.


We Can't Imagine the Future of AI

May 2, 2024, 12:00 p.m.
Illustration: Joanna Andreasson
Joanna Andreasson/DALL-E4

In the June 2024 issue, we explore the ways that artificial intelligence is shaping our economy and culture. The stories and art are about AI—and occasionally by AI. (Throughout the issue, we have rendered all text generated by AI-powered tools in blue.) To read the rest of the issue, go here.

Vernor Vinge was the bard of artificial intelligence, a novelist and mathematician who devoted his career to imagining the nearly unimaginable aftermath of the moment when technology outpaces human capability. He died in March, as we were putting together Reason's first-ever AI issue, right on the cusp of finding out which of his fanciful guesses would turn out to be right.

In 2007, Reason interviewed Vinge about the Singularity—the now slightly out-of-favor term he popularized for that greater-than-human intelligence event horizon. By that time the author of A Fire Upon the Deep and A Deepness in the Sky had, for years, been pinning the date of the Singularity somewhere between 2005 and 2030. To Reason, he offered a softer prediction: If the rapid doubling of processing power known as Moore's law "continues for a decade or two," that "makes it plausible that very interesting A.I. developments might occur before 2030."

That prophecy, at least, has already come true.

Innovation in AI is happening so quickly that the landscape changed dramatically even from the time Reason conceived this issue to the time you are reading it. As a consequence, this particular first draft of history is likely to become rapidly, laughably outdated. (You can read some selections from our archives on the topic.) As we worked on this issue, new large language models (LLMs) and chatbots cropped up every month, image generation went from producing amusing curiosities with the wrong number of fingers to creating stunningly realistic video from text prompts, and the ability to outsource everything from coding tasks to travel bookings went from a hypothetical to a reality. And those were just the free or cheap tools available to amateurs and journalists.

Throughout the issue, we have rendered all text generated by AI-powered tools in blue. Why? Because when we asked ChatGPT to tell us the color of artificial intelligence, that's what it picked:

The color that best encapsulates the idea of artificial intelligence in general is a vibrant shade of blue. Blue is often associated with intelligence, trust, and reliability, making it an ideal color to represent the concept of AI. It also symbolizes the vast potential and endless possibilities that AI brings to the world of technology.

Yet the very notion that any kind of bright line can be drawn between human- and machine-generated content is almost certainly already obsolete.

Reason has a podcast read by a version of my voice that is generated entirely artificially. Our producers use dozens of AI tools to tweak, tidy, and improve our video. A few images generated using AI have appeared in previous issues—though they run rampant in this issue, with captions indicating how they were made. I suspect one of our web developers is just three AIs in a trenchcoat. In this regard, Reason is utterly typical in how fast we have incorporated AI into our daily business.

The best we can offer is a view from our spot, nestled in the crook of an exponential curve. Vinge and others like him long believed themselves to be at such an inflection point. In his 1993 lecture "The Coming Technological Singularity: How To Survive in the Post-Human Era," Vinge said: "When I began writing science fiction in the middle '60s, it seemed very easy to find ideas that took decades to percolate into the cultural consciousness; now the lead time seems more like 18 months." That lead time is now measured in minutes, so he may have been onto something. This issue is an attempt to capture this moment when the possibilities of AI are blooming all around us—and before regulators have had a chance to screw it up.

"Except for their power to blow up the world," Vinge mused in 2007, "I think governments would have a very hard time blocking the Singularity. The possibility of governments perverting the Singularity is somewhat more plausible to me."

They are certainly trying. As Greg Lukianoff of the Foundation for Individual Rights and Expression testified at a February congressional hearing about AI regulation: "Yes, we may have some fears about the proliferation of AI. But what those of us who care about civil liberties fear more is a government monopoly on advanced AI. Or, more likely, regulatory capture and a government-empowered oligopoly that privileges a handful of existing players….Far from reining in the government's misuse of AI to censor, we will have created the framework not only to censor but also to dominate and distort the production of knowledge itself."

Those new pathways for knowledge production and other unexpected outcomes are the most exciting prospects for AI, and the ones Vinge toyed with for decades. What's most interesting is not what AI will do to us, or for us; it's what AI will do that we can barely imagine.

As the physicist and engineer Stephen Wolfram says, "One of the features [AI] has is you can't predict everything about what it will do. And sometimes it will do things that aren't things we thought we wanted. The alternative is to tie it down to the point where it will only do the things we want it to do and it will only do things we can predict it will do. And that will mean it can't do very much."

Even as we worry about the impact of AI on art, sex, education, health care, labor, science, movies, and war, it is Vinge's imaginative, nonjudgmental vision that should inspire us.

"I think that if the Singularity can happen, it will," Vinge told Reason in 2007. "There are lots of very bad things that could happen in this century. The Technological Singularity may be the most likely of the noncatastrophes."


An image generated using the prompt, "Illustration of AI as a doctor, teacher, poet, scientist, warlord, actor, journalist, artist, and coder." (Illustration: Joanna Andreasson/DALL-E4)

Key AI Terms

By Claude 3 Opus

AI (Artificial Intelligence): The simulation of human intelligence processes by machines, especially computer systems, including learning, reasoning, and self-correction.

Gen AI (Generative AI): A subset of AI that creates new content, such as text, images, audio, and video, based on patterns learned from training data.

Prompt: In the context of AI, a prompt is a piece of text, an image, or other input data provided to an AI system to guide its output or response.

LLM (Large Language Model): A type of AI model trained on vast amounts of text data, capable of understanding and generating human-like text based on the input it receives.

Neural Net (Neural Network): A computing system inspired by the biological neural networks in the human brain, consisting of interconnected nodes that process and transmit information, enabling the system to learn and make decisions.

GPT (Generative Pre-trained Transformer): A type of large language model developed by OpenAI, trained on a diverse range of internet text to generate human-like text, answer questions, and perform various language tasks.

Hallucination: In AI, hallucination refers to an AI system generating output that is not grounded in reality or its training data, often resulting in nonsensical or factually incorrect statements.

Compute: Short for computational resources, such as processing power and memory, required to run AI models and perform complex calculations.

Turing Test: A test proposed by Alan Turing to determine whether a machine can exhibit intelligent behavior indistinguishable from that of a human, where a human evaluator engages in a conversation with both a human and a machine and tries to distinguish between them based on their responses.

Machine Learning: A subset of AI that focuses on the development of algorithms and statistical models that enable computer systems to improve their performance on a specific task through experience and data, without being explicitly programmed.

CLAUDE 3 OPUS is a subscription-supported large language model developed by Anthropic, an AI startup. 

The post We Can't Imagine the Future of AI appeared first on Reason.com.


We're Rolling Out Reason Plus!

February 29, 2024, 1:37 p.m.
Ad-Free Browsing!

If you're a paid digital subscriber or a donor (of $50 or more) and you're logged into reason.com you should be seeing a beautiful, user-friendly, ad-free version of our website right now!

That's because you've been automatically upgraded to Reason Plus, our new digital product that includes:

  • Ad-free browsing at reason.com. You can browse the site without video ads, popups, overlays, and other distracting third-party advertisements.
  • Invitations to exclusive live online events featuring Reason journalists.
  • Commenting privileges on all reason.com posts that have comments enabled. That's right, to comment on reason.com posts (except for Volokh Conspiracy blog posts) you'll need to be a Reason Plus subscriber. Recent commenters have been grandfathered in with commenting privileges (but no other Reason Plus benefits) for the time being.

You also have all the benefits of a digital subscription including:

  • Early full access to the new issue of the print magazine—days before the print magazine is mailed to print subscribers and weeks before the content is published online for nonsubscribers.
  • Instant access to the newest print edition articles in a variety of mobile- and desktop-friendly formats, including regular text pages at reason.com, your choice of two interactive "flip style" digital readers, and downloadable PDF files for reading in the applications of your choice online or offline.
  • Instant access to all Reason archives going back to 1968 in interactive reader and PDF formats.

If you are not a digital subscriber, you can sign up for Reason Plus here and start getting all the benefits today for just $25 a year!

Current digital subscribers have been upgraded to Reason Plus at no charge for the duration of your subscription term. You'll be renewed as a Reason Plus subscriber after that. (As always, you are free to cancel your subscription for a prorated refund at any time.)

Instructions if you are having trouble:
Once you subscribe, please make sure you are logged in to your Reason user account here. You can recover/reset your user account password here using your email address.

If you do not have a reason.com account, please create one here.

Add your new Reason Plus subscription number to your reason.com account settings here and you're good to go! Your Reason Plus subscription number can be found in your subscription confirmation email.

And we are working on further changes to make your reason.com browsing experience even better. Please send any trouble reports to: digital-help@reason.com.

Enjoy Reason Plus!

The post We're Rolling Out Reason Plus! appeared first on Reason.com.


Goodbye, Navalny

Framed memorial image of Alexei Navalny | Edna Leshowitz/ZUMAPRESS/Newscom

In this week's The Reason Roundtable, Katherine Mangu-Ward is in the driver's seat, alongside Nick Gillespie and special guests Zach Weissmueller and Eric Boehm. The editors react to the latest plot twists in Donald Trump's various legal proceedings and the death of Russian opposition leader Alexei Navalny.

00:41—The trials of Donald Trump in Georgia and New York

25:04—Weekly Listener Question

33:23—Sora, a new AI video tool

43:55—The death of Alexei Navalny

49:58—This week's cultural recommendations

Mentioned in this podcast:

"How a New York Judge Arrived at a Staggering 'Disgorgement' Order Against Trump," by Jacob Sullum

"Prosecutor Fani Willis Touts the Value of Cash, but What About the Rest of Us?" by J.D. Tuccille

"Trump Ordered To Pay $364 Million for Inflating His Assets in Civil Fraud Trial," by Joe Lancaster

"Alvin Bragg Is Trying To Punish Trump for Something That Is Not a Crime," by Jacob Sullum

"Alexei Navalny's Death Is a Timely Reminder of How Much Russia Sucks," by Eric Boehm

"Why Is Nike Stomping on Independent Creators?" by Kevin P. Alexander

"Bury My Sneakers at Wounded Knee," by Nick Gillespie

"Creation Myth: Does innovation require intellectual property rights?" by Douglas Clement

"A Private Libertarian City in Honduras," by Zach Weissmueller

"The Real Reasons Africa Is Poor—and Why It Matters," by Nick Gillespie

Bono's Ukraine Speech

"Justice or persecution? The Trump dilemma"

Send your questions to roundtable@reason.com. Be sure to include your social media handle and the correct pronunciation of your name.

Today's sponsor:

  • ZBiotics: Pre-Alcohol Probiotic Drink is the world's first genetically engineered probiotic. It was invented by Ph.D. scientists to tackle rough mornings after drinking. Here's how it works: When you drink, alcohol gets converted into a toxic byproduct in the gut. It's this byproduct, not dehydration, that's to blame for your rough next day. ZBiotics produces an enzyme to break this byproduct down. Just remember to make ZBiotics your first drink of the night and to drink responsibly, and you'll feel your best tomorrow. Go to zbiotics.com/roundtable to get 15 percent off your first order when you use code ROUNDTABLE at checkout. ZBiotics is backed with a 100 percent money-back guarantee, so if you're unsatisfied for any reason, they'll refund your money, no questions asked.

Audio production by Ian Keyser; assistant production by Hunt Beaty.

Music: "Angeline," by The Brothers Steve

The post Goodbye, Navalny appeared first on Reason.com.
