
Glenn Loury on Economics, Black Conservatism, and Crack Cocaine

August 4, 2024, 12:00
Glenn Loury | Photo: Ken Richardson

"All you need, besides the cocaine, is a lighter, water, baking soda, some Q-Tips, high-proof alcohol, a ceramic mug, and a piece of cheesecloth or an old T-shirt," writes Glenn Loury in his riveting Late Admissions: Confessions of a Black Conservative. The book is surely the only memoir by an Ivy League economist that includes a recipe for crack cocaine along with technical discussions of Karl Marx, Ludwig von Mises, Friedrich Hayek, and Albert O. Hirschman.

Born in 1948 and raised working class on Chicago's predominantly black South Side, Loury tells a story of self-invention, ambition, hard work, addiction, and redemption that channels Benjamin Franklin's Autobiography, Richard Wright's Native Son, Saul Bellow's The Adventures of Augie March, and Milton Friedman's Capitalism and Freedom. An alternative title might have been "Rise Above It!," the slogan of a pyramid-scheme cosmetics company on which he squandered his savings as a young man in Chicago.

Now a chaired professor at Brown University and the host of The Glenn Show, a wildly popular YouTube offering, Loury worked his way through community college, Northwestern, and a Massachusetts Institute of Technology Ph.D., became the first tenured black economist at Harvard, emerged as a ubiquitous commenter on race and class in the pages of The New Republic and The Atlantic, was offered a post in the Ronald Reagan administration, and was then publicly humiliated after affairs, arrests, and addiction all became public, threatening the end of his professional and personal life. With the support of his wife, Linda Datcher Loury (herself a highly regarded economist), Alcoholics Anonymous (A.A.), and colleagues, Loury managed to rise above it and not just rebuild his academic reputation and relationships with his children, but also gain a unique perspective on economics, individualism, and community.

Reason: When you say you are a black conservative, what does that mean?

Glenn Loury: Well, I think of a few things. One of them is thinking that markets get it right in terms of the resource allocation problem and that the planning instinct and centralized, politically controlled interference in the economy is suspect. Of course, there are exceptions. The general predisposition is that I like prices. I like laissez faire. And I think the first and second fundamental theorems of welfare economics are true, that we get efficient resource allocation when we allow the interplay of self-interest. You know, classical liberal stuff.

That makes you a libertarian, not a conservative.

Well, I was going to go the Edmund Burke route. I was going to say not discarding everything that's been handed to me from the past generations. Respect for tradition, reverence for some of these things that we've been handed down. So when people can't define who's a man and who's a woman, I hold my wallet. I'm a little bit skeptical about this nouveau thing.

But the "black conservative" comes out of I think a reflex or reaction to the dilemma that we African Americans face as the descendants of slaves, a marginal population disadvantaged in various ways and struggling for equality, dignity, inclusion, freedom.

I think there's a trap in that situation: the trap of falling into a status of victim and of looking to the other, the white man, the system to raise our children and deliver us from the challenge which everybody faces of living life in good faith, of, as Jordan Peterson puts it, standing up straight with your shoulders back. Of confronting the reality that there's some stuff that nobody can do for you. This posture of dependence, these arguments for reparations, this invocation of structural and systemic [racism], when the real questions are of responsibility and role.

In your book you cover your education in economics, but it's also a memoir that traffics a lot with addiction, both with drugs and sex. Can economics explain addictive behavior and self-destructive behavior?

Well, I think of the late Gary Becker. He has a paper on addiction. And I think of George Stigler and Becker's classic paper "De Gustibus Non Est Disputandum"—about taste there can be no dispute. They do it all in terms of intertemporal preferences, where you build up a taste for certain kinds of pleasures, and you invest in them.

Did they get it right?

No, I don't think they got it right. I thought it was reductive, closed off. [It's an] "everything's going to be optimization; we just have to find the right objective function" way of looking at the world. I much prefer [game theorist and Nobel laureate] Tom Schelling's engagement with the problems of self-command, as he called it, and addiction, which was understanding the conflict within the single individual who at one point in time would want not to smoke or to use cocaine, but at another point in time would find themselves, notwithstanding their understanding that this is not good for them, being compelled to do it nonetheless, and the strategic interaction between those two types within the same person.

Some critics of capitalism say that drug addiction is the apotheosis of capitalism, that it creates a bunch of things that enslave people. But your story, in one way, is about learning self-command and control over self-destructive behaviors. Is there a larger lesson from your struggles with addiction and your ultimate triumph over it?

Yeah, A.A. saved my life. That therapeutic community, that halfway house I lived in for five months in 1988: They saved my life. I went to meetings faithfully for years. And I abstained. I was clean and sober for five years. But I eventually drifted away from the A.A. abstinence philosophy.

I did have a period where I was very religious. I was born again. This initiated during the period when I was struggling to recover from drug addiction but persisted long after I was out of the woods. It changed my perspective. The hope, the whole experience of going through rehab and what they did, it quieted me down. I started reading the Bible even before I was professing genuine religious conviction. I started memorizing passages after I began to confess some belief, going to meetings, living within myself, a kind of humility. I'm not in control. Let go and let God.

What is the work that you're most proud of as an economist?

I think my best technical paper was published in Econometrica in 1981. It's called "Intergenerational Transfers and the Distribution of Earnings." It applied what at the time were state-of-the-art technical methods in dynamic optimization and the behavior of dynamic stochastic systems to the problem of inequality. It formalized the idea that young people depend on the resources available to their parents, in part, to realize their productive potential as workers and economic agents. Investments made early in life by parents in children affect the productivity of children later in life. That productivity is also dependent on other factors beyond parental control that are random, but it depends on the resources that are available. There cannot be perfect markets to allow for borrowing forward against future earnings potential, so as to realize the investment possibilities. If a parent doesn't have the resources to fund the investment themselves, there's no place to go to borrow to get piano lessons for a kid who might develop into a virtuoso pianist.

As a consequence, inequality has resource allocation consequences. Some parents have a lot of resources; others have very little. But the kids all have comparable potential, and there's diminishing returns to investing in kids. The net result is that if you could move money from rich parents to poor parents and indirectly move investment in kids from rich families to poor families, the loss in the former would outweigh the gain in the latter.
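The redistribution logic Loury describes can be sketched numerically. Below is a toy model of my own construction, not the actual specification from the 1981 Econometrica paper: each child's adult earnings are a concave function of parental investment (here, a square root), so shifting a dollar of investment from a heavily funded child to a lightly funded one raises total earnings.

```python
import math

# Toy illustration (not Loury's 1981 model): a child's adult earnings
# are a concave function of parental investment, e.g. f(x) = sqrt(x),
# so each extra dollar invested yields less than the previous one.
def child_earnings(investment):
    return math.sqrt(investment)

rich_invest, poor_invest = 90.0, 10.0

before = child_earnings(rich_invest) + child_earnings(poor_invest)

# Move $20 of investment from the rich family's child to the poor family's.
transfer = 20.0
after = child_earnings(rich_invest - transfer) + child_earnings(poor_invest + transfer)

print(f"total earnings before transfer: {before:.2f}")  # ~12.65
print(f"total earnings after transfer:  {after:.2f}")   # ~13.84
```

Because of the concavity, the poor child's gain from the extra $20 exceeds the rich child's loss, so aggregate earnings rise, which is exactly why credit constraints that block such investments have resource allocation consequences.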

Is that a rebuttal to the idea that you can rise above it on your own? Throughout your work you make a case that if we want a more equitable society, we have to do something to help kids whose parents don't have any resources.

I see them as two different realms of argument about human experience. On the one hand, I'm talking about how there can be market failures and incompleteness and informational impactedness, and externalities, and property rights being unclear, and things like that. And you can make arguments about a minimal role for government intervention to deal with public goods problems and environmental externality problems and perhaps market failures.

On the other hand, if I'm talking to an individual about how to live their life, about whether or not to delegate responsibility for their life to outside forces or to live in good faith, to take responsibility for what you do, that's existential, almost spiritual. It's how to be in the world as opposed to how the world works.

You're on college campuses now, and campuses are more fraught than they ever have been. Do you feel like that message has disappeared?

I think so, especially with the debate that's going on presently about the war in Gaza and the campus protests occupying spaces and setting up tents on the campus green and canceling graduations and seizing buildings and engaging in civil disobedience and whatnot.

But that all comes in the aftermath of the culture war that we've been fighting about critical race theory and diversity, equity, and inclusion. These arguments have been around for a while, and I've tended to be on the side of suspicion of the so-called progressive sentiment. There's too much focus on race and sex and sexuality as identities in the context of the university environment, where our main goal is to acquaint our students with the cultural inheritance of civilization. Their narrow focus on being this particular thing and chopping up the curriculum to make sure that it gets representative treatment feels stifling to me, especially if you let that spill over into what can be said.

The therapeutic sentiment. The kids have these sensibilities. We have to be mindful of them. We don't want to offend. We don't want anyone to be uncomfortable. No, the whole point is to make you uncomfortable. You came thinking something that was really a very superficial and undeveloped framework for thinking; I'm going to expose you to some ideas that run against that grain, and you're going to have to learn how to grapple with them. And in your maturity, you may well return to some of these, but you will do so with a much firmer sense of exactly what it is that you're affirming. I want to educate you. I don't want to placate you. I'm not here to make you feel better.

I do think there's too much reliance on system-based accounts and much less of an embrace of responsibilities that we as individuals have in our education, our politics, our social and economic lives.

What is the case against affirmative action?

The case against affirmative action: It's unfair to people who are disfavored. They didn't do anything to be in the group that you decided you wanted to put your thumb on the scale for. It has concerning incentive problems. If you belong to the favored group, it's OK to have a B average and be in the 70th percentile of test takers. And you can get into UCLA or Stanford or Yale if you're black. But if you're white, you better have an A-minus average. And you'd better be at the 90th percentile of the test takers.

The systematic implementation of affirmative action amplifies the concerns that one might have about stigmatizing African Americans who would be presumed to be beneficiaries. This is the classic complaint of [Supreme Court Justice] Clarence Thomas, that his Yale law degree isn't worth anything because it's got an asterisk on it because of affirmative action.

There's something undignified about not being held to the same standard as other people and everybody assuming that because of the sufferings of your ancestors you're somehow in need of a special dispensation. I don't regard that as equality. You're not standing on equal ground when you're dependent upon such a dispensation. In the case of affirmative action, it's a Band-Aid. You're treating a symptom and not the underlying cause. The underlying reality is there are population differences in the expressed productivity of the agents in question. The African Americans, on average, are producing fewer people in relative numbers who are exhibiting these kinds of skills that your instruments of assessment are intended to measure. And if you don't remedy that problem, you're never going to get truly to equality.

Where are these population differences coming from? Is it primarily an effect of cultural change? Is it inherited differences in economic status and opportunity? Is it genetic?

I don't think it's genetic, though I can't rule out that genetics could have an effect. I'm just not persuaded by the evidence of the early childhood developmental stuff. I don't underestimate the differences in the effectiveness of primary and secondary education. This is not just race. This is race and class and geography and whatnot. I think we'd do ourselves as a society a lot of good if we were to follow the sort of wholesale reform movement in K-12, including charter schools and more competition to the union-dominated public provision sector of that part of our social economy.

But culture is a tough one. I give a lot of evidence indirectly in my memoir about the effects of culture on life experience. The culture that nurtured me coming up in Chicago had its positives. It also had its norms, values, ideals, what a community affirms as being a life well lived, how people spend their time, about parenting, things of this kind.

I read this book by two Asian sociologists, Min Zhou and Jennifer Lee, called The Asian American Achievement Paradox, and it attempts to explain, based on interview data from a couple hundred families in Southern California, how it is that these Asian communities are able to send their youngsters to places like Harvard and Stanford in such large numbers. And it basically makes a cultural argument. One of the chapters is entitled "The Asian F." It turns out that the Asian F is an A-minus, according to some of their respondents. I don't think you can discount the importance of that kind of cultural reinforcement, because at the end of the day what matters is how people spend their time.

You're a critic of race-based policies, but you also get kind of pissed when people dismiss the black experience. You say being a black American is a part of your identity. Is there a way for us to bring our individual cultural and ethnic heritage to the conversation that doesn't divide us or put us in one group or another?

We all have a story. We all have a narrative and a cultural inheritance. And yet underneath we are kind of all the same. Our struggles are comprehensible to each other, and our triumphs and our failures are things that we can relate to as human beings. And that's how we should be relating to each other.

I'm in my 70s now, and I've just written a book about my life. So who am I? What does it amount to? I'm the kid that really did grow up immersed in an almost exclusively black community on the South Side of Chicago. The music that I listened to, the food that I ate, the stories that I was told and that I told to my own children in turn. These things are related to the history, the struggles and triumphs, the dreams and hopes of African-American people. That's a part of who I am. And it annoys me when people attempt to say "get over it" to me. They're not respecting me when they tell me that race is not a deep thing about people.

It's a superficial thing, I grant you that. I grant you the melanin in the skin, the genetic markers that are manifest in my physical presentation, don't add up to very much. But the dreams of my fathers and others, the lore, the narrative about who "we" are, that's not arbitrary and it's not trivial. And it seems to me sociologically naive in the extreme to just want to move past that. That's a part of who people actually are.

But I struggle with this, because I also want to tell my students not to wear that too heavily, not to let it blinker them and prevent them from being able to engage with, for example, the inheritance of European civilization in which we are embedded. That's also your inheritance. Tolstoy is mine. Einstein is mine. And yours. I want to say to youngsters of whatever persuasion: Don't be blinkered. Don't be so parochial that you miss out on the best of what's been written and thought and said in human culture.


This interview has been condensed and edited for style and clarity.

The post Glenn Loury on Economics, Black Conservatism, and Crack Cocaine appeared first on Reason.com.


Stephen Wolfram on the Powerful Unpredictability of AI

May 19, 2024, 12:00
Photo: Julian Dufort/Midjourney
Joanna Andreasson/DALL-E4

Stephen Wolfram is, strictly speaking, a high school and college dropout: He left both Eton and Oxford early, citing boredom. At 20, he received his doctorate in theoretical physics from Caltech and then joined the faculty in 1979. But he eventually moved away from academia, focusing instead on building a series of popular, powerful, and often eponymous research tools: Mathematica, WolframAlpha, and the Wolfram Language. He self-published a 1,200-page work called A New Kind of Science arguing that nature runs on ultrasimple computational rules. The book enjoyed surprising popular acclaim.

Wolfram's work on computational thinking forms the basis of intelligent assistants, such as Siri. In an April conversation with Reason's Katherine Mangu-Ward, he offered a candid assessment of what he hopes and fears from artificial intelligence, and the complicated relationship between humans and their technology.

Reason: Are we too panicked about the rise of AI or are we not panicked enough?

Wolfram: Depends who "we" is. I interact with lots of people and it ranges from people who are convinced that AIs are going to eat us all to people who say AIs are really stupid and won't be able to do anything interesting. It's a pretty broad range.

Throughout human history, the one thing that's progressively changed is the development of technology. And technology is often about automating things that we used to have to do ourselves. I think the great thing technology has done is provide this taller and taller platform of what becomes possible for us to do. And I think the AI moment that we're in right now is one where that platform just got ratcheted up a bit.

You recently wrote an essay asking, "Can AI Solve Science?" What does it mean to solve science?

One of the things that we've come to expect is, science will predict what will happen. So can AI jump ahead and figure out what will happen, or are we stuck with this irreducible computation that has to be done where we can't expect to jump ahead and predict what will happen?

AI, as currently conceived, typically means neural networks that have been trained from data about what humans do. Then the idea is, take those training examples and extrapolate from those in a way that is similar to the way that humans would extrapolate.

Now can you turn that on science and say, "Predict what's going to happen next, just like you can predict what the next word should be in a piece of text"? And the answer is, well, no, not really.

One of the things we've learned from the large language models [LLMs] is that language is easier to predict than we thought. Scientific problems run right into this phenomenon I call computational irreducibility—to know what's going to happen, you have to explicitly run the rules.
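Computational irreducibility is easy to make concrete with Wolfram's own favorite object, the elementary cellular automaton Rule 30. The sketch below (a minimal illustration, not any official Wolfram Language code) shows the point: there is no known shortcut formula for the state after n steps; to find out, you explicitly run all n steps of the rule.

```python
# Minimal sketch of computational irreducibility via Rule 30:
# to learn the state after n steps, you must actually run n steps.
def rule30_step(cells):
    n = len(cells)
    # Each cell's next value depends on its left, self, and right
    # neighbors (wrapping at the edges). Rule 30: new = left XOR (self OR right).
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

def run(cells, steps):
    for _ in range(steps):
        cells = rule30_step(cells)
    return cells

# Start from a single "on" cell in the middle and evolve 15 steps.
width = 31
state = [0] * width
state[width // 2] = 1
final = run(state, 15)
print("".join("#" if c else "." for c in final))
```

Despite the triviality of the update rule, the pattern that emerges is famously irregular, which is why predicting step n without simulating steps 1 through n-1 is not, so far as anyone knows, possible.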

Language is something we humans created and use. The physical world, by contrast, was just delivered to us; it's not something that we humans invented. And it turns out that neural nets work well on things that we humans invented. They don't work very well on things that are just sort of wheeled in from the outside world.

Probably the reason that they work well on things that we humans invented is that their actual structure and operation is similar to the structure and operation of our brains. It's asking a brainlike thing to do brainlike things. So yes, it works, but there's no guarantee that brainlike things can understand the natural world.

That sounds very simple, very straightforward. And that explanation is not going to stop entire disciplines from throwing themselves at that wall for a little while. This feels like it's going to make the crisis in scientific research worse before it gets better. Is that too pessimistic?

It used to be the case that if you saw a big, long document, you knew that effort had to be put into producing it. That suddenly became not the case. They could have just pressed a button and got a machine to generate those words.

So now what does it mean to do a valid piece of academic work? My own view is that what can be most built upon is something that is formalized.

For example, mathematics provides a formalized area where you describe something in precise definitions. It becomes a brick that people can expect to build on.

If you write an academic paper, it's just a bunch of words. Who knows whether there's a brick there that people can build on?

In the past we've had no way to look at some student working through a problem and say, "Hey, here's where you went wrong," except for a human doing that. The LLMs seem to be able to do some of that. That's an interesting inversion of the problem. Yes, you can generate these things with an LLM, but you can also have an LLM understand what was happening.

We are actually trying to build an AI tutor—a system that can do personalized tutoring using an LLM. It's a hard problem. The first things you try work for the two-minute demo and then fall over horribly. It's actually quite difficult.

What becomes possible is you can have the [LLM] couch every math problem in terms of the particular thing you are interested in—cooking or gardening or baseball—which is nice. It's a sort of a new level of human interface.

So I think that's a positive piece of what becomes possible. But the key thing to understand is the idea that an essay means somebody committed to write an essay is no longer a thing.

We're going to have to let that go.

Right. I think the thing to realize about AIs for language is that what they provide is kind of a linguistic user interface. A typical use case might be you are trying to write some report for some regulatory filing. You've got five points you want to make, but you need to file a document.

So you make those five points. You feed it to the LLM. The LLM puffs out this whole document. You send it in. The agency that's reading it has their own LLM, and they're asking their LLM, "Find out the two things we want to know from this big regulatory filing." And it condenses it down to that.

So essentially what's happened is you've used natural language as a sort of transport layer that allows you to interface one system to another.

I have this deeply libertarian desire to say, "Could we skip the elaborate regulatory filing, and they could just tell the five things directly to the regulators?"

Well, also it's just convenient that you've got these two systems that are very different trying to talk to each other. Making those things match up is difficult, but if you have this layer of fluffy stuff in the middle, that is our natural language, it's actually easier to get these systems to talk to each other.

I've been pointing out that maybe 400 years ago was sort of a heyday of political philosophy and people inventing ideas about democracy and all those kinds of things. And I think that now there is a need and an opportunity for a repeat of that kind of thinking, because the world has changed.

As we think about AIs that end up having responsibilities in the world, how do we deal with that? I think it's an interesting moment when there should be a bunch of thinking going on about this. There is much less thinking than I think there should be.

An interesting thought experiment is what you might call the promptocracy model of government. One approach is everybody writes a little essay about how they want the world to be, and you feed all those essays into an AI. Then every time you want to make a decision, you just ask the AI based on all these essays that you read from all these people, "What should we do?"

One thing to realize is that in a sense, the operation of government is an attempt to make something like a machine. And in a sense, you put an AI in place rather than the human-operated machine, not sure how different it actually is, but you have these other possibilities.

The robot tutor and the government machine sound like stuff from the Isaac Asimov stories of my youth. That sounds both tempting and so dangerous when you think about how people have a way of bringing their baggage into their technology. Is there a way for us to work around that?

The point to realize is the technology itself has nothing. What we're doing with AI is kind of an amplified version of what we humans have.

The thing to realize is that the raw computational system can do many, many things, most of which we humans do not care about. So as we try and corral it to do things that we care about, we necessarily are pulling it in human directions.

What do you see as the role of competition in resolving some of these concerns? Does the intra-AI competition out there curb any ethical concerns, perhaps in the way that competition in a market might constrain behavior in some ways?

Interesting question. I do think that the society of AIs is more stable than the one AI that rules them all. At a superficial level it prevents certain kinds of totally crazy things from happening, but the reason that there are many LLMs is because once you know ChatGPT is possible, then it becomes not that difficult at some level. You see a lot of both companies and countries stepping up to say, "We'll spend the money. We'll build a thing like this." It's interesting what the improvement curve is going to look like from here. My own guess is that it goes in steps.

How are we going to screw this up? And by "we," I mean maybe people with power, maybe just general human tendencies, and by "this," I mean making productive use of AI.

The first thing to realize is AIs will be suggesting all kinds of things that one might do just as a GPS gives one directions for what one might do. And many people will just follow those suggestions. But one of the features it has is you can't predict everything about what it will do. And sometimes it will do things that aren't things we thought we wanted.

The alternative is to tie it down to the point where it will only do the things we want it to do and it will only do things we can predict it will do. And that will mean it can't do very much.

We arguably do the same thing with human beings already, right? We have lots of rules about what we don't let people do, and sometimes we probably suppress possible innovation on the part of those people.

Yes, that's true. It happens in science. It's a "be careful what you wish for" situation because you say, "I want lots of people to be doing this kind of science because it's really cool and things can be discovered." But as soon as lots of people are doing it, it ends up getting this institutional structure that makes it hard for new things to happen.

Is there a way to short circuit that? Or should we even want to?

I don't know. I've thought about this for basic science for a long time. Individual people can come up with original ideas. By the time it's institutionalized, that's much harder. Having said that: As the infrastructure of the world, which involves huge numbers of people, builds up, you suddenly get to this point where you can see some new creative thing to do, and you couldn't get there if it was just one person beavering away for decades. You need that collective effort to raise the whole platform.

This interview has been condensed and edited for style and clarity.

The post Stephen Wolfram on the Powerful Unpredictability of AI appeared first on Reason.com.
