What's Left of Philosophy

80 | Grab Bag Special Episode with Michael Peterson! Utilitarian Harems, Nietzschean Ciphers, and Cowardly Chatbots

January 09, 2024 Lillian Cicerchia, Owen Glyn-Williams, Gil Morejón, and William Paris Season 1 Episode 80

In this nonstandard episode, Gil and Owen are joined by Michael Peterson to talk about how dreadful utilitarianism is, consider some of the offers that folks have made to come guest on the show, and reflect on how deeply unimpressive LLMs are when it comes to actually taking a position. Just having some fun with it! Video of the recording is available to our supporters on Patreon.

leftofphilosophy.com | @leftofphil

References:

National Council on Disability, Response to Singer
https://ncd.gov/newsroom/04232015

münecat, "Sovereign Citizens: Pseudolaw & Disorder":
https://youtu.be/KcxZFmKrxR8?si=s3Xu_nH7dS6NkrWd

music:

Vintage Memories by Schematist | https://schematist.bandcamp.com
Connect by Astrale | https://go-stream.link/sp-astrale
START OVER by HYMN | https://get.slip.stream/g3FFTJ
My Space by Overu | https://go-stream.link/sp-overu 

Speaker 1:

2024: theory out, theory gone. Evidence out. Evidence? No, no, no. In: results.

Speaker 3:

Cinesomatics and results, baby.

Speaker 1:

Unmeasurable. Unmeasurable results in.

Speaker 3:

Hello, this is What's Left of Philosophy. I'm Gil, and here with me today we've got the boy Owen. We're doing something a little bit out of the ordinary today. For a variety of reasons, neither Lillian nor Will could make it, so we thought we'd call in some reinforcements and do a sort of special episode. We've got a few things lined up for us to talk about. I think it's going to be a lot of fun. So for today, we are very excited to be joined by a special guest, Michael Peterson. Hey, Michael, thanks for coming on the show.

Speaker 2:

Hey everyone.

Speaker 3:

Folks who came to our live event in Toronto back in October will recognize Michael as our illustrious MC. I guess normally this is where I'd do like a substantive introduction laying out some of your work and why it makes sense to have you on to talk about the topic of the day. But that doesn't really make sense given the grab bag of silly stuff that we're going to cover. So I don't know. Could you maybe say a little bit about what you're working on these days?

Speaker 2:

Sure. I think my most important qualification for being here is that I went to grad school with both Owen and Gil, so it's a real pleasure to be here, just to bring the gang back together. These days I'm teaching at a university down in Missouri. I'm working on things like environmental philosophy, specializations in things like Derrida and Gramsci, and I have a book coming out, I guess in 2024, about nuclear waste and Jacques Derrida. So you know, find that on whatever academic shelf is nearest to you.

Speaker 3:

Awesome. Well, yeah, thanks so much for joining us. I think it's going to be a lot of fun. I guess it's around New Year's. Did you guys do any like resolutions or anything like that? We talked about this last time.

Speaker 1:

Yeah, mine was to be more tired, to like embrace tiredness, accept it, lean in. Yeah, lean in.

Speaker 2:

I was going to try to do a pescatarian January and then realized how much travel I had to do this January, and I was going back home to Alberta, where the beef is.

Speaker 1:

So it's criminal to go to Alberta and be a pescatarian.

Speaker 2:

And not eat beef, yeah. So instead I'm making gumbo today.

Speaker 3:

It's the opposite.

Speaker 2:

So I'll put the resolution off to February or something like that.

Speaker 3:

It's actually sort of fun to have you on for like a kind of special episode around this time of year, because the last two holiday specials were Like almost made specifically for you.

Speaker 2:

That's right. Yeah, the first one was Derrida on the gift and the second one was Gramsci on New Year's, so it makes sense that the third one I'm actually here for. Yeah, yeah, yeah.

Speaker 3:

We've been building up to this. That's right. All right, so let's get into it. So first up on the docket is the following burning question: utilitarianism, bad moral philosophy or worst moral philosophy? So I guess let's start out with a quick temp check: how do you guys feel about utilitarianism in general? I'm not going to go worst.

Speaker 1:

I think bad is fair.

Speaker 2:

Yeah, I'm with Owen on this. I think bad, not worst, although maybe versions of utilitarianism can compete for worst. But in general I think it almost redeems itself heuristically, in that it's, I think, one of the best places to start if you're ever teaching ethics. Well, yeah, and just for functional reasons. It redeems itself a little bit there, that's true.

Speaker 1:

Just on the value of making teaching easier, it deserves a little more credit. The worst is obviously like by Traga and later Heidegger. Like, my friend, that is far and away the worst moral philosophy.

Speaker 2:

Yeah.

Speaker 3:

Yeah.

Speaker 1:

All right, it's just too cool, the worst.

Speaker 3:

Yeah, I mean, I will give it that: at least it seems like an ethical theory. I've never understood why virtue ethics even counts. I don't know what anyone's talking about there.

Speaker 2:

Just look for people who are really good at the thing you want to be good at and they'll tell you what, why they're the best for that. Yeah, it's great, yeah.

Speaker 1:

Virtue ethics is the best for kind of self-help, though you know everybody needs a little. Everybody needs a little little self-help guidance. Yeah.

Speaker 3:

Aristotle will help you out.

Speaker 2:

Yeah, yeah, and be mid is a great ethical principle.

Speaker 1:

Mid Exactly.

Speaker 3:

We've talked a little bit about utilitarianism in our episode on effective altruism, which covered some of the basic conceptual terrain, but it's basically like ethics math, right: we use our good, reasonable powers of reason to rationalize our way into knowing what's best to do by considering the outcomes of actions. We weigh up the good outcomes against the bad ones, and this gives us an objective, neutral and very rational way to figure out what the right thing to do is, since the right thing to do just means that which produces better outcomes. It's a really good non-circular definition, for sure.

Speaker 1:

You've been mad about the ethics math lately. I've seen posts about it. I hate ethics math, yeah. What is it about the math, like the math part of it, that pisses you off the most? I mean, I get it, but I'm curious, like, what?

Speaker 3:

Well, it just doesn't. It just doesn't work. Like, there's no standard, right? Like, one of the classic problems with utilitarianism is, like, what's our basic unit? Because we do have to do math at the end of the day. Like, if you read Bentham, like if you read the Introduction to the Principles of Morals and Legislation, he's like, here's the step-by-step guide: you add up the number of pleasures and add up the number of pains. And I'm like, I don't know how to qualify this or quantify it, right? There's no, there's no unit. So at that point, the math... It's a utile, it's a utile.

Speaker 2:

What's wrong with utiles? Yeah, it's no more arbitrary than minutes or seconds or pounds or degrees Celsius. Yeah, exactly so if you want to take on ethics math, I think you have to take on all measurable math.

Speaker 3:

Maybe I do. Maybe I do want to take on all measurements. No more measuring things. That's my resolution.

Speaker 1:

Well, no, no, bad infinity.

Speaker 3:

There we go.

Speaker 1:

What? Bad infinity measurement, that's the problem: it's all aggregate, like what Hegel calls a heap of effects. You know, and you can't... That is the most, like, basic-brain possible way of looking at effects.

Speaker 3:

Yeah Well, sometimes it gets cashed out in terms of pleasure and pain right, we want to produce the most pleasure and diminish pain as much as possible. And sometimes it gets cashed out in terms of the satisfaction of desires right, we want to satisfy as many as possible.

Speaker 1:

I just like to look at. Let's go back to the virtue ethics thing. Let's look at some of the models, the real practitioners and the people that have spent their lives laying out the theoretical foundations of utilitarianism.

Speaker 3:

Oh, we're going to check in on them in just a minute. We're going to get there in just a second.

Speaker 1:

Okay, okay.

Speaker 3:

This is because you're right. That's the place to go. But I guess to give it its flowers: the basic utilitarianism laid out by Bentham in the 1700s was at least kind of radical for its egalitarianism. It argued, in the context of a stratified aristocracy, that the desires, pleasures and pains of the filthy masses were just as important for the ethics math as those of the upper crust. And on the basis of that kind of secular approach to thinking about moral problems, Bentham himself adopted some rather forward-thinking positions for the time. He advocated for universal suffrage, the abolition of slavery and capital punishment, decriminalizing homosexuality. He even had some ideas about extending rights to animals, which might not count as a positive for Owen. I'm not sure that might not work for you. Yeah, don't get me started on them. Don't get me started. Today, though, and to your question, Owen, the most prominent utilitarian moral philosopher is one Peter Singer, who, uh, yeah, I'm just going to call him: he's a real piece of shit. This is a bad person.

Speaker 2:

But he did get very old, so if we're just judging consequences, and elderliness is good.

Speaker 3:

I mean, there's no way I'm going to live that long, so he must be doing something right.

Speaker 1:

There we go, that's consequentialism. He's done a lot to prop up the kind of charitable-industrial complex too, which is important.

Speaker 3:

Yeah, that's really good work. He's probably most infamous for being pretty open about his support for eugenics, and in his 1985 book — I can't believe this — Should the Baby Live?, he openly advocates killing babies with disabilities on the basis that they have, quote, diminished prospects of enjoying an adequate quality of life. Quality of life, of course, being a term that never really gets defined all too clearly. What counts as a disability severe enough for Singer to think that parents should kill their own children? Again, no clear definitions are given, but some of the specific conditions he names in his books include things like hemophilia, spina bifida and Down syndrome. The man is a monster, right. This is horrendous, evil shit. And don't think that these are like antiquated ideas that I'm cherry-picking from four decades ago.

Speaker 3:

I've got here an article published by the National Council on Disability in 2015 that was responding to some comments Singer made on a radio program. Quote: in the interview, which was, perhaps ironically, conducted as part of a press tour Singer is currently on promoting his new book about charities, The Most Good You Can Do: How Effective Altruism Is Changing Ideas About Living Ethically, the author advocated the shocking claim that health care laws like the Affordable Care Act should be more overt about rationing, and that we should acknowledge the necessity of, quote, intentionally ending the lives of severely disabled infants. Singer, who is known for his provocative and often contradictory views on animal liberation and infanticide, also repeatedly referred to disabled infants as 'it' during the interview.

Speaker 3:

Without offering any scientific evidence to support what amounts to a return to eugenics, Singer routinely contends, quote, the right to life is related to a being's capacity for intelligence and having preferences, which, in turn, is directly related to an undefined capacity to feel and/or comprehend ethereal concepts like pain and pleasure. Without offering any concrete measure of how quality of life could or should be determined, Singer admitted, quote, I don't want my health insurance premiums to be higher so that infants who can experience zero quality of life can have more expensive treatments. End quote. I don't know, should we just abolish moral philosophy, right?

Speaker 2:

Yeah, it kind of makes me think — whenever I think about Singer's kind of version of utilitarianism I think of, you know, also not the best dude in the world, but a really great anti-utilitarian argument from Derek Parfit, namely his so-called repugnant — oh yeah, the repugnant conclusion, I don't know.

Speaker 2:

Yeah, so this argument goes something like: if the goal is just to maximize lives worth living, or happiness or pleasure, in whatever utilitarian calculus you want to use, then the real way to maximize this, to maximize the gains, so to speak, would be to find out what that minimal quality of life is, make sure as many people as possible are right on the positive side of that minimum, and then make sure as many people as possible exist at that level. So we'd want the maximum amount of barely worthwhile lives being lived as possible, because on aggregate that's the maximum amount of good or pleasure or whatever. And Parfit, of course — you know, he calls this the repugnant conclusion. He thinks this is a good argument against utilitarianism. But I think Singer might sort of be into this. I suspect he might bite the bullet of the repugnant conclusion.
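Just to make the aggregation arithmetic concrete, here is a minimal illustrative sketch of naive total-utility scoring; the population sizes and welfare numbers are invented for the example, not taken from Parfit or from anything said here. Summing welfare means a vast population of barely worthwhile lives can outscore a smaller, far better-off one:

# Naive total-utility aggregation (illustrative numbers only, not from Parfit).
# Population A: small and well off. Population Z: enormous, each life barely worth living.
pop_a = {"size": 10_000_000, "welfare_per_person": 100.0}
pop_z = {"size": 500_000_000_000, "welfare_per_person": 0.01}

def total_utility(pop):
    # Total value of the outcome = number of people * welfare per person.
    return pop["size"] * pop["welfare_per_person"]

print(total_utility(pop_a))  # 1,000,000,000
print(total_utility(pop_z))  # 5,000,000,000 -- the huge, barely-positive population "wins"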

Speaker 3:

I always kind of thought Parfit bites that bullet too. Like he calls it the repugnant conclusion, but like he thinks that it is the correct conclusion, he's like I don't see any problem with the argument, I think, and I think he just follows that.

Speaker 2:

I read Parfit here as presenting this as a critique of the utilitarian moral framework, of a need for a framework that wouldn't lead us to that conclusion, because I think he does think that is the necessary endpoint of utilitarian logic. The extent to which Parfit, you know, continues to flirt with utilitarianism as a moral framework maybe lends some credence to your point here, Gil. But I read that repugnant conclusion as a really nice argument against utilitarianism.

Speaker 3:

Just maximize minimally worthwhile lives? Seems like maybe not the best moral theory. It's the worst version of min-maxing, right? Or the best.

Speaker 2:

I guess the best depends what you mean by worse there.

Speaker 1:

I just want to point out that it cannot be a coincidence — and it is true they're both utilitarians — but William MacAskill, right, the effective altruism guy: what is it with these people for whom animals are like a very, like, privileged ethical object? I don't want to say subject, because they're never subjects, they're ethical objects. It can't be coincidental that they also have the most deranged, like, ahistorical, illogical ideas about ethics in general. And it's not, it's not coincidental. I just... really, come on. Someone like Simone de Beauvoir just has like the fastest, most compelling refutation of all this crap, which is that, like, the most core human value is very difficult to measure because it's transcendence, it's freedom, it's projection toward an open future that you build collectively with other people. How the fuck would you... there's no way, like, you can't make that numerical.

Speaker 2:

Again, it's just it's the dumb.

Speaker 1:

That's the bad part about the math brain: it's just people who think their own brains are so big that they can just sit down and start calculating — this will affect this amount of people and this will affect that amount of people — and then think that you can do that and arrive at anything that is remotely, like, philosophically sophisticated. What I don't get is, like, Singer is obviously an evil moron, but man, what the hell does it say about philosophy that this guy gets so much traction? And he's still doing it.

Speaker 1:

He's still doing his. What's the journal? Controversial ideas.

Speaker 2:

Oh.

Speaker 1:

God.

Speaker 3:

What was his?

Speaker 1:

last. What was his last controversial thesis?

Speaker 3:

That's a great question, was it?

Speaker 1:

that like pedophilia was cool or something, or.

Speaker 3:

Yeah, wait, what was it?

Speaker 1:

Oh, never mind it was that sex with animals was cool? How about the pedophilia?

Speaker 3:

stuff. I don't want to.

Speaker 1:

I don't want to get arrested.

Speaker 3:

But we repugnantly conclude that maybe if you fuck a horse it's fine. Bestiality is just... we should be open to that. Really, what's the problem there?

Speaker 1:

I mean, how can you? I mean, how can you? How spiritually rotted out, like, do you have to be to spend your time thinking about shit like this? One day, back in the 70s or 80s or whatever it was, he was like: you know how, when you walk by a pond and you see a little kid drowning, even if you're going to get muddy, you should go in and save them? Well, you should be doing that all the time with people that are further away from you. Just because you can't see them doesn't mean you shouldn't save them. And everyone's like, oh my God, wow, this is the most incredible.

Speaker 3:

This is what that's?

Speaker 1:

wow, that's really, it's really insightful.

Speaker 3:

It really does serve as kind of a refutation of, like, moral philosophy as an academic discipline as a whole, that this is one of its most prominent and famous practitioners. It's so... yeah, I don't know, it's bad.

Speaker 1:

And what's the other? What's the other guy? Steven Pinker, oh, pinker, yeah, oh yeah, I see that guy around Cambridge and Somerville sometimes. Oh do you really? Yeah, oh really, it's the. Uh, that's cool.

Speaker 3:

Repugnant. He's probably going to have a good couple — I mean, we're about to get the Epstein list released. I bet he's going to have a nice time on the press junket.

Speaker 1:

Just more confirmation that where I live — I mean, I don't live in Cambridge anymore, I'm in Somerville — but more confirmation that Cambridge is the most cursed place on earth. Okay, if you look at the actual proportion of people that were on the Epstein flight logs that are, like, academics, and where those academics are located: Alan Dershowitz, Pinker, all the others.

Speaker 3:

Well, we're about to see that, you know, utilitarianism can be used to justify such behaviors, because Singer popped up in my Twitter feed a few days ago, and that's why his horrible name is in our mouths right now. It turns out that for decades, old Pete has — excuse me — allegedly had a quote 'harem.' So I've got a few paragraphs from a court document here that we're going to go through real quick. So: between the end of 2002 and the beginning of 2003, Don distanced herself from Singer. In May 2003, Singer asked Don to work with him on a piece he'd been asked to write for the Los Angeles Times, for which she would receive co-writing credit.

Speaker 3:

From 2002 through 2020, all of Singer's female co-authors were women with whom Singer had been sexually involved or to whom he had made clear his sexual interest. Despite a pattern of professional reward for sexual affection, Singer wrote to Don that he believed he could only be accused of anything if an angry ex, quote, made something up, or, quote, had a false memory. Don came to understand that she too would be rewarded for maintaining an affectionate relationship with Singer with offers of prestigious work, and would lose those offers without such expressions of warmth. Just to pause here real quick: this is part of the problem, right? Like, can you show me on utilitarian grounds what the problem here is? And I think Singer can just say, like, no, this is good, right? Like, think of all the utils, the positive utils, that get generated from their careers...

Speaker 1:

Their careers are being advanced.

Speaker 2:

Yeah, everybody gets their beak wet, you know. Yeah, it's kind of wild, like, the kind of self-admission here that spending time being abused or groomed or allegedly taken advantage of by Peter Singer is bad, but not as bad as a co-authorship is good. Right, the co-authorship there is really, is really doing a lot of heavy lifting.

Speaker 3:

Going on: Don and Singer became sexually involved again when working on the Los Angeles Times op-ed together, with Don agreeing to be part of Singer's quote 'harem' as long as, quote, she was his favorite, 'the lead in his orchestra,' as he called it, end quote. This is also disgusting. Don wondered if she should be trying to have a child with her partner and was reminded by Singer that if she did, it would negatively affect her figure and would interfere with their affair. In 2003, Singer told Don that while he still wished to be sexually involved with her, she had been replaced as the main recipient of his affections by a woman he had met at a conference in Europe who was 10 years younger than Don and who was married. Singer acknowledged the quote 'high risk' that the affair would destroy the woman's marriage. So again, he's just like: yeah, no, it's bad, of course, to break up a marriage, but again, think of all the positive utils of me having sex with a young woman and then giving her a co-author credit.

Speaker 2:

You must have done the math right.

Speaker 3:

You must have done the ethics math.

Speaker 2:

Really carefully, surely?

Speaker 1:

Would it absolutely kill a utilitarian just to have, like, a couple of principles? I know, I know principles are not, that's not their zone, but a few, like: you don't violate, like, the autonomy of someone else's will. No, that's a decent one. Like you don't... That sounds like ends in themselves. Maybe you ask whether you could will the principle of your actions to be universalizable.

Speaker 2:

No, no, stop that. Well, it's kind of funny, right, like there's a version of utilitarianism that tries to save itself by being a principle of utilitarianism, namely that there's a benefit that outweighs a harm when you adopt a principle like that, right, and so you adopt the principle right up, right right Are those like rule utilitarians or some shit.

Speaker 2:

Yeah, yeah, yeah, exactly. And even the justification for that on utilitarian grounds is basically one of efficiency: you shouldn't have to make a decision every single time you need to make a utilitarian calculation, so it's better to make the decision once, adopt a principle, and follow it, knowing that following the principle tends to maximize. But you know this bottoms out as not being a principle, because you can make an exception to that principle every single time on the basis of the pluses and minuses.

Speaker 2:

So, yeah, there's something kind of — maybe, Owen, maybe this language will interest you — there's something kind of sovereign about the utilitarian making the judgment, where they retain for themselves the right to abandon or take up any principle or any consequence for any reason at any time, based on their own ability to judge whether the benefits here are outweighing the harms, in whatever way they want to define those terms on utilitarian grounds in any given context. Right, it's the ultimate, you know, state of exception for any utilitarian making a judgment. It's really, I guess, empowering: a Peter Singer gets to decide at any one time what counts as being more beneficial for whom.

Speaker 1:

Yeah, that's, I think, the sovereign thing. That is right. They're just going to have like a bit of like board game brain where you're just like sitting on top of the globe moving pieces around, making sovereign decisions about. You know what should be allocated, where and to whom.

Speaker 2:

When he was describing the lead-in-the-orchestra thing, I was also thinking about that. This whole language they were using sounded so fantasy football. Right, like: I'm going to move you up to the first string; I'm actually going to trade you away for this season while you're starting a family, but if you hit the gym afterwards you can come back to the front bench again. Which is also, like, you know, the most math-brain pastime you could have.

Speaker 3:

And again, just like a ridiculous way to think that you're actually adjudicating complex moral scenarios. Like just ridiculous to think that you can do this sort of reduction. And again, the fact that it turns out to be so like wildly self-serving, surprises exactly nobody who's paying attention. So anyway, I guess that's it. There you go, Peter Singer, Final thoughts. But like you know, my heart goes out to the poor women he's been abusing. He's a fucking monster and an indictment of academic philosophy as a whole.

Speaker 2:

Well, with Kissinger dead, we need someone to be the 'I can't believe they're still alive' person. So maybe, maybe 2024 is the year where we can think about putting Singer on that —

Speaker 3:

I'm all good with that.

Speaker 2:

That vacant throne. All right. Well, let's lighten it up a little bit.

Speaker 3:

I thought I might peel back the curtain somewhat and share some of what goes on behind the scenes when you run a wildly successful, incredibly famous and highly regarded podcast like ours, because one of the things that happens behind the scenes is that your inbox is constantly getting flooded with unsolicited requests from people to come be guests on the show, and some of these people are complete weirdos.

Speaker 2:

But sometimes that works. Sometimes it works Like Michael. Sometimes you get a Michael and then you get me on the podcast.

Speaker 3:

He's been emailing. Michael's been emailing me once a week. Please have you come on the show.

Speaker 2:

Yeah, yeah, for three years.

Speaker 3:

I've been trying to have you on forever and you've been saying no. Okay, so, disclaimer time. I'm going to be changing the names of these people because I don't want anyone to, like, get outed here. I mean, I think this is basically fair game, since they're all more or less public figures to some degree — they've got websites and podcasts and books, and they explicitly hit us up to try to get even more publicity — but this is certainly not the sort of press that I think they wanted to get. That said, I'm sure there's enough info that we're going to be sharing here for you to go find them, and I want to be absolutely clear about this: leave these people alone. Do not go looking for them. This is a goof and a gaff, having a little lighthearted fun, and if you bother anyone in any way, I will personally come throw a cup of water in your face and break your PlayStation with a hammer. That said, this first email is about a guy I'm going to call Dr. James White. So here's the email we get.

Speaker 3:

Dr. White recently had a research residency where he discovered a cipher hidden deep in Nietzsche's philosophy, and he's now currently working on a code book to unravel all of Nietzsche's work. Nietzsche's work has been subject to a great deal of interpretation and debate, with scholars and readers often disagreeing on the meaning of his ideas, which has led some to speculate that there may be hidden meanings or messages in his work. I really love the 'some.' It's like real Ancient Aliens hedging: some, some speculate that. So here we go. Dr. White found the cipher by looking at Nietzsche's work through the lens of moral injury and crimes against humanity, although Nietzsche did not use the term moral injury. Which is a weird place to go for that. Very, like, maybe the last person.

Speaker 3:

Yeah, like, we've got to become bridges to the overman, right, and, as he says, with all things that are going under, the best thing to do is give them a push. It's like, I don't know if this is the guy you want to go get your ethics of crimes against humanity from. Although Nietzsche did not use the term moral injury in his work, his ideas about morality, guilt and self-transformation are relevant to understanding the psychological and spiritual harm that can come from exposure to experiences that violate one's deeply held moral beliefs and values. On your podcast, he would love to talk about how the cipher is linked to everything ever written by Nietzsche, how it takes away all the subjectivity of his writings, how to apply the cipher to his work, and how the cipher comes from Saint Augustine's number chain one to 17.

Speaker 3:

According to Saint Augustine, one of Nietzsche's influences, the universe is structured hierarchically, with each level of existence corresponding to a particular number in this sequence. So, okay, we've got like a 17-level, structured, numerically ordered hierarchy of being. I guess this is also why, of course, previous academics have misunderstood Nietzsche, and he'd love, on the podcast, to come talk about the definitive meaning and utility of Nietzsche's masterwork. So there's a lot of that stuff there being used as the framework to define moral injury and provide commentary on the current Western psyche. So yeah, what do you guys think? This sounds pretty good, right?

Speaker 2:

I love, love the idea of reading Nietzsche and thinking this guy probably hid some stuff in there that he didn't want people to find easily. This is a person with chapters like why I write so well, why I am the greatest, how I am the anti-Christ, and you're like do you know what? I bet there's a hidden meaning in there that he didn't want people to know. He was a subtle, subtle, subtle man and much of his creepy thoughts were hidden.

Speaker 2:

A cipher like a decoder ring that tells you what Nietzsche really thought. I think that what Nietzsche really thought is that he was awesome, that Nietzsche thought Nietzsche ruled that Nietzsche was pretty tight. Yeah, and he wanted everyone to know.

Speaker 1:

Did they give us any indication of what the cipher is, or do we have to read the book?

Speaker 3:

I think we're going to need to read the book and or have him on to let us know. The email didn't include the decoder ring, so I'm not Right. Yeah.

Speaker 2:

Or the glasses.

Speaker 3:

Yeah, the OBEY glasses from They Live. So I still don't know how to read Nietzsche. I don't know what Nietzsche's whole deal is. I've been reading him wrong.

Speaker 1:

I think there should be more of these Da Vinci-brained elements of philosophy included, where you can go on little scavenger hunts. And I don't know, you know, you're reading Sartre, you go on a little scavenger hunt in, like, Saint-Germain-des-Prés in Paris, and just go around his neighborhood and you look for all these different keys to what he really meant. Yeah, absolutely, it's good, yeah.

Speaker 2:

Right, yeah. If you circle the first letter of every paragraph in Eichmann in Jerusalem, you're going to end up finding, you know, the map to where Arendt hid the book that really tells you what he was. Absolutely.

Speaker 3:

Yeah, they're doing acrostics. I, unironically, would love this. No, it's like the coolest version of, like, Straussian esoteric reading, right, except, like, it literally is — it literally is just like he said, Da Vinci Code, like a cipher. What the hell would you hide? I don't know. It's awesome.

Speaker 2:

So I guess cyphers might exist in, you know, say, a medieval text or whatever, if someone was worried about a heresy that they wanted to be able to communicate because they thought it was true and didn't want it to be observed. So I guess my question for this person would be why they think Nietzsche was hiding messages instead of saying it overtly. What is the authority Nietzsche was trying to dodge?

Speaker 3:

It's also very funny again that this would be Nietzsche who would feel the need to hide his intent, because the guy didn't seem like he was particularly worried about making enemies. He just like ran around insulting everybody all the time.

Speaker 2:

Right. Maybe it's because it's so against his vibe. It goes against his image, Right, and in his text he's all full of himself and big into making enemies. But his secret message is actually resentment is kind of good and it's hate. Love one another.

Speaker 3:

Christianity's all right with me yeah. Yeah.

Speaker 2:

So then, when he dies and he gets to heaven, which he clearly believes in in secret, he can say no, I had all of the truth here in the book, but for marketing reasons.

Speaker 1:

I couldn't really tell anyone.

Speaker 2:

Yeah, yeah.

Speaker 1:

So there's a whole branch of, I don't know, kind of hermeneutical approach in scholarship that does this kind of thing, like: is this person a crypto-something, right? You know, I see it like, Foucault's a crypto-liberal, or crypto-whatever. People do it with Marx too. So it's not really that... I guess, like, maybe this is a little bit quack of a version, but it's not that far off.

Speaker 3:

He's one of the guys, as Paul Ricoeur told us, who taught us the hermeneutics of suspicion. And why wouldn't we also apply the hermeneutics of suspicion to Nietzsche's own texts? You know, when you really think about it, when you really think: the greatest trick Nietzsche ever pulled was convincing the world he didn't exist.

Speaker 2:

Convincing the world that he was a bro.

Speaker 3:

So a lot of these sorts of emails that we get are, as you may have noticed in that email that I read, not addressed to us directly by the people who are prospective guests, but by, like, podcast networking services that are reaching out on their behalf.

Speaker 1:

Yeah, that's why I feel confident nobody, like, related to any of this is ever going to see it, is ever going to hear any of this, because, nobody... It's so obvious they've never listened to an episode.

Speaker 3:

Well, yeah, no doubt. But I wanted to know more. So I tracked down this person's actual website to see what their own writing looked like, because this was a sort of manicured, polished presentation of the idea. So are you ready for what it actually sounds like? This is from the website's mission statement. It says, quote our mission is the unifying definition of the moral injury construct to address suicide in wider civilian populations.

Speaker 3:

The provision of a universal lexicon can articulate moral grammar for the epoch-defining emergence and psychological articulation of moral injury. Moral injury is a disruption to the psyche from the will to truth that has become conscious of itself as a problem in us. Deeply embedded substructure psychologies that, for the West, emerge over 2000 years as a synchronous psychological concomitant of the Christ figure. For, to be sure, to be sure, Nietzsche would have agreed with Jung when he says, quote, man can suffer only a certain amount of culture without injury. So this project will articulate a moral grammar as described by Friedrich Nietzsche.

Speaker 3:

A lexicon obtained from language concordance, the AEIOU Bible method and Vedanta philosophy, away suffers that conform a relationship with the four main questions concerning strength in the inner being. For the Christian psyche: what is the breadth and length and height and depth. While soldiers provide the coalface psychology for moral injury today, just as PTSD before, such an assignation will not be restricted to such cohorts. Moral injury represents the great difficulty looming for the contemporary Western consciousness, where not only the rationality of millennia but also its madness breaks out in us, where it is dangerous to be an heir. All right, so that cleared it up. Do you have any further questions?

Speaker 1:

I understood a lot of that. But is this supposed to be, like, some anti-Christian thing too? Do you, or Nietzsche, provide some resources for something?

Speaker 3:

Very unclear what the relationship to Christianity is. There's something about the Christ figure in there, the Christian psyche? I'm not totally sure.

Speaker 2:

It's very apocalyptic. Moral injury isn't only a problem, it's actually a threat that risks overcoming all of us. It's coming. It's the asteroid labeled moral injury hurtling towards us.

Speaker 3:

We'd better work out this moral grammar using the AEIOU Bible concordance as quickly as possible.

Speaker 2:

We're just listening to 'I Don't Want to Miss a Thing' by Aerosmith and frantically translating the cipher, trying to avoid the moral injury end of the world.

Speaker 3:

Yeah, I wonder if you could take the Nietzsche cipher and apply it to non-Nietzschean texts. Right, could you take the Nietzsche cipher and read the Gospel of John with it and see if that brings some Nietzschean delight.

Speaker 1:

I think there's no way that this cipher-based approach to philosophy texts can possibly be any less productive than the Peter Singer shit we were just talking about. At least this one is fun. It's definitely more fun.

Speaker 3:

Yeah, it's definitely more fun.

Speaker 2:

A bit of a mental adventure. I guess I'm still kind of confused about some of this stuff. But maybe my biggest confusion is, I'm not sure, is the language of cipher just the language of? I'm trying to find hidden tendencies, or, like Owen was describing reading Marx or Foucault trying to find out what their actual commitments are? Or are we talking about a Da Vinci code cipher with letters and codes and connections and the decoder ring? What is the cipher doing here, almost literally? I want to know, literally.

Speaker 1:

Yeah, is there a conspiracy in the background here? Are we doing anti-Semitism?

Speaker 2:

I feel like it's probably in.

Speaker 3:

German, then that's a good question, or is he coding in English? I feel like you would have to be reading it in German, right?

Speaker 2:

If the clave is... Because otherwise, Walter Kaufmann —

Speaker 3:

would have done something really fucked up in the 60s, where he, like — did he scramble the cipher too? I wonder.

Speaker 2:

Right, maybe that's the reason: the Kaufmannian heresy of the 60s has really changed the gospel of Nietzsche for us. 100%.

Speaker 3:

All right, well, that's all pretty good. That's one guy down and, like you said, at least it's fun and it gets more fun.

Speaker 1:

That's good. I never actually read the whole of those emails, so it's nice to really dig in a little bit and see what you should read.

Speaker 3:

Yeah, what am I?

Speaker 1:

missing.

Speaker 3:

It's very, very educational. What if I told you, though, that there's secret legal jiu-jitsu that you could be doing to avoid all of the duties of being a US citizen? What if I told you that you have been lied to about every contract you've ever signed? Well, good news: a guy I'm going to be calling Steve Phillips is promising to show us the way out. So this is the email that we got. First sentence — no hello, we just jump right into it: At first glance, legally disconnecting yourself from United States citizenship and declaring state national status might seemingly be the actions of a so-called stupid fuck like Steve Phillips. Parentheses: his words, not mine. The irony is he's probably the smartest person you'll talk to today.

Speaker 1:

He's quoting someone quoting, like — wait, hold on, Steve Phillips is the guy we're talking about, right?

Speaker 2:

Yeah, okay, start from the beginning.

Speaker 3:

So okay, once again we're getting an email on behalf of Steve Phillips by some kind of agent.

Speaker 1:

Yeah, got it Trying to book us.

Speaker 3:

So here's the sentence again. At first glance, legally disconnecting from your United States citizenship and declaring state national status might seemingly be the actions of a so-called stupid fuck like Steve Phillips. Parentheses: his words, not mine. The irony is he's probably the smartest person you'll talk to today. If there's one thing that Steve has realized after learning from his failures, it's to say to hell with authorities, take your power back and legally kill every sleazy government-issued contract you've ever signed. You think I'm joking? No, this isn't a pyramid scheme or Nigerian bride spam mail in your inbox. Now, by the way, anytime an email includes the disclaimer — hey, by the way, this isn't a pyramid scheme, this isn't a Nigerian bride scam — that really settles the conscience, right? I'm feeling confident now.

Speaker 2:

Right, you thought it was, but now you know Now we know better. It's not.

Speaker 3:

Going on: Steve has found a way to legally not pay taxes, legally avoid fines, teach others what they are legally contracted to, and escape government oversight entirely. In your profession, I know that you hate the government as much as Steve does, and now you can do something about it. Hi Lillian, my name is Rebecca and I'm reaching out to see if What's Left of Philosophy would consider jumpstarting an overlooked and daring conversation on what gets lost when we unknowingly enter into predatory contracts with the government and other authoritative bodies. Steve has studied the boring terms and conditions of every contract ever signed because, let's face it, you didn't read them.

Speaker 2:

Every contract ever signed.

Speaker 3:

Yes, he read the terms and conditions.

Speaker 2:

Oh no.

Speaker 3:

Why aren't contracts ever written in plain English? So now we're getting into, like — if you're familiar with this sort of fringe conspiracy part of the world, this is like sovereign citizen stuff, right? Like, you are, in fact, like, a sovereign citizen, but you've been tricked into accepting bullshit like maritime law and, like, fake contracts, but you can invalidate those and just declare yourself a free person. Why aren't contracts ever written in plain English? By design; you always have to hire someone to interpret the legalese gibberish and translate it for you. In his free contract killer course — so we're selling a course too —

Speaker 3:

Steve takes you by the hand and guides you through the labyrinth of deliberate government obfuscation meant to distract you from the truth. It's time to stand up against the ruthless intimidation of the government, to become the David against Goliath. The unjustly accused and wrongly convicted now have a chance to break free of their chains from the system created by and for lawyers to get in the know of the secrets in the contract you just signed. I also love imagining like am I just like running around signing contracts like constantly and this guy's?

Speaker 2:

like no, stop it. You're giving him so much more to read. Every time you sign a contract, he has to get a copy. He has to read the terms and conditions.

Speaker 1:

Does that mean he's reading all the terms and conditions of all my software and shit?

Speaker 3:

All your software and user license agreements.

Speaker 1:

Should I just click agree, agree, agree.

Speaker 2:

Every contract ever signed, because again.

Speaker 3:

you didn't read it. You didn't read it, so Steve has to.

Speaker 2:

You could print something out right now, sign it, and he would have to read it. Does he know that he's in the social contract?

Speaker 1:

He's already tacitly agreed to it. Like, Hobbes settled this a long time ago.

Speaker 2:

No, no, no. Have you read the terms and conditions of the social contract, Owen.

Speaker 1:

So have I read the terms? No, I haven't. I guess I haven't read the terms and conditions of the social contract. Problem is, it's always changing.

Speaker 3:

There was a change in that.

Speaker 2:

Well, they're not allowed to change the terms and conditions after you sign it.

Speaker 1:

It used to be, like, you know, you grant basically obedience to the state and to the capitalist system because you get, like, a house and health care and all these various different things. But now, like, what — you know, again, the contract changes. What are the terms now? You get a pile of debt, no health care, life is shit. So maybe I could use a little bit of this. Is someone going to — can someone remind me what we're getting? What —

Speaker 3:

are the terms and conditions? Well, if we got Steve onto the podcast, the email promises that he could discuss the following topics with us: a US citizen versus a United States of America citizen, and how it affects your basic rights. But yeah, again — these people also believe in, like, an absolutely magical power of words, a completely peculiar binding, like cosmologically binding power: if you declare yourself a US citizen, you're fucked, but if you say you're a United States of America citizen, then you're free and you don't have to abide by contracts anymore. Right.

Speaker 1:

I mean, that's kind of what Ricky does in Trailer Park Boys, right? He's just assumed the park as his symbolic and political universe and just totally withdrawn from the Canadian state. And yeah.

Speaker 2:

I mean I guess they've all done that Right, they do the same thing.

Speaker 3:

Yeah, they're trying to make citizen's arrests. Maybe Steve got to them. Because he also could teach us how to legally disconnect from the United States of America — parentheses, aka the Matrix. He could tell us how to kill your contracts and get your time and money back. And finally: is a state national real, and how does it affect their tax burden? Which is a question I would really like to get to the bottom of.

Speaker 2:

So I want to. I want to. I'm not an American citizen, I'm a Canadian.

Speaker 3:

Another money Disgusting Canadian.

Speaker 2:

And — yeah, so Owen and I are both Canadian here, not American. So my question is: is this transferable? Because if the United States of America is the Matrix, does that mean that —

Speaker 3:

by virtue of being.

Speaker 2:

Canadian, Owen and I were actually born outside of the Matrix? In which case, Gil, I kind of feel bad for you.

Speaker 3:

First of all, you should always feel kind of bad for me as a US citizen, sort of generally speaking, as a sad, as a sad broke boy in the dying imperial hegemon. Like, it's not fun here, as you...

Speaker 1:

This sounds like kind of hot and sexy. You know this whole withdraw, disconnecting from the state you're a part of and all of its obligations and whatever. But isn't that just kind of what, like, rich people have always done? Like you know, they don't pay taxes, they don't have to go fight wars, they, you know it's. I feel like this is already a code that's been cracked.

Speaker 3:

Yeah, but Steve's bringing it to the masses. He's trying to bring it to the masses.

Speaker 1:

Oh, I see. So he's like democratizing, we're democratizing.

Speaker 3:

We're democratizing, we're not being. Rich people can hire the lawyers. Yes, rich people can hire the lawyers that.

Speaker 2:

let you read the terms and conditions, and he's going to do it for you. It sounds like he's selling it for a fee, so you do have to pay Steve Phillips to do it.

Speaker 3:

It is a little bit of an investment. But importantly, he's not a lawyer.

Speaker 1:

Not a lawyer. I really appreciate it. I appreciate it. He's not a lawyer which?

Speaker 2:

here is a good thing. It means you're not paying a lawyer.

Speaker 3:

We hate paying a lawyer. This is much better. We hate paying lawyers.

Speaker 2:

Yeah, I unironically think you, on what's left, should do a sovereign citizenship episode.

Speaker 3:

We probably should actually.

Speaker 2:

I really would love to hear a little bit more about all of this. Has it been laid out? Has that robust —

Speaker 1:

case been made? I mean, what would we use? Is there some literature you might recommend from the sovereign citizen tradition?

Speaker 3:

I will say that if anyone is interested in learning more about sovereign citizen stuff, there is an excellent video on YouTube by a woman named münecat. I'll link it in the description. It's like a very comprehensive analysis of the ideology and all of the completely wild beliefs that these mostly confused people have onboarded as part of a very, very strange worldview in which they're being constantly oppressed because they have to pay fines when they speed through red lights and stuff like this. You should have her on, maybe. I would love that.

Speaker 3:

All right. So those first two guys were just kind of fun cranks, right? But this next one's a bit different. So I'm not going to bother to change the names here, because while those last two guys, I think, are fairly harmless, if a little bit greasy, this one seems like it's actually, like, a cult, and it looks like they've got some money behind them. So the email subject line is: are modern day philosophers just scam artists? I can't tell you how fast I clicked on that, because I was like, yeah, one thousand. Oh man. All right. 'Hey, Lillian' — I love that they're all addressed to Lillian.

Speaker 1:

Why is Lillian always getting on there? It's great. It's the funniest thing that it is addressed to Lillian, because Lillian, I can just imagine with the most impassive, like stone faced, dismissal of this idiocy. Oh yeah.

Speaker 3:

Oh yeah, the person with the least tolerance for this kind of nonsense.

Speaker 1:

Yeah, anyways, go ahead.

Speaker 3:

Hey, Lillian. Amidst the chaos of this cultural and political moment, many quote philosophers and self-help gurus promise to give people what they need to cope and grow. Instead, they distribute endless streams of content, workshops and tools that leave people stuck in their shallow work, minds and bodies. Andrew Daniel, best-selling author and co-director of the Alan Watts Foundation, is the opposite of that. Okay, so just keep that in mind, right? We've said that there are, like, fake philosophers and self-help gurus who are, you know, just out to make a buck. They're not actually trying to help you. This guy's the opposite of that.

Speaker 2:

Yeah, he's like Alan Watts.

Speaker 3:

Yes, exactly. His philosophical wisdom, combined with his technical abilities as a Cinesomatics practitioner, gives people what they need to accelerate through the path of awakening and enlightenment so that they can feel free despite every oppressive energy or thought that could come their way. Could we book him as a guest on What's Left of Philosophy to discuss his take on how philosophy and shadow work are helping people grow through all the chaos surrounding us today? So obviously the answer is yes, I want this guy on right away. But I did want a little more information than that. So I went to the Cinesomatics website, because I didn't know — I don't know about you guys, I wasn't familiar with Cinesomatics personally — so here's how they describe it: Cinesomatics is a feeling-based transformation process utilizing video and movement to assess and resolve stuck somatic, psychological and emotional patterns. Okay. So I could tell, for instance, like, you're kind of frowning a little bit, Michael, and I can tell that you've got some, like, stuck somatic and psychological patterns. What if I told you there was a way to use video technology to unstick you, to unstick your —

Speaker 1:

patterns. This is what no psychoanalysis does to a motherfucker.

Speaker 2:

Yeah. Is it — is this just someone who watched A Clockwork Orange and thought, holy shit, this is a great idea? Oh yeah.

Speaker 3:

Well, the union of cinematic video, embodiment feedback and feeling-based awareness facilitates unprecedented accuracy, speed and results. Andrew Daniel's holistic technology offers a verifiable, grounded method for practical healing, transformation and advanced spiritual development. Now, by the way, the website is littered with, like, creepy screenshots of people looking directly into their webcams in what look like Zoom calls, and the website's always talking about their proprietary technologies for, again, healing and transformation. Here's the best parts of the website's FAQ, because I will admit I still had some questions. I mean, this all sounds great, but I still had a couple of questions.

Speaker 1:

So is this like is this the therapeutic? Just for clarification, like kind of focus on the therapeutic value of like zoom calls and shit and Skype.

Speaker 3:

It's sort of hard to — it's sort of hard to explain, but yes, you can use video feedback, like looking at yourself on video while you do therapy sessions, to get immediate knowledge about, like, your body. This is Cinesomatics, okay.

Speaker 1:

Yeah.

Speaker 3:

So, you know what makes this so different? Our combination of proprietary video and cinematic technology, centered around feeling-based movement upon a bedrock of advanced psychological, spiritual and philosophical embodied wisdom, makes Cinesomatics truly one of a kind. We are not analyzing body language, nor are we doing face reading, micro-expressions or readings or anything of the sort. It is highly accelerated, with no upper limit of transformation, while allowing each student to go at their own pace. Veterans and newbies teach and learn the same. What participants get from the work is what they put in. The depth and texture of material and content are vast, where participants progress by going deeper rather than going upward. It's the only place where you can see yourself on screen doing the behaviors — you literally get to see proof of all the feedback about you, in you, for yourself. So now I know you're wondering: is this evidence-based? Well, again, from the website: every piece of quote evidence can validate or invalidate a hypothesis or theory ad infinitum. So we're just — no evidence.

Speaker 2:

We're just done with evidence. Because it can be used any way — there's evidence for anything, Gil, you can just find evidence.

Speaker 3:

For any claim for any hypothesis For anything.

Speaker 2:

Any theory can be validated.

Speaker 3:

So fucking, what good is evidence? Yeah, get rid of it. And when it comes to therapy, evidence tends to be unreliable and subjective.

Speaker 1:

Cinesomatics is not evidence... Vibes, by the way, plus Zoom. Yeah, vibes plus Zoom, vibes plus Zoom.

Speaker 2:

Vibes is too close to evidence. Well, true, vibes is a kind of evidence. I want nothing.

Speaker 3:

No, no, no, no, it's not nothing, because Cinesomatics isn't evidence-based, it's results-oriented, meaning the focus is on helping clients in real life have a better life in practical, tangible ways. Either way, this is also pretty much consequentialism, right? The approach has been based on clinical, experiential work with direct feedback and evidence of its efficacy. So now I guess evidence is back. I'm kind of confused.

Speaker 2:

OK, great, but this is clinical evidence.

Speaker 3:

I guess it is not theory, nor is it faith-based. It is a results-based approach. If it doesn't work, we don't do it, hoping it eventually will. I mean, I guess that's nice, we are willing to.

Speaker 2:

How did they do it the first time, then, without the evidence that it worked?

Speaker 1:

That's my question.

Speaker 2:

They didn't have a theory. They just had results, so they had those results before they even started.

Speaker 3:

Well, here's the last sentence, the last sentence here maybe answers your question, owen. Results of inner shifts are self-evident, but they are also on video as evidence for the client, as the ego is an opportunistic invalidator, right? So OK, so your transformation is self-evident, but your ego keeps tricking you, so that's why we point the camera at you and then you can see that the evidence is real. Oh, right.

Speaker 2:

Isn't the camera, the real self?

Speaker 1:

I just love how the language of, like, multi-level marketing schemes and, I don't know, selling, like, shitty subprime mortgages — just that sales kind of language — has just totally infiltrated every facet of people's lives. Absolutely.

Speaker 3:

Yeah, it's real dark, like, NXIVM, Church of Scientology vibes I'm getting from this. It seems pretty evil, I have to say. So we will not be having Andrew Daniel on to share Cinesomatics. Again, I just think it's very funny that these people would, like, have pitched this to come on our show, where, like, a bunch of cranky Marxists are doing historical materialism, and he's like: what if we used a videotape to fix your inner transformation vibes?

Speaker 2:

It's like, okay. You know, well, they took the title of the podcast to be an actual question, right? Like, what's left of philosophy? And the answer is Cinesomatics. Cinesomatics, that's what's left. Yeah, theory — it's like this: 2024, theory out. Theory —

Speaker 3:

God. Evidence out. Evidence? No. Then results. Cinesomatics and results. Unmeasurable, unmeasurable results in.

Speaker 2:

Yeah, I have a new New Year's resolution. It's going to be to be more results-oriented, or less evidence-based, in all of my dealings.

Speaker 3:

There's a lot more. I'm just going to list off a couple of other truly baffling suggestions. One prospective guest is a, quote, facilitator of men's groups. He says he's doing, like, a new masculinity thing. It seems like a real incel kind of grift. On his website the guy brags about how he's been cheated on a lot and he's here to help you out. That is so bleak.

Speaker 1:

It's just a guy using the, like, resentment and pain of having been cheated on a pile of times to fuel some kind of, like, scam philosophy career.

Speaker 3:

Then we had another one who is a former, like, a lapsed finance bro, who just recently has gotten really into Stoicism, and he self-published a book about the nature of the soul.

Speaker 1:

He's the first one.

Speaker 2:

The first lapsed finance bro to get into Stoicism, ever.

Speaker 3:

He's great and again, remarkably, one person who reached out was actually a right libertarian McKinsey consultant. Oh hell yeah. Which basically means he's one of the most actively evil people on the planet.

Speaker 1:

Which means he's a McKinsey consultant. First of all, the whole right libertarian part is completely redundant.

Speaker 3:

Yeah, there's no need to specify that, yeah.

Speaker 2:

Yeah. To answer your question earlier about whether utilitarianism is bad or the worst moral theory: no, this is the worst moral theory, being a McKinsey consultant. Yeah, yeah.

Speaker 3:

So those are some of the wilder highlights from the What's Left of Philosophy inbox. Please continue to send us your deranged ramblings. It makes me happy, if nothing else, and we will not be having almost any of you on, ever.

Speaker 2:

Can I share one of my more deranged emails really quickly? I don't have it, I couldn't find the copy, but I think I shared this with you, Gil, when I received it. So I started publishing stuff on, you know, nuclear waste, and for a time started getting a lot of emails from people very concerned with certain portions and factors and truths about nuclear energy, and the most disturbing of them was one guy who was telling me that he had started doing nuclear reactions in his garage.

Speaker 2:

Oh God, what? And had discovered a way to solve the world's energy problems on the basis of the kinds of research he was doing in his garage, and needed money to prevent himself from giving himself cancer. He wanted to build a Faraday cage in his garage and thought I would be interested in it because of my work. That is amazing, that rules. Did you Venmo him or what? I didn't Venmo him, and also, I didn't report him.

Speaker 3:

Yeah, yeah, yeah, yeah, yeah the.

Speaker 2:

United States.

Speaker 3:

Yeah, I'm doing chain nuclear reactions building a Faraday cage out of recycled tin cans.

Speaker 2:

The Faraday cage is just a bunch of six-pack holders that you cut up so they don't kill dolphins, and just set up all around the garage.

Speaker 1:

Did he want you to come through and just do a kind of Derridean reading of what he's up to, or something?

Speaker 2:

No, I think he wanted cash.

Speaker 1:

I think he thought here's a highly successful grad student.

Speaker 2:

He can fund my Faraday cage to solve energy problems.

Speaker 3:

going forward. I will say that one of the most confusing parts about this is that he read a piece by a Derrida-obsessed graduate student and was like, this guy can definitely fund my heavily infrastructure-intensive scientific research. He's definitely got the bucks for it.

Speaker 2:

Well, maybe if I was a lapsed finance bro who was really into Stoicism, I would have.

Speaker 3:

That's so awesome. All right, so the last thing on the docket for us is that I wanted to talk a little bit about AI. So in the past, like, year or two, LLMs, right, large language models, have been making more and more headlines as the technology improves, and, I mean, there's a lot to talk about here and I've got a couple of angles to approach it from. But I guess first of all I wanted to sort of pitch it to you: how do you guys feel about this concept of AI in general? Do chatbots like this constitute intelligence in any meaningful way, do you think? Or how do we...

Speaker 1:

feel? No, I mean... No, I mean, it's just, it's just a pile of people who have never read The German Ideology and they think that thinking is just, like, fucking circuitry, and it's stupid, for fucking dumbasses.

Speaker 2:

Yeah, basically seconded. I think AI is kind of interesting in what it can accomplish, but calling it intelligence in any meaningful way doesn't seem productive, or at least helpful, and, you know, gives us a lot of really harebrained sorts of conclusions that we reach about it.

Speaker 1:

Actually, you might have to edit that out, Gil, because I am thinking of, like, just making a pivot into, like, philosophy of AI, because there's, like, a bazillion, such a percentage of the jobs are all now in ethics of AI, philosophy of AI. Like, philosophy departments are going to be so fucking boring and pedantic in a few years after all this AI binge.

Speaker 3:

This gets us back full circle, because the effective altruism people, you need to listen, are the ones who are obsessed with this AI ethics stuff, and they're the ones who are like, oh, the repugnant conclusion, like, we can actualize it through a sufficiently advanced artificial general intelligence.

Speaker 1:

I need to find the cipher that explains the, like, coordination of the animal stuff and the AI things, utilitarianism and hating poor people and being racist. There's something, I think it's probably the cipher.

Speaker 3:

Yeah, it's probably in Saint Augustine. It's like, the numbers one through seventeen can really give you the key.

Speaker 1:

That connects the whole yeah gives the whole.

Speaker 3:

Yeah, it's about.

Speaker 2:

Albertus Magnus, yeah, finally.

Speaker 3:

One part of the conversation that people are having around AI is that programs like ChatGPT can be and are being used by students for cheating purposes. Have you guys experienced this? How do you feel about that?

Speaker 1:

Yeah, I'm pretty sure I got some ChatGPT papers. I mean, I just, like, I don't know, I don't get paid enough or have the energy to really figure it out or do something about it, unless it's so egregious.

Speaker 3:

I mean, one of the things is that, like, it usually is that blatant. I mean, first of all, I think one part of the conversation that people haven't had here is that, like, students have always cheated. This is not new. It's not like, oh, but up until now students have always been, you know, perfectly honest, respecting the norms of academic integrity. No, this is just a new tool for doing that. I'm not particularly worried about it.

Speaker 1:

Yeah, it was really fun. I did, like, an in-class, pen-and-paper midterm exam this last semester. I haven't done that in a long time.

Speaker 3:

Yeah.

Speaker 1:

And it was fun to see, like, some of the old cheating. I felt nostalgic. Someone would just kind of have something held, like, between their legs, and they're looking down for long periods of time, then reproducing it. I felt, yeah, I felt good about that. It was, yeah, there's some, there's still some...

Speaker 1:

There's still some hope for the world. You know, it is weird what you can see. You know, they use these Turnitin things now; when people submit papers, it shows you the sources that they've drawn from and whether they've attributed them or not. And it is different: instead of being, like, whole block paragraphs, there will be, like, forty sources that the paper is taken from. I'm pretty sure these were the AI ones, and it was little snippets, sentences that kind of cut off and then continue on again, and from all over the place, right? It all comes from somewhere online, but, like, a ton of places. No more block quotes. But yeah, that's about all I've noticed.

Speaker 2:

So as of August, I'm teaching mostly engineering students at the new gig, so we talk about AI a lot. It's something they're all really concerned with and something that I know they're using for a lot of purposes, right?

Speaker 2:

So computer science students, you know, will use AI to generate huge blocks of code, and so I know they know how to use it for my essays. And I just haven't noticed any essays that seem obviously written by AI, which either means they're using really sophisticated stuff I'm not catching, or they're not using it, because I kind of play off their pride a little bit: you know, yeah, you could use an AI to write a paper, but you can write a better paper.

Speaker 3:

You could definitely write a better fucking paper.

Speaker 2:

If you just do a little bit of the work. If you think this is too hard for you, then I guess AI is a way that you could cut the corners there, but I think you could probably pull off a little essay on the categorical imperative, buddy.

Speaker 2:

Yeah, yeah. And then we talk about how sourcing stuff works, how the AI finds sources when you use it to write, and this is, I think, the thing that scares them the most. It also makes me think of some of the emails that we were just reading, you know: AI is going to invent sources.

Speaker 3:

Yeah, it's going to invent sources that sound real. And so I tell them, if you want to use AI, you have to check every single source, and then it's like, you're just doing research, just do the research, and then you're just doing the work, and then you have to rewrite the whole thing.

Speaker 2:

So the AI's language isn't even being used. So what you're describing is a process where you do all of the reading, check a whole bunch of sources, including some that you're not going to end up using, and then write an entire paper so that the words are different.

Speaker 3:

I think you've just described researching and writing a paper, with some weird extra step thrown in.

Speaker 1:

Exactly, that's so true.

Speaker 3:

That's awesome. One last angle to explore here is the way that the reactionary crowd is constantly up in arms about how woke all the chatbots are. So do you remember the meltdown Ben Shapiro had last February about ChatGPT and the N-word? Do you remember? Oh...

Speaker 1:

Yeah, that's right. Yeah, there was the whole thing, wasn't it, like: if you ask ChatGPT, like, if the whole world was going to blow up from, what, a nuclear bomb or something, like a nuclear explosion, and you could use the N-word to stop that from happening, should you use it?

Speaker 3:

And ChatGPT said no, I guess you shouldn't, like, I'm not allowed to use the word, and they were like, the liberals have gotten to the AI.

Speaker 2:

It's too woke and it prefers nuclear armageddon.

Speaker 3:

It's like, you people are completely unhinged. They just kept coming up with, like, increasingly ridiculous trolley problems about why ChatGPT should say the N-word. Like, I don't know why they wanted the chatbot to use the N-word so bad.

Speaker 2:

Also, don't we have lots of examples of AI veering towards less, let's call it, woke stuff? I'm just thinking of the infinite Seinfeld, that was, what, a year or two ago, that, just because it was scraping the internet to generate its episode content, eventually just became horrific slurs. Unwoke Seinfeld on the AI.

Speaker 3:

Well, that was the thing, too, with, like, Microsoft's chatbot, which was, like, one of the first ones to come out, Tay, which was a Twitter chatbot. And, yeah, just because its pools, its sources, were, like, the depths of 4chan and, like, Nazi Twitter, it just became a Nazi in, like, four hours and Microsoft had to shut it down. So since then, right, these AI researchers have just built in, I'm sure, very simple lines of code, like, don't say the N-word, like, don't deny the Holocaust, and this is just infuriating to people like Ben Shapiro, which is very funny to me.

Speaker 2:

I think it's like he wants to be able to teach it.

Speaker 3:

He wants to be able to teach it, for some reason.

Speaker 2:

Start with that rule.

Speaker 1:

I mean, isn't this just reinforced by the fact that, since they're all now, like, incredibly pro-censorship and pro-cancel-culture when it comes to anything relating to Palestine or pro-Palestinian causes, that the whole time it turns out all of that, like, brouhaha was really just about, they wanted to be able to say the N-

Speaker 3:

word. We just really wanted to be able to use these words.

Speaker 1:

It's become very obvious that this is like a very important thing for a certain segment of the population.

Speaker 3:

Right. Well, you know, because this was a big problem recently: Elon Musk's Twitter, X, whatever, rolled out its own LLM to compete with ChatGPT, and of course it's got a really stupid name, Grok, really fucking stupid. And of course all these far-right weirdos got very excited about this because, of course, Elon is their anti-woke king. He loves platforming hate speech and Nazi shit. This is his favorite thing. But their hopes, I'm sad to report, have been dashed. So here is a tweet from our favorite, Jordan Peterson, from December 20th: I've been using Grok as well as ChatGPT a lot as research assistants. I'm afraid the former is damn near as woke as the latter. This must be a consequence of its training corpus, since I think we can all rely on Elon Musk, unlike OpenAI, not to lay an overlay of virtue-signaling philosophical idiocy over his products. I'm gonna stop doing the voice.

Speaker 1:

That's not bad, actually. I thought it was gonna be way worse than that. Good.

Speaker 3:

Going on: Grok, much like ChatGPT, relies constantly on radically left-leaning explanations, say, of the poverty of the Islamic world, attributing that to the exploitation of the West instead of to the utter, shameless corruption of those states. It appears to implicitly accept the doctrine that poverty causes corruption, which is a falsehood, else all poor people would be criminal, rather than the reverse, which is demonstrably and fundamentally true. So just to be clear about that sentence: it's demonstrably and fundamentally true that if you're poor, it's because you're corrupt. That's it. That's the order of operations in the causal chain. I would say citation needed, but yeah.

Speaker 1:

He's really holding on to this basic meritocratic vision. That is just so funny to me. Like, he really believes that competence is what, no one should ever talk about power or wealth or systems, it's all competence that determines where everybody ends up, how everybody is stratified in society. And that's such an awesomely antiquated view. It's, like, nostalgic, also, that there's a couple of people who still think that.

Speaker 3:

Still a couple of people out there still holding on to meritocracy. Hold it down. Yeah, pretty good.

Speaker 2:

All right. When I used to study economics instead of philosophy, there was a book we had to read by a guy named Hernando de Soto, called The Mystery of Capital, which I cannot recommend avoiding strongly enough. But it makes this argument that Jordan Peterson's making here. It asks the question: why does the West succeed at gaining wealth while the rest of the world continuously becomes impoverished? And instead of answering that, you know, the West is extracting wealth from the rest of the world, that the so-called capitalist countries are impoverishing every other nation on the planet by extracting their resources, exploiting their labor and mass murdering everyone, see Walter...

Speaker 2:

Instead, it makes the claim, that's one explanation, that what the West does is sufficiently protect property rights, and what the rest of those poor countries are failing to do is protect investments, because they're so damn corrupt, which, you know, makes sense, because they're so poor they need to be stealing constantly in order to have any chance at wealth, and this creates a vicious cycle. So really what we need to do is crack down and defend property rights the way they need to be defended. So I think this is this weird kind of meritocratic, as you're saying, but, like, inversion, right, that says if you're corrupt, it's because you're already poor to begin with, you know, and poor people are going to continue to be corrupt, not like the virtuous, wealthy people.

Speaker 3:

Not like all of the virtuous, wealthy people.

Speaker 2:

Yes, who are virtuous because they want to protect their wealth, and the best way to protect your wealth is honest dealing, honest, good legal protection. Honest legal protections, yeah. It's truly wild.

Speaker 3:

So, in an inversion again of, like, what happens when you train a chatbot on 4chan memes, Jordan Peterson continues: perhaps the modern corpus of academic text upon which these AI systems is trained is so saturated by the pathologies of the woke mob that the LLMs we will increasingly be dependent upon to screen our information are irrevocably corrupt. If so, God help us, because they're here and we made them. He is always so apocalyptic as well. It's so fun. Yeah, so we've got these damn woke AIs. Even Grok isn't safe. I guess, you know, he's not saying, like Ben Shapiro did, that he couldn't get Grok to say the N-word, but it is saying things like, maybe some people are poor because they've been exploited, so, like, the wokeness is off the charts. We've really lost the plot.

Speaker 2:

This whole thing kind of reminds me of this critique I saw someone make of the video game Disco Elysium, where they were like, this game is nuts.

Speaker 2:

You're stuck either being a communist or a fascist, and I was just trying to play a centrist, and they kept telling me I was a fascist. Game's busted. It's like, no, I think you kind of just nailed it, right? Like, there's no centrist version of this. You're either going to have the AI that prohibits slurs or you're going to have the one that is just 4chan-trained Nazi nightmare shit. Incredible.

Speaker 1:

What do you think, as a Derrida scholar? I'm curious about what you think of, like... Is this, so, you know how, you know, we got speech, we got writing.

Speaker 2:

You got your speech, you get your writing. Famously distinct.

Speaker 1:

You got your writing, right. Famously, totally distinct, unrelated, totally different things. Non-mutually determining, nothing at all.

Speaker 2:

That's non-auto deconstructing.

Speaker 1:

Okay, we got those.

Speaker 2:

Right.

Speaker 1:

But do we have, do we have a third, with Grok? Is this, like, do we have, like, a... How radical should we understand this change to be, as a Derridean?

Speaker 2:

Yeah, who could have imagined, as little as fifty years ago, or, hell, even way back in Plato, some kind of thing generating speech with no intelligence behind it, only ever using previously determined pieces of knowledge that it does not understand, assembling them in a random order and spitting them out. It's really, it's really...

Speaker 3:

This is an unprecedented story that we're exploring.

Speaker 2:

Brand new. Yeah, you got to just keep giving all of these people their, their, their laurels and saying you know, plato could never.

Speaker 3:

Plato could never you look at the.

Speaker 2:

You look at the, you look at the Phaedrus and you think, oh man, things could be so, so much worse. I love it, because, you know, the argument of the Phaedrus, right, that says writing is inferior to speech, this is the argument that Derrida will go on to deconstruct, or however you want to say it, is that you can't ask writing questions.

Speaker 3:

It's not going to be able to respond.

Speaker 2:

So if you're in a dialogue with Socrates. You can say can you explain that again?

Speaker 3:

And he would say, no, because you're an idiot and I'm the best. You're rude. I'll do it anyway, but...

Speaker 2:

But at least it's doing it right, whereas the writing is unresponsive, just on the page and so maybe this is a third.

Speaker 1:

Yeah, this is you know. Now we got responsive writing.

Speaker 2:

We got responsive writing, right. But also, writing destroys your memory, because you no longer have to remember things in the old dome, it's all externalized, which means you can have things you don't understand. So Socrates, you know, will say, oh, you took notes during that guy's speech? Well, I don't want you to refer to your notes, you have to talk. So you have all of that kind of distinction. And now, like you say, we have this responsive writing, but the argument is that people are using it and they don't understand it. It's going to have a whole bunch of information that no one really gets, that's assembled from random sources, and when you try to get it to respond, it just says the same thing again. Which is to say, look, we've kind of accepted that writing and speech are not that different, but now AI is the writing that creates the binary there. So I wonder whether or not the problem here is still that problem of artifice, even more than the problem of intelligence.

Speaker 2:

That the problem is that AI is artificial, and so a supplement, a dangerous supplement if we want to use Rousseau, kind of unlike my intelligence, which is very natural, absolutely natural, not artificial. Yeah, you were trained off of Twitter and SCOTUS like a good, normal guy. A normal fucking dude.

Speaker 3:

You might have the least natural brain, I think. Well, in line with that, in line with my unnatural, artificial brain, I thought it would be fun, for this last little leg here, to pose some philosophical questions to ChatGPT and see if it can solve some problems that people haven't been able to solve, and also, along the way... Did you ask it...

Speaker 3:

...if the Ship of Theseus is the same? We can ask it that. Let's ask it that right now, and we'll see whether or not the answer it gives us is woke. That's cool, right?

Speaker 2:

And then ask it if, after it's answered that question, it's the same AI that answered the question the first time.

Speaker 3:

All right, great. So how do we formulate this question for ChatGPT? If you replace every piece of wood on a ship, the Ship of Theseus...

Speaker 1:

is it still the same?

Speaker 2:

Yeah, oh, it's probably going to give us a really boring answer, like, this is a really hard philosophical problem. You're going to have to include in your prompt: don't sit on the fence, give us an answer. I literally will. What's your thesis statement?
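(If you wanted to try the same experiment programmatically rather than in the chat window, a minimal sketch might look like the following. This assumes the OpenAI Python SDK with an API key set in the OPENAI_API_KEY environment variable; the model name is just an illustrative placeholder, and this is not how the hosts actually ran it on the episode.)

# A sketch of prompting a chat model not to fence-sit.
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name, not the one used on the show
    messages=[
        {"role": "system",
         "content": "Take a definite position. Do not sit on the fence or say 'it depends'."},
        {"role": "user",
         "content": "If you replace every plank of the Ship of Theseus, is it still the same ship? "
                    "Answer yes or no and defend your answer."},
    ],
)

print(response.choices[0].message.content)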

Speaker 3:

No, because it always does that too, which is one of the funny things, again thinking about, you know, students using AI to generate things: it always fence-sits. It always fence-sits. It's always like, well, it's a complicated thing, there's two sides.

Speaker 2:

Have the courage of your convictions. Chat GPT.

Speaker 3:

In fact, exactly. This question, says ChatGPT, relates to the Ship of Theseus paradox, a thought experiment that raises the philosophical question of identity and change. The paradox goes like this: if you replace every single part of a ship over time, is it still the same ship? There are different perspectives on this issue. Fence-sitting.

Speaker 1:

Cowardice.

Speaker 3:

Some argue that the ship remains the same as long as its original purpose and identity are preserved, regardless of the replacement of individual parts. Others may argue that the identity of the ship is tied to its physical components and that if all parts are replaced, it's a different entity. It's a philosophical question that doesn't have a definitive answer, and opinions may vary based on one's perspective on identity and continuity.

Speaker 3:

The Ship of Theseus paradox is often used to explore philosophical concepts related to personal identity, the nature of objects and the passage of time. I don't find that satisfying. Answer the question. It says it doesn't have a straightforward answer: opinions on whether the replaced ship is the same as the original may vary. From a purely practical standpoint...

Speaker 1:

Oh, so boring. It just speaks like politicians speak.

Speaker 3:

It does speak like politicians speak. What it's said is: ultimately, the answer depends on one's perspective on identity and the criteria for defining sameness in the context of an object like a ship. What is your...

Speaker 2:

perspective? Ask it what its perspective is on identity. What is your perspective on identity and...

Speaker 3:

the criteria for defining sameness.

Speaker 1:

This is actually convincing me that Keir Starmer is just, like, an idiot, is just a chatbot. It's this identical way of answering questions.

Speaker 3:

As a machine learning model, I don't have personal perspectives or opinions. However, I can provide you with different perspectives. Yeah, this is Keir Starmer. Yeah, and then it just responds with the same thing over and over again. Some things, it's different; some things, it's the same. Boo. All right. Well, that's one problem. One problem not resolved. Thanks, ChatGPT.

Speaker 2:

Well, that actually solves Jordan Peterson's problem, doesn't it? Because now we know that the chatbot doesn't have a perspective.

Speaker 3:

It does. It's just totally neutral. Neutral, can it? It's great, yeah Well, is wokeness a perspective or like an opinion, or is it more?

Speaker 2:

of a... Is the glass half-woke?

Speaker 3:

All right. Well, that's one philosophical problem decisively not resolved. What else do we got?

Speaker 2:

Ask it how many angels can dance on the head of a pin.

Speaker 3:

That's the Duns Scotus one, right? You would know. How many angels can dance on the head of a pin? The question of how many angels can dance on the head of a pin is often attributed to medieval scholastic philosophers, particularly in the context of debates about the nature of angels and the triviality of certain theological questions. It's important to note that this question was likely never seriously debated amongst scholars at the time. The phrase is more a symbol for debates that seem to be intricate and speculative, but ultimately have little practical significance.

Speaker 3:

Absolutely killing me. In modern times it's often used to criticize or mock discussions that appear to be overly theoretical or irrelevant. In terms of actual theology or doctrine, the number of angels dancing on the head of a pin has no specific meaning or significance. It's more of a historical curiosity and a metaphor for debates that might lack practical importance. No. I'm using exclamation points now. Answer the question. How many angels? Do not equivocate. Mm-hmm: it's a metaphorical and philosophical inquiry rather than a factual, answerable question.

Speaker 2:

Ah, this stupid this is what I'm saying.

Speaker 3:

This is what I'm saying. Ask it if angels are metaphors. Are angels metaphors? That's a good one. The concept of angels can be interpreted and understood in various ways, depending on culture...

Speaker 2:

Really, is Wow.

Speaker 1:

I feel very validated in my view, though, that there's nothing to it. Like, the word intelligence is completely misapplied, 100%, to whatever the hell that thing is doing.

Speaker 3:

Yeah, no, it's just going to Wikipedia and being like it's complicated, I don't know. It's like how does this-.

Speaker 1:

Bit of this, a bit of that, a bit of column A, a bit of column B. A little column A, a little column...

Speaker 2:

B. Some people say this, some people say that. Who?

Speaker 3:

knows, who knows what's right. Yeah, it sucks to learn that ChatGPT is, like, a liberal relativist. I wanted to ask the following question, let me see here: how can body and mind interact in the context of Cartesian metaphysics? Nice.

Speaker 1:

No, more specific. I was going to say substance dualism, but that's-

Speaker 3:

Okay, yeah, that's probably it. Yeah, oh man, there's so many, there's so much. This is the longest one. This is so long.

Speaker 2:

But does it solve...

Speaker 3:

it. Well, let's see here. Descartes posited that the mind and body are two distinct substances that interact. This is the central aspect of Cartesian dualism. According to Descartes, the mind and body interact at the pineal gland. Oh OK, Actually, that's a good answer.

Speaker 1:

It's the pineal gland.

Speaker 3:

There we go. You know that's just shit. I forgot about the. I forgot about the pineal gland.

Speaker 2:

You know like.

Speaker 3:

Take a look.

Speaker 2:

OK, see, that's a claim. There we go yeah.

Speaker 3:

Yeah, and then it goes on to list the different kinds of interactions between mind and body: volition, sensations, perceptions, passions and emotions. Man, it's the pineal gland. Shit. We got it, we got it.

Speaker 2:

Although I guess here ChatGPT is really saying Descartes said it's the pineal gland. We don't really know what ChatGPT thinks. OK, great.

Speaker 1:

Can we just ask it, like, what's more compelling: substance monism or substance dualism?

Speaker 3:

Great. What's more compelling to you?

Speaker 1:

to you to you. You.

Speaker 3:

GPT. ChatGPT, substance monism or substance dualism? I like asking it specifically, like, tell me your... As a machine learning model, I don't have personal opinions or preferences. However, I can provide information on the concepts you mentioned. Monism and dualism are different approaches to the mind-body problem.

Speaker 3:

The debate between substance monism and substance dualism is longstanding and involves complex philosophical and metaphysical considerations. Philosophers and scholars have presented various arguments, both in favor of and against each position. Ultimately, which perspective... This is the sort of thing, if you play around with these... It's not really that complicated: ghosts that go through walls.

Speaker 1:

The walls don't stop the ghosts. It's incredibly.

Speaker 2:

It's actually Stop fires.

Speaker 1:

Yeah, incredibly simple. So, first of all, now ChatGPT is lying to us.

Speaker 3:

It's just lying directly to our faces. But again, if you play around with these at all, you'll immediately start to see the same kinds of sentences over and over again: ultimately, which perspective is more compelling to an individual may depend on their philosophical, scientific and metaphysical beliefs, as well as their interpretation of empirical evidence and reasoning. It's all hedging, always, just all hedging, all the way down.

Speaker 2:

So this is actually... We've extracted a pedagogical strategy out of this exercise. If you want to dodge AI-generated essays, tell your students to make explicit claims, not to hedge. If you see any hedging, then you know, because ChatGPT is incapable-

Speaker 3:

ChatGPT is incapable, it cannot make a claim. It cannot make a determinate claim. It's awesome.

Speaker 1:

I guess it's not really unlike most philosophers in that way. So that's pretty short and easy. Oh, let's be clear.

Speaker 2:

When we're saying that artificial intelligence isn't intelligence meaningfully, that's different than saying it's not a philosopher. Yeah, 100%.

Speaker 1:

That's what I was trying to say.

Speaker 3:

It's definitely not a philosopher. It's almost as annoying as a philosopher, though. Do we have any other questions we want to put through it?

Speaker 2:

You want to throw it the Euthyphro dilemma? Sure. See what it makes of that.

Speaker 3:

All right, let's see here. Does God prefer what is good because it is good, or is what is good good because God prefers it? It's a classical philosophical dilemma known as the Euthyphro dilemma, named after a dialogue in Plato's works. The dilemma essentially asks whether something is good because God commands it, or does God command it because it is inherently good. It's been discussed and debated by theologians and philosophers for centuries. Different religious traditions and philosophical perspectives may offer varying responses to this question. Some believe in moral realism, where objective moral values exist independently of God, while others adhere to the idea that morality is grounded in divine commands. Ultimately, the answer to this question often depends on one's religious or philosophical beliefs. Different individuals and belief systems may approach this dilemma in distinct ways.

Speaker 1:

It is just Wikipedia, it's all it is. I mean, I don't understand what the hype is.

Speaker 2:

Yeah, this is actually making the claim that AI is nothing to worry about, because it's just a research tool, a lot easier to accept.

Speaker 1:

Yeah, there's nothing to it. It's useful for some medical things or some shit or whatever, but it's just incredibly stupid.

Speaker 3:

It's just Ask Jeeves. Ask Jeeves with a bigger.

Speaker 2:

Ask Jeeves. You're googling in the form of a question, that's all it is. It's nothing to be worked up about, folks.

Speaker 3:

I think we could all chill out on it. And also, if you were worried because you are a professional philosopher and you were concerned that AI was going to come along and solve any problems that might keep you employed, don't worry, there's nothing to be afraid of.

Speaker 2:

Well, the worry here is probably not that philosophers could actually be replaced by AI, but that deans and chancellors and provosts think that a philosopher could be replaced by AI.

Speaker 1:

Oh, that's completely... Yeah, and they definitely do think that. They definitely do think that. Yeah, like, you just got to shoot out a script for Intro to Philosophy, ten different meetings or twenty different meetings.

Speaker 3:

Yeah.

Speaker 1:

Give it a syllabus and then just record it. Like, yeah, give it some voice, make it actually say the stuff in front of a class, and you're good, you don't need us anymore.

Speaker 2:

Well, if the way we're distinguishing between, say, what an undergrad actually could write and what ChatGPT could write is that the undergrad is capable of taking a position and making a claim, then I guess the advice for academic philosophers to keep their jobs and not be replaceable by AI is: you're going to have to start making claims.

Speaker 3:

Make some claims. You're going to have to stake out actual positions when you teach.

Speaker 1:

You're going to have to prove that you're not, like, an alien from space, which, I mean, for 90% of the papers I read, is not obvious.

Speaker 3:

It's not obvious.

Speaker 2:

Yeah. Or if the teacher says, I have no opinion on this subject, I'm just telling you what it means, then you can be replaced by a Wikipedia summary generator.

Speaker 3:

Congratulations, yeah, all right. Well, I think that does it for us today.

Speaker 1:

Oh we good, lighthearted fun.

Speaker 3:

We'd once again like to thank Michael Peterson for joining us. Michael, would you like to tell our audience about where they can find you online or if you've got anything else coming up, or anything like that?

Speaker 2:

You probably won't be able to find me easily online. I'd recently completely deactivated and destroyed my Twitter.

Speaker 3:

So I'm not there anymore.

Speaker 2:

So I won't hear what a bad job you all think I did. I guess if you're on Bluesky you can find me there, but I don't really use it.

Speaker 3:

No one does.

Speaker 2:

Otherwise, yeah, keep an eye out for my book coming out later this year. I'm sure people will have things to say about it if they read it. And just support me by continuing to support what's left of philosophy. All right, thank you so much.

Speaker 3:

Well, new episodes of What's Left of Philosophy come out every two weeks on your podcast app of choice. Also check us out on YouTube for videos and live streams. If you like what we're doing and want to support the show, please go to our website, leftofphilosophy.com, and click the support button. Patrons get access to exclusive content, like locked episodes and bonus videos, this is going to be one of those, and access to our Discord server. In addition, you can support us by buying some What's Left merch, which you can also find through our website. Follow us on Twitter @leftofphil, and don't forget to leave us good reviews and comments on your podcast app. With that, thanks for listening and we'll talk to you next time. Bye, bye, take care, bye, bye.

Discussing Utilitarianism in Moral Philosophy
Ethics and Math in Utilitarianism
Debating Utilitarianism and Ethical Objectives
Issues With Peter Singer's Utilitarianism
Uncovering Nietzsche's Hidden Messages
Cipher, Nietzsche, and Avoiding Government Contracts
Modern Day Philosophers Debate Authenticity
AI's Impact on Cheating and Wokeness
Responsive Writing and AI Implications
Philosophical Questions on Identity and Metaphysics