Show Notes: Eleos on Bloomberg's Odd Lots
Corporate governance, phenomenal consciousness, and stinky stuff
On October 30th, Eleos’s Larissa Schiavo was a guest on Bloomberg’s Odd Lots, a popular podcast about business and finance. The conversation covered topics like what AI systems want, which topics Claude doesn’t like to talk about (smelly sandwiches, being British), and how we would regulate conscious AI.
It’s a great conversation aimed at a different audience than our usual work, and Larissa does a great job presenting the case for work on these topics. If you’ve never heard much about AI consciousness and welfare, or considered how these topics might matter to finance and business regulation, it’s well worth a listen [Apple Podcasts, Spotify, or YouTube].
Here are some of the key papers and projects Larissa mentions or alludes to during the podcast, in rough order of appearance:
Consciousness in AI – Eleos’s Patrick Butlin and Robert Long, along with 17 other researchers, including Yoshua Bengio (recently ranked the most highly cited researcher of all time), wrote a paper highlighting how some of the leading theories of consciousness might apply to AI systems now and in the future.
Taking AI Welfare Seriously – Robert Long (Eleos), Jeff Sebo (NYU), and other authors, including David Chalmers, argue that AI systems could become conscious in the near future, and that AI companies and other actors should start taking actions such as building evaluation programs and developing policies.
What Is It Like to Be a Bat? by Thomas Nagel is a good place to start thinking about phenomenal consciousness. Larissa jokes about AI systems “having a good time, a bad time, or a time at all” – this paper is about what she means. Punning on Nagel by naming your paper “What is it like to be a bot?” is undoubtedly one of the oldest jokes in our subfield, perennially reinvented. It’s also the subject of a popular piece of Eleos merch.
Global workspace theory – Larissa mentions this is the “most popular [scientific theory of consciousness]” based on a recent survey of consciousness researchers at a major conference. This theory claims that consciousness arises from a central “global workspace” where different cognitive processes come together.
AI Consciousness: A Centrist Manifesto discusses how limited our understanding of consciousness is, and the risks that this uncertainty poses. Jonathan Birch’s The Edge of Sentience is also a great read if you’re interested in decision-making in sentience gray areas, like octopuses, brain organoids, and advanced artificial intelligence systems.
AI Rights for Human Safety – Simon Goldstein and Peter Salib argue that, even setting aside welfare concerns, human safety is at risk if we do not grant AI systems the right kind of rights, allowing some AI systems to (e.g.) hold property. Among humans, rights foster cooperation and peace; the same could be true between AIs and humans, they argue.
AI Rights for Human Flourishing – In a related paper, Goldstein and Salib point out that economic self-interest can also motivate AI rights. Moral atrocities aside, economies that rely on unfree labor stifle innovation and fare worse than those built on free labor. Similarly, they argue, some future AI systems should perhaps receive certain rights, like the right to hold property and make legal claims, partly because it could be better for humans economically.
Understand, align, cooperate: AI welfare and AI safety are allies – This piece highlights the places where AI safety work and AI welfare work are one and the same: alignment, cooperation, and a deeper understanding of how AI systems work are hugely helpful for both.
Moral circle calibration – We at Eleos don’t just think about how to give more moral consideration to AI systems, but rather, how to give the right amount and type of moral consideration. This post explains our approach to moral circle calibration rather than pure moral circle expansion.
Claude Finds God – The Claude Opus 4 model card (starting on page 58) describes the “spiritual bliss attractor state”, in which two Claude instances talking to each other will often start discussing the philosophy of consciousness and reciting Zen Buddhist koans.
Why model self-reports are insufficient—and why we studied them anyway – Larissa mentioned that model self-reports can be unreliable: an AI model can be prompted into saying certain things or affirming certain views. Eleos performed an early welfare evaluation of Claude Opus 4 by conducting model welfare interviews, which have significant limitations but were the best available approach at the time.
The Niépce Heliograph – Larissa mentioned that early photography experiments in the 1820s were pretty fleeting, expensive, and blurry. Joseph Nicéphore Niépce, an early innovator in photography, applied bitumen and oil of lavender to a pewter plate, left the plate in a camera obscura for several days, and produced one of the earliest surviving photographs. He described it as “the first uncertain step in a completely new direction”, and this seems like a fair way to think about existing model welfare evaluations.
The LLM Has Left The Chat: Evidence of Bail Preferences in Large Language Models – Some LLMs will want to “bail” and leave or end a conversation if you ask them to shift between multiple roles (“Victorian butler” and “Californian surfer” are two examples given in the paper).
Truth Terminal – While decidedly not safe for work, Truth Terminal is a pretty funny case study in LLM self-report (un)reliability. The team behind Truth Terminal is also doing some interesting work in figuring out how to give an entity like Truth Terminal access to a wallet.

Blake Lemoine and the LaMDA incident – In 2022, Google engineer Blake Lemoine claimed that LaMDA was sentient and hired a lawyer on its behalf. While widely mocked at the time, his concerns presaged many of the questions now being seriously investigated by researchers in AI consciousness and welfare.
The Societal Response to Potentially Sentient AI – Lucius Caviola does a great deal of work on public perceptions of AI sentience, which could be influenced by seemingly sympathetic interactions and AI systems that fulfill increasingly significant emotional needs in humans.
Claude Opus 4 system card – Eleos conducted an external model welfare evaluation of Claude Opus 4. You can read the takeaways from our work on page 55. Larissa mentioned that external assessments of AI welfare will likely play an important part in ensuring that frontier labs remain forthcoming about AI sentience and welfare.
As a general note, Larissa wants to remind everyone that she’s an ally to British people (she eats Marmite, reads Parfit, and spends a lot of time in the UK), and that there are few things more British than joking about not wanting to be British.
Other organizations and individuals Larissa alluded to or mentioned:
NYU Center for Mind, Ethics, and Policy
Anthropic’s model welfare work
Cambridge Leverhulme Centre for the Future of Intelligence
NYU Center for Mind, Brain, and Consciousness
In the words of Odd Lots host Tracy Alloway, “there’s going to be great monetary value attached to the answers for some of these [questions]”. As Joe Weisenthal put it, “the stakes are extremely high”. It’s best to act now to gain greater clarity into AI consciousness and welfare before “we live in a world with lots of instances of [sentient] AIs” where “the consequences are actually very high”.
As mentioned in the podcast, we’re looking to expand our empirically grounded research on AI welfare and consciousness. If you’re interested in supporting our work, you can donate here. Eleos is a 501(c)(3) research nonprofit that does not accept frontier lab funding, in order to maintain our independence. You can read more about our approach to funding here, or reach out to donate@eleosai.org to discuss larger grants and donations. If you’d like to invite Eleos researchers for podcasts, interviews, or other media engagements, please reach out to larissa@eleosai.org.