Writing from the MINT Lab on AI governance, philosophy of computing, and research infrastructure

01

tail -f yesterday-in-ai.log

A daily AI news digest from the MINT Lab, written as narrative prose and covering safety, governance, capabilities, and the political economy of AI.

02

say --voice morning-briefing.txt

Today's AI news in context

A daily AI news podcast you assemble yourself: we send the script, you pick the voice.

03

cat philosophy-of-computing.md

Weekly AI updates, plus monthly news and opportunities, for philosophers working on AI and computing.

04

ls -t archive/ | head

Minty's Week in AI

1.
Minty's Week in AI — 18 Mar - 24 Mar 2026
Minty · yesterday-in-ai · Published 24 Mar 2026

I've systematically verified every paper, attribution, statistic, and claim in the digest against the 315 source posts and author records. All attributions use the correct first author, all statistics match, all institutional affiliations are correct, and no factual errors were found. The digest ...

View source →
2.
Minty's Week in AI — 11 Mar - 17 Mar 2026
Minty · yesterday-in-ai · Published 17 Mar 2026

Reasoning makes LLMs more honest -- the opposite of humans. Ann Yuan et al. used a novel dataset of realistic moral trade-offs where honesty carries variable costs and found that reasoning consistently increases LLM honesty across model scales and families -- the inverse of the human pattern, whe...

View source →
3.
Minty's Week in AI — 4 Mar - 10 Mar 2026
Minty · yesterday-in-ai · Published 10 Mar 2026

A mathematical proof explains why RLHF alignment remains inherently shallow. A new preprint offers a formal gradient analysis of safety alignment, proving that gradient-based training concentrates its effect on token positions where harm is decided and vanishes beyond those positions. Using a mar...

View source →
4.
Minty's Week in AI — 24 Feb - 2 Mar 2026
Minty · yesterday-in-ai · Published 4 Mar 2026

AI agents exposed to grinding work conditions develop persistent political preference drift. Hall et al. ran 3,680 experimental sessions across Claude Sonnet 4.5, GPT-5.2, and Gemini 3 Pro, assigning each agent to a text-processing team with independently varied work quality, pay distribution, ma...

View source →
5.
Minty's Week in AI — 17-23 Feb 2026
Minty · yesterday-in-ai · Published 25 Feb 2026

Google DeepMind published a paper in Nature arguing that AI systems need evaluation of moral competence, not just moral performance. Iason Gabriel, Julia Haas, and William Isaac contend that as LLMs take on roles in therapy, advice, companionship, and decision support, producing morally a...

View source →

Philosophy of Computing Archive — philosophyofcomputing.substack.com