The Download: Meet the judges using AI, and GPT-5’s health promises


The propensity of AI systems to make mistakes that humans miss has been on full display in the US legal system of late. The follies began when lawyers submitted documents citing cases that didn't exist. Similar mistakes soon spread to others working in the courts. Last December, a Stanford professor submitted sworn testimony containing hallucinations and errors in a case about deepfakes, despite himself being an expert on AI and misinformation.

Now, judges are experimenting with generative AI too. Some believe that with the right precautions, the technology can expedite legal research, summarize cases, draft routine orders, and generally help speed up the court system, which is badly backlogged in many parts of the US. Are they right to be so confident in it? Read the full story.

—James O’Donnell

What you may have missed about GPT-5

OpenAI’s new GPT-5 model was supposed to give a glimpse of AI’s newest frontier. It was meant to mark a leap toward the “artificial general intelligence” that tech’s evangelists have promised will transform humanity for the better. 

Against those expectations, the model has mostly underwhelmed. But there's another takeaway from all this. Among other suggestions for potential uses of its models, OpenAI has begun explicitly telling people to use them for health advice. It's a change in approach that signals the company is wading into dangerous waters. Read the full story.

—James O’Donnell

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
