WhoKnows.
CAREER · ACTION · TECH · MONEY · 3 stories

Daily Briefing — April 24, 2026


01

OpenAI releases GPT-5.5, a more powerful engine for coding, science, and general work

Fast Company Tech →
Tech shifts + Career & skills

OpenAI dropped GPT-5.5 on Thursday, and the headline number is an 82.7% score on Terminal-Bench 2.0, which tests whether a model can handle complex command-line workflows that require actual planning and tool coordination. For context, GPT-5.4 scored 75.1%, Anthropic's Opus 4.7 came in at 69.4%, and Google's Gemini 3.1 Pro landed at 68.5%. OpenAI is pulling noticeably ahead on agentic capability, at least by this particular benchmark.

The bigger story here is what GPT-5.5 is being pointed at. It is the new engine behind Codex, OpenAI's coding agent, which now has around 4 million developers using it weekly. But OpenAI is framing this as much broader than just coding. They are talking about "general digital work tasks," scientific hypothesis generation, and autonomous multistep work done without human guidance. That last part is the one worth sitting with.

Greg Brockman chose the word "enable" carefully during the press call, and that framing matters. This is not just a better chatbot; it is infrastructure for agents that can operate a computer independently, and GPT-5.5's 78.7% on OSWorld-Verified, the benchmark that measures exactly that, backs the framing up. Anthropic's not-yet-released Mythos model is reportedly finished, so the competitive pressure behind this release is real.

SO WHAT

If your job involves any kind of technical workflow, research process, or repetitive digital work, the tools that are about to land on your desk are going to be meaningfully more capable than what you used six months ago. My main AI assist is Claude Code, but I will spend 30 minutes this week actually testing Codex on a real task from my current workload, and I will report back.


02

Meta cuts 10% of jobs, or 8,000 employees

Hacker News →
Career & skills + Money & markets

Yet another round of job cuts.

Meta is cutting 10% of its workforce, which works out to around 8,000 people. On top of that, the company is walking away from 6,000 open roles it was planning to fill. The cuts kick off May 20, per an internal memo that Bloomberg got its hands on. So if you know anyone at Meta, this is not a great week for them.

The official line from Meta's chief people officer is that this is about efficiency and "offsetting other investments." Read that last part carefully. Meta is spending enormous amounts on AI infrastructure, data centers, and the kind of long horizon bets that do not pay off next quarter. Someone has to fund that, and right now the answer is headcount reduction across teams that leadership has decided are no longer the priority.

And the 6,000 unfilled roles are disappearing entirely. That is a deliberate restructuring of what the company thinks it needs.

SO WHAT

When a company that size closes off that many openings in one move, it tells you something about where the whole industry is recalibrating. Fewer generalist roles, more targeted bets on specific technical capabilities.


03

Anthropic traces Claude Code quality complaints to three separate bugs, all now fixed

Hacker News →
Tech shifts

Anthropic published a postmortem on Wednesday explaining why some Claude Code users felt the tool had gotten worse over the past month. The answer turned out to be three separate bugs affecting Claude Code, the Claude Agent SDK, and Claude Cowork. Notably, the API itself was never impacted, which means if you were using Claude through the API directly, you would not have noticed anything. The issues were all resolved as of April 20 in version 2.1.116.

What makes this worth reading is less the bugs themselves and more the fact that Anthropic published a detailed engineering postmortem at all. In a market where every AI company is racing to ship faster, publicly walking through what broke, why it broke, and what systemic changes they are making to prevent recurrence is a deliberate trust signal. It is also an implicit acknowledgment that the "vibes-based" quality complaints that circulate on social media and Hacker News are something they now have to take seriously as a product signal, not just noise.

The broader context here is that AI tooling is moving from "impressive demo" to "daily production dependency" for a growing number of developers. When your coding agent degrades, it is not an interesting benchmark debate. It is a workflow disruption. The companies that build durable trust through transparency when things go wrong will hold users longer than the ones that just chase the next benchmark number.

SO WHAT

If you rely on any AI coding tool daily, you should treat version pinning and changelog monitoring the same way you treat any other critical dependency in your stack. Check which version of Claude Code (or whatever AI coding tool you use) you are currently running, and set up a way to track release notes so you are not discovering regressions through vibes alone.
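If you install your coding agent through npm, checking and pinning a version takes two commands. A minimal sketch, assuming Claude Code's npm distribution (the package name is an assumption; swap in whatever your tool actually ships under, and note that 2.1.116 is the fixed release Anthropic cites above):

```shell
# Check which version you are currently running
claude --version

# Pin to a known-good release instead of floating on "latest"
# (package name is an assumption; 2.1.116 is the fixed version from the postmortem)
npm install -g @anthropic-ai/claude-code@2.1.116
```

For the changelog half, the lowest-effort option is usually watching the tool's release feed (most GitHub projects expose one) so regressions surface in your inbox rather than in your workflow.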