
OpenAI just dropped the Codex app for Mac

On February 2, 2026, OpenAI released a dedicated Codex desktop app for macOS (currently Apple Silicon only). (OpenAI)


This update is a big deal not because “AI can write code” (that ship sailed), but because OpenAI is productizing the agent supervisor workflow — the boring, operational parts that determine whether agentic coding is usable in real teams. (TechCrunch)

What shipped (facts, not vibes)

1) A real multi-agent workspace (threads + projects)

The app organizes work into separate threads by project, so you can run multiple tasks simultaneously and switch context cleanly. (OpenAI)

2) Built-in worktrees (so parallel work doesn’t trash your repo)

Worktrees let agents operate on isolated working copies/branches, reducing conflicts and making review predictable. OpenAI highlights this directly as a first-class feature of the app. (OpenAI)
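The mechanics underneath are plain git. A minimal sketch of how worktrees give each agent an isolated checkout (the repo name and branch names here are illustrative, not anything the app exposes):

```shell
# Each worktree is a separate checkout sharing one .git object store,
# so two agents can edit different branches without touching each other.
set -e
git init -q demo && cd demo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"
git worktree add -b agent-a ../agent-a >/dev/null   # agent A's isolated checkout
git worktree add -b agent-b ../agent-b >/dev/null   # agent B's isolated checkout
git worktree list   # lists all three checkouts
```

Because each agent commits to its own branch in its own directory, review stays a normal branch-by-branch diff instead of a pile of conflicting edits in one working copy.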

3) Skills (tool-aware workflows) — including Figma, Linear, and cloud deploy

OpenAI is bundling and promoting Skills as the “make agents actually useful” layer: packaged instructions/resources/scripts that standardize repeatable workflows. (OpenAI)


Concrete examples OpenAI lists:

  • Implement designs from Figma → fetch design context/assets/screenshots and translate into UI code. (OpenAI)
  • Manage project work in Linear (triage, releases, workload). (OpenAI)
  • Deploy to cloud hosts such as Vercel, Cloudflare, Netlify, and Render. (OpenAI)

There’s also an official public “explore” gallery showing example commands like $figma-implement-design and $vercel-deploy. (OpenAI Developers)
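To make “packaged instructions/resources/scripts” concrete, here is a hedged sketch of what a minimal skill folder could look like. The layout is inferred from OpenAI’s description; the folder name, file names, and metadata fields are illustrative, not the official spec:

```shell
# Build an illustrative skill folder: one instruction file plus a helper script.
# (Names and structure are assumptions for illustration only.)
mkdir -p my-deploy-skill/scripts
cat > my-deploy-skill/SKILL.md <<'EOF'
---
name: my-deploy
description: Build the current project and deploy it to a hosting provider.
---
1. Run scripts/build.sh to produce a production build.
2. Deploy the build output and report the preview URL.
EOF
cat > my-deploy-skill/scripts/build.sh <<'EOF'
#!/bin/sh
pnpm install && pnpm build
EOF
ls -R my-deploy-skill
```

The point of the packaging is repeatability: the agent gets the same instructions and the same scripts every time the skill is invoked, rather than re-deriving the workflow from a prompt.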

4) Automations (scheduled agent runs with a review queue)

The app supports scheduled recurring runs (think: “summarize CI failures daily” or “draft release notes weekly”), with results landing in an inbox/review flow. (OpenAI Developers)

5) Access + rate limits (important operational detail)

OpenAI says Codex is included in ChatGPT Plus/Pro/Business/Enterprise/Edu — and temporarily available to Free + Go, while paid plan rate limits are doubled “everywhere you use Codex” (app/CLI/IDE/cloud). (OpenAI)

Evidence: this is not just a “new UI”; it’s OpenAI trying to win the coding race

Reuters frames the launch as a competitive move: OpenAI wants momentum (and customers) in AI coding, where rivals have been strong. It highlights that the app is meant to let users manage multiple agents over longer periods, not just autocomplete. (Reuters)

OpenAI leadership is also publicly “all-in” on the direction. Greg Brockman posted that after using the Codex app, “going back to the terminal has felt like going back in time,” calling it an “agent-native interface for building.” (Techmeme)

Integrations, with receipts

Linear: you can delegate work from issues

Linear’s own integration page describes workflow like: assign/mention @Codex in an issue, Codex spins up a cloud agent, posts updates, and links back to a reviewable result/PR flow. (Linear)

Vercel: OpenAI’s official skill library includes deploy helpers

OpenAI’s public skills repository includes a “vercel-deploy” curated skill (with scripts and a defined workflow). (GitHub)

“Built-in safeguards” — what that really means

OpenAI’s own positioning is: agents should work, but you should be able to supervise safely.

In practice, the safeguards are:

  • Isolation (worktrees; separate threads)
  • Review-first (diffs inside the thread)
  • Configurable guardrails (project guidance, environments, and permissions depending on how you run Codex)

Two implementation details that matter in real teams:

1) AGENTS.md as a repeatability + safety primitive

OpenAI documents that Codex reads AGENTS.md before doing work, letting you define expectations and commands. (OpenAI Developers) This isn’t just nice documentation: emerging research suggests AGENTS.md can reduce runtime and token use while keeping task completion comparable (a study across multiple repos/PRs). (arXiv)

2) Cloud environments as a control plane

For cloud tasks, OpenAI documents environments as the way to control dependencies/tools/variables during execution. (OpenAI Developers)
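In practice that usually means something like a setup script plus pinned variables. A hedged sketch (the specifics are illustrative; the docs describe environments as controlling dependencies, tools, and variables, not this exact script):

```shell
# Hypothetical environment setup script for a cloud task.
set -eu
export NODE_ENV=test                 # variables the agent should see
command -v pnpm >/dev/null || corepack enable   # ensure the package manager exists
pnpm install --frozen-lockfile       # pin dependencies before the agent runs
```

Treat this as a control plane: whatever the script installs and exports is the whole world the agent can touch during execution.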

Real-world developer feedback: speed + UI praise, but resource usage complaints are real

The reports of high CPU use are not imaginary — there are open GitHub issues documenting:

  • codex app-server hitting ~100% CPU on Apple Silicon when opening the IDE panel, with slow init and stuck prompts. (GitHub)
  • CPU spikes reported at 300–350% in some macOS scenarios (extension-related reports). (GitHub)
  • A newer report claiming unexpectedly high GPU usage (70–90%) on macOS. (GitHub)

Also important: OpenAI’s own changelog notes they reduced high CPU usage in some collaboration flows by removing busy-waiting on subagents — so they’re actively patching performance regressions. (OpenAI Developers)

Practical takeaway: treat this like a v1 desktop agent product — powerful, but you should monitor resource use and keep the app/extension updated.

How I’m using the Codex app

Good tasks:

  • “Refactor X module but preserve public API”
  • “Add tests for Y behavior + fix flaky tests”
  • “Implement this Figma screen (component library stays consistent)”
  • “Draft release notes + summarize merged PRs”

Why: these map well to isolated diffs + review workflows. (OpenAI)

Make a repo-level contract with AGENTS.md

Example starter:

# AGENTS.md
## Commands
- Install: pnpm install
- Lint: pnpm lint
- Test: pnpm test
- Typecheck: pnpm typecheck
## Rules
- Prefer minimal diffs.
- Do not change public APIs without asking.
- Add/adjust tests for every behavior change.
- Never read or print secrets from .env.

This aligns directly with OpenAI’s documented behavior for AGENTS.md. (OpenAI Developers)

Delegate from Linear when you want “issue → PR” automation

If your team lives in Linear, the integration model (assign/mention Codex → updates → review link) is exactly the “agent supervisor” workflow most teams actually need. (Linear)

Bottom line

The Codex Mac app is OpenAI turning agentic coding into a managed system: parallel threads, isolation via worktrees, reusable tool-aware skills, and scheduled automations — plus docs that strongly suggest they expect teams to adopt repo-level “agent contracts” like AGENTS.md. (OpenAI)

It’s early and there are legitimate performance complaints, but the direction is clear: the future isn’t one AI assistant in your editor — it’s you supervising a small fleet of agents with a review queue. (Reuters)