They expanded their leaked-credential detection pipeline to cover the keys we reported, thereby proactively protecting real Google customers from threat actors exploiting their Gemini API keys. They also committed to fixing the root cause, though we haven't seen a concrete outcome yet.
Something similar is happening with AI agents. The bottleneck isn't model capability or compute. It's context. Models are smart enough. They're just forgetful. And filesystems, for all their simplicity, are an incredibly effective way to manage persistent context at the exact point where the agent runs — on the developer's machine, in their environment, with their data already there.
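The filesystem-as-memory idea can be sketched in a few lines. This is an illustration only, not any particular agent framework's API; the file name `agent_memory.json` and the helpers `remember`/`load_context` are hypothetical:

```python
import json
from pathlib import Path

# Hypothetical persistent-context store: the agent appends notes to a
# plain JSON file in its working directory, so state survives restarts.
MEMORY_FILE = Path("agent_memory.json")

def load_context() -> list[dict]:
    """Read previously saved notes, or start fresh if none exist."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def remember(note: str) -> None:
    """Append a note and write the whole list back to disk."""
    notes = load_context()
    notes.append({"note": note})
    MEMORY_FILE.write_text(json.dumps(notes, indent=2))

remember("user prefers TypeScript")
print(load_context()[-1]["note"])  # -> user prefers TypeScript
```

Because the store is just a file next to the developer's code, anything on the machine — the agent, the user, other tools — can read or edit it, which is exactly the property the paragraph above is pointing at.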
Further, OpenAI will now notify authorities if it detects “imminent and credible” threats in ChatGPT conversations, even if the user doesn’t reveal “a target, means, and timing of planned violence.” O’Leary explained that if the new rules had been in effect when the shooter’s account was banned in 2025, the company would have notified the police. OpenAI will also establish a point of contact for Canadian law enforcement so it can quickly share information with authorities when needed.