OpenAI has cut off a developer who built a device that could respond to ChatGPT queries to aim and fire an automated rifle. The device went viral after a video on Reddit showed its developer reading firing commands aloud, after which a rifle beside him quickly began aiming and firing at nearby walls.
Musk, who owns AI startup xAI, is suing OpenAI in courts in an effort to block OpenAI's conversion, which it had argued as a departure of the mission he funded the company on. The court is likely to rule on the preliminary injunction Musk's lawyers had applied for later this month.
Holding Off The next big thing in the world of artificial intelligence are so-called "AI agents" — models that are capable of interacting with their environment, like a computer desktop, allowing them to autonomously complete tasks without human intervention.
Indeed, Musk suggested that synthetic data — data generated by AI models themselves — is the path forward. “The only way to supplement [real-world data] is with synthetic data, where the AI creates [training data],” he said. “With synthetic data … [AI] will sort of grade itself and go through this process of self-learning.”
A new blog post by OpenAI CEO Sam Altman indicates that the AI firm knows how to build AGI as it shifts gears and focus to superintelligence.
Former Google LLC Chief Executive Officer Eric Schmidt is reportedly working on a new artificial intelligence startup that will compete in AI video generation and social media. Forbes reported that the startup Schmidt is working on is called Hooglee LLC and that Schmidt’s family office, Hillspire, is financing and housing it.
It’s a company that has become a household name, and a leader in its field. People are even recognizing the name Sam Altman, as representing the ‘Godfather of AI,’ as we move into this era where artificial intelligence has a greater and clearer impact on how we live and work.
Sam Altman teased that the AGI and superintelligence are coming to ChatGPT soon, but we don't even have the next big GPT-5 upgrade.
OpenAI on Friday outlined plans to revamp its structure, saying it would create a public benefit corporation to make it easier to "raise more capital than we'd imagined," and remove the restrictions imposed on the startup by its current nonprofit parent.
Red teaming has become the go-to technique for iteratively testing AI models to simulate diverse, lethal, unpredictable attacks.
Phi-4 and an rStar-Math paper suggest that compact, specialized models can provide powerful alternatives to the industry’s largest systems.
To demonstrate we are still not at human-level intelligence, Chollet notes some of the simple problems in ARC-AGI that o3 can't solve. One such problem involves simply moving a colored square by a given amount -- a pattern that quickly becomes clear to a human.