China’s military use of AI spurs latest US chip export controls, analysts say: Nikkei reports lessons from Ukraine and Israel show rapid escalation in the speed and scale of combat.
ElevenLabs’ AI voice generation ‘very likely’ used in a Russian influence operation: TC reports one recent campaign was “very likely” helped by commercial AI voice generation products, including tech publicly released by the hot startup ElevenLabs, according to a recent report from Massachusetts-based threat intelligence company Recorded Future.
+ The report describes a Russian-tied campaign designed to undermine Europe’s support for Ukraine, dubbed “Operation Undercut,” that prominently used AI-generated voiceovers on fake or misleading “news” videos.
AI weapons and the dangerous illusion of human control: America must let autonomous systems operate more freely in war. Sebastian Elbaum + Jonathan Panter
FTC: President-elect Donald Trump tapped Federal Trade Commissioner Andrew Ferguson to lead the consumer protection and antitrust agency. Trump also selected Mark Meador, a former aide to Sen. Mike Lee (R-UT), to be confirmed as the third Republican on the FTC.
The criminal’s ‘go-to cryptocurrency’ has a new friend in the White House: Howard Lutnick has defended the stablecoin company, which has been used by gangs and US adversaries. FT
How tech's right-wing elite made 'debanking' claims into a political rallying point: Highly influential figures in tech, including Elon Musk and Marc Andreessen, allege the cryptocurrency industry is a victim of banking discrimination. NBC News
New: Launch of ETO AGORA (Emerging Technology Observatory - AI Governance and Regulatory Archive): A living collection of AI-relevant laws, regulations, standards, and other governance documents from the United States and worldwide. Updated regularly, AGORA includes summaries, document text, thematic tags, and filters to help you quickly discover and analyze key developments in AI governance. Access here.
AI thinks differently than people do. Here’s why that matters. Generative AI isn’t the strategic oracle many say it is. Like any other form of AI, it is a mirror that reflects patterns, trends, and decisions of the past. It cannot reliably break new ground or generate truly novel solutions given that it relies on pre-existing data and learned probabilities. Teppo Felin + Matthias Holweg
Microsoft’s Mustafa Suleyman hires ex-DeepMind staff for AI health unit: FT reports the rival companies are racing to create lucrative applications from cutting-edge technology.
What do the gods of generative AI have in store for 2025? OpenAI and Google have unveiled their next generation of products. Economist
Adobe fell in extended trading after giving a disappointing annual sales outlook, underscoring anxieties that the creative software company may lose business to emerging artificial intelligence-based startups.
Google introduced Gemini 2.0, which the company says is twice as fast as its predecessor and more powerful than the larger "pro" version of Gemini 1.5.
Google launched Gemini 2.0, its new AI model for practically everything: The Verge reports Gemini 2.0 can generate images and audio, is faster and cheaper to run, and is meant to make AI agents possible.
Google unveils AI agent that can use websites on its own: The experimental tool can browse spreadsheets, shopping sites and other services, before taking action on behalf of the computer user. NYT
Google rolls out faster Gemini AI model to power agents: Company expects AI assistants will follow its users around the web. Bloomberg
Google races to bring AI-powered ‘agents’ to consumers: FT reports the tech group unveils Gemini upgrades as it battles Apple and OpenAI in making practical AI assistants for the masses.
Google’s new AI projects aren’t ready for the masses yet. Good! Google DeepMind is showing off an AI assistant that sees and a Chrome extension that can browse the web on its own. They’re for testing purposes only—and that makes sense. FC
The GPT era is already ending. The Atlantic
Their job is to push computers toward AI doom: It’s largely up to companies to determine whether their AI is capable of superhuman harm. At Anthropic, the Frontier Red Team looks for the danger zone. WSJ
Character.AI is being sued for encouraging kids to self-harm: FC reports the lawsuit alleges that Character.AI “poses a clear and present danger to American youth.”
Get ready for ‘long thinking,’ AI’s next leap forward: A new generation of AI models will take its time to reason, providing more reliable answers to increasingly complex questions. Steven Rosenbush
Enjoy the ride + plan accordingly.
-Marc