Weekly AI Roundup: Week 43, 2025
Man, another week's zipped by, and AI keeps flipping our whole digital world on its head. It's messing with how we surf the web and even how we whip up TV ads. And honestly, it's not just tweaking tools—it's shaking up the basic rules we all took for granted.
So here's the deal: this week mixed some wild breakthroughs with a few wake-up calls. We got AI models nailing the style of famous authors after just two books of training data. But then, the same tech flopped on something simple like reliable citations. Legal fights are ramping up too, with companies battling over who controls stuff in this AI-fueled mess. To me, the divide between what AI promises and what it actually delivers? It's never looked so obvious, and yeah, it's a bit frustrating.
The Great Web Infrastructure Reckoning
AI agents are throwing the whole internet into chaos, and it's kind of hilarious how it's happening. Stuff like Perplexity's Comet or Anthropic's Claude browser plugin? They're shifting us from just staring at web pages to letting AI do the heavy lifting—navigating, clicking, and finishing tasks on its own. The catch is, the web was built for people, not machines.
I think my own tests with these tools show just how messy it gets. Websites that humans breeze through turn into a nightmare for AI agents trying to grab useful info. It's like forcing a robot to wander around a city designed for walkers—possible, but everything feels wrong and breaks easy. And that mismatch? It's opening up big security holes.
Unlike regular browsers that just show you stuff, these AI ones can actually tinker with your online world. They remember details across sessions, click buttons, fill out forms, and hop between sites like it's nothing. If hackers get in, it's not just peeking at your data—they could take over your whole digital routine. Four attack methods are popping up: exploiting persistent memory, manipulating behavior across sites, harvesting credentials on autopilot, and hijacking sessions between platforms. It's scary, and I'm not sure we're ready for it.
The Citation Crisis in AI Search
While AI agents fumble with basic web stuff, AI search engines are dealing with their own trust issues. Researchers from Ruhr University Bochum and the Max Planck Institute did a deep dive, comparing Google's regular results with four AI search setups: Google AI Overview, Gemini 2.5 Flash with search, GPT-4o-Search, and GPT-4o with search tools. What they found? AI chatbots keep pointing to sketchy, less-known sources way more than old-school engines do.
OpenAI's pushing ChatGPT as this great workplace helper for digging through data, but man, the citations are still a mess. These LLMs rock at straightforward tasks with clear context, yet they bomb when juggling info from different places. Researchers call it "AI workslop"—those off-base, context-less answers that are already draining companies of time and morale, maybe costing millions. I think it's a real headache.
The problem runs deep: every LLM setup stumbles big-time with citations at scale. They spit out wrong details, skip key facts, or twist meanings entirely. In jobs where getting it right is crucial, this isn't just annoying—it could wreck things, and we're probably underestimating how often it happens.
Legal Battlegrounds: Content Ownership in the AI Era
Legal fights over AI and content are blowing up fast, and it's getting intense. Reddit slapped a lawsuit on Perplexity this week, claiming they're teaming up with other outfits to illegally grab Reddit stuff from Google search results. The specifics? Perplexity's dodging anti-scraping tricks that Google and Reddit poured money into.
Reddit's basically calling out the flaws in Perplexity's whole operation. They say Perplexity brags about being "the world's first answer engine" but doesn't do anything new—they just use other companies' language models to sift through Google's results. The big beef? Perplexity keeps running by sneaking Reddit content from those Google searches, and it's shady as hell.
Over in influencer world, AI-made videos are stirring up drama with unauthorized face knockoffs, leading to constant threats of lawsuits. Copyright cases have already hit the headlines, but likeness stuff? Not many have made it to court yet, probably because the rules are still all over the place. SAG-AFTRA's pushing the NO FAKES Act, but the whole system is playing catch-up, and I'm not convinced it'll fix everything overnight.
Creative Industries Face AI Disruption
The creative world is in the middle of an AI shakeup that's equal parts cool and creepy. A study from Stony Brook University and Columbia Law School had pros and three AI systems mimic writing from 50 big-name authors, like Nobel winner Han Kang or Booker champ Salman Rushdie. Here's the shocker: AI tuned on just two books churned out stuff that 159 people—including 28 experts—liked better than human imitations.
This isn't some cheap trick; it could totally upend copyright fights and those ongoing US lawsuits. If AI can copy literary styles with so little to go on, what happens to ideas of real authorship or original work? It's a question that keeps me up at night, honestly.
Advertising's jumping on board too. Mondelez, the folks behind Oreo, is gearing up to use generative AI for TV ads next year. They've sunk over $40 million into a video tool that slashes production costs in half. Jon Halvorson, their global senior VP of consumer experience, is all in on the savings, even with the backlash from AI ads like Coca-Cola's 2024 Christmas ones. But not gonna lie, that pushback might bite them.
Quick Hits
Google's latest Nest Cam now packs Gemini AI to cut down on annoying notifications, bumping up to 2K resolution with smarter spotting. Samsung and SoftBank inked an MoU to dive into AI for 6G networks, zeroing in on four areas like Large Telecom Models. Microsoft Copilot's rolling out "Groups" chats for up to 32 people, with features like long-term memory and wider integrations. And in Southeast Asia, Google's AI Ready ASEAN program is cranking up skill training to build regional AI buzz.
Trends and Patterns
Connecting the Dots
This week's tales show a pattern that's hard to ignore: AI's racing ahead while our setups, laws, and defenses lag behind. The web browsing headaches tie right into the citation screw-ups—they both come from shoving AI into spots it wasn't meant for. When Perplexity grabs Reddit content via Google, it's exploiting those same weak spots that leave agentic browsers open to attacks, and that feels like a ticking time bomb.
The creative chaos—from AI faking authors to Mondelez's ad plans—points to another link: AI's mimicking human flair faster than we can figure out ownership and realness. Reddit's beef with Perplexity isn't just about stealing content; it's about rethinking value in this AI-driven economy. These court battles might force AI outfits to overhaul their models, and who knows, it could ripple through everything. I think we're at a crossroads here.
We're smack in the middle of AI's fast growth crashing into our slower adjustments, and it's a wild ride. The tech is exploding forward while laws, security, and ethics creep along, mixing huge chances with real dangers. And honestly, this imbalance might be more trouble than we realize.
Next week, keep an eye on AI's legal twists—especially how companies counter Reddit's tough stance on scraping. The results could change the game for AI training, for better or worse. Plus, as businesses dive deeper into AI for key tasks, expect more "AI workslop" mishaps that highlight the gap between hype and reality. It's messy, but that's the truth.