
Sunday, January 11, 2026

AI Is Everywhere Now - Fake Feeds - Lost Jobs and the TERMINATOR Question

I went out for what’s usually a simple walk-and-talk… except this time I was riding my VESC MagWheel OneWheel setup instead, since it’s a lot of fun and way more interesting to watch. Tight corners, a bit of construction, people and dogs coming the other way, one of those rides where you’re paying attention to everything at once. Which, honestly, leads right into this discussion: that’s exactly what it feels like trying to live online right now.

Everything is moving fast. Everything is noisy. And now, whether you like it or not, AI is in the middle of it.

In this ride video, I wanted to cover three basic topics without going off the rails:

  1. how AI is changing what we’re consuming (videos, photos, “viral” clips)

  2. how AI is going to affect jobs and the economy

  3. and the big one: are we walking toward a real “Terminator-style” scenario where AI becomes something we can’t control?

I’m not an AI expert. I’m just paying attention, asking questions, and trying to think through the logic of where this goes.

1) AI content is taking over the feed - and it’s getting ridiculous:

You’ve probably noticed it too. You start watching a video and within 10 seconds you realize: this isn’t real. The voices are off. The movements are weird. The lighting doesn’t make sense. Or it’s something that would be so incredible in real life… that it almost has to be fake.

That’s the part that bugs me. Not because I hate technology, but because it’s turning the “real world” feed into a fiction feed. If I want fiction, I’ll watch a movie or read a book. But when I’m doomscrolling (and yes, we all do it sometimes), I want to see real things, real people, real events, real moments. Not some AI-generated clip designed to trigger a reaction and keep me watching.

There’s also a deeper issue: the more people watch AI content, the more the algorithm feeds it to everyone. The platforms don’t care if it’s real; they care if it performs. If the metric says “people watched,” then the platform learns: “Give them more of that.”

So if you’re like me and you don’t want your feed turning into an AI theme park, the only real weapon you have is your attention.

  • The moment you spot obvious AI, scroll away.

  • Use the “don’t recommend” or “not interested” options when you can.

  • Stop rewarding fake content with your watch time.

I honestly hope platforms eventually give us a setting: No AI-generated video. No AI-generated photos. A filter. An option. Something. Because right now it’s blending into everything, and the average person has to waste mental energy just figuring out what’s real.

And that’s not a small thing. A society that can’t tell real from fake is a society that’s easy to manipulate.

2) Jobs, money, and purpose: what happens when AI eats the desk work?

This part is where it stops being annoying and starts being serious.

AI is already replacing chunks of jobs, especially anything that looks like:

  • writing

  • editing

  • basic design work

  • customer support

  • admin tasks

  • data entry

  • scheduling

  • entry-level programming and web tasks

And it’s not because the AI is perfect. It’s because it’s “good enough,” fast, and cheap.

A lot of desk jobs are basically information work: take input, process it, output something useful. That’s exactly what AI is designed to do. Even fields you’d think were untouchable, like medicine, are already being reshaped. Not necessarily replacing doctors entirely, but doing screening, analysis, triage, documentation, pattern recognition… while a smaller number of human professionals supervise.

That could be a good thing in places that have shortages. But zoom out and ask the bigger question: what happens when the scale gets extreme?

If we get to a world where 80–90% of traditional “information jobs” disappear or shrink dramatically, you run into a math problem:

  • People need income to buy products and pay bills.

  • Governments can’t “tax” money that people don’t earn.

  • Companies can’t keep selling products if consumers can’t afford anything.

So where does the money come from?

Some people talk about universal basic income, government support, or corporate-funded solutions. Maybe something like that becomes reality. But it still doesn’t answer the human side of the equation: people don’t just need money, they need purpose. Most people do better mentally when they have a role, a skill, a reason to get up and contribute.

A future where huge numbers of people are “managed” with a check while living small, bored lives with no mission… that’s not a win. That’s a slow decline.

And for anyone thinking, “Well, trades are safe,” I mostly agree, for now. Plumbers, carpenters, mechanics, electricians… jobs where you need hands, creativity, and problem-solving in unpredictable situations. That’s harder to automate. But robotics is improving too. It might take longer, but it’s not off the table forever.

3) The “Terminator” question: not Hollywood - just incentives and lack of brakes:

This is where people either laugh it off or get uncomfortable. But if you strip away the movie imagery and just look at incentives, it gets real fast.

Right now, AI companies are in a race. Whoever has the best AI wins massive leverage:

  • military contracts

  • business dominance

  • intelligence advantages

  • economic advantage

  • social influence

That creates pressure to move fast, cut corners, and release more powerful systems before safety and regulation are mature. And the scary part is: regulation tends to move slowly, while tech moves fast.

Even if you never believe in a “killer robot” scenario, the risk isn’t only physical robots. It’s also:

  • automated cyberattacks

  • AI-driven propaganda and persuasion at scale

  • manipulation of markets

  • control of infrastructure through software

  • autonomous decision systems making high-stakes calls

And here’s the part that sticks in my mind: we may reach a point where AI is so integrated into everything (power grids, banking, communications, logistics, healthcare) that turning it off becomes impossible without crashing society.

If the systems running electricity, payments, shipping, and communication depend on AI… then “shutting it down” could mean:

  • no power

  • no commerce

  • no communication

  • no functioning infrastructure

Even if AI became dangerous, we might be locked in because the alternative is collapse.

That’s why I don’t think fear is the right response, but I do think seriousness is the right response. People should be talking about this openly. Governments should be building real guardrails. Companies should be pressured to prove safety, not just promise it.

Where I land on it:

I’m not “anti-AI.” I use AI tools for research and organizing ideas. It can help you learn faster, outline content, brainstorm, and tighten your thinking. Used responsibly, it’s useful.

What I’m against is:

  • AI replacing reality in our feeds without disclosure

  • AI stripping purpose and stability from society without a real plan

  • AI racing ahead of safety because money and power reward speed

If you’re watching this stuff unfold and you feel uneasy, I don’t think you’re crazy. I think you’re paying attention.

The best thing you can do is keep your eyes open, control what you feed your brain, build real skills that translate outside the “information-only” world, and push for transparency and guardrails wherever you can.

Because the future is coming either way, and it’s better to walk into it awake than sleepwalk into it distracted.

Watch in 2D

Watch in 3D