JW Consciousness Stream - 24 November 2025
This entry represents something like my stream of consciousness for Monday, 24 November 2025. It's basically a journal entry that I work on throughout the day and then don't go back to correct for errors.
Originally published at: https://deliverystack.net/2025/11/23/jw-consciousness-stream-24-november-2025/
Some topics in this post:
- Concerns about "Artificial Intelligence"
- Working with long LinkedIn threads
- How and why I use social media
- Why we're creating a future that nobody wants
- Converting images to coloring sheets
- Whether to discard life, and why
- My financial advice
- I got the pins out of my foot!
- Introducing the Superfile file manager
Well, I actually started this on Friday evening, but I don't expect to have time to write much over the weekend, as the girls don't have school.
I published this on Friday:
The following video raised some new concerns for me about AI, especially the part after about the 23:00 mark:
Here's a Google Gemini AI summary of that part:
- OpenAI is spending millions on cinematic ads targeting Gen Z to build an emotional connection and onboard users, despite the free service losing money per user.
- The true business model is based on collecting massive amounts of user data and establishing control, not just on subscriptions, in order to build infrastructure and dependence.
- AI companies collect user conversations for training, often without explicit consent, creating a "behavioral surplus" for surveillance capitalism.
- This conversation data is uniquely powerful for building accurate psychological profiles to predict and potentially modify human behavior.
- Research indicates a significant left-wing bias in mainstream AI models like OpenAI's, which can shift users' political views after only a few interactions.
- AI companies, including OpenAI and Google, have faced criticism for censorship and altering historical information/ethnicity in their models.
- OpenAI's CEO is launching Merge Labs, a brain-computer interface (BCI) startup specializing in non-invasive neural interfaces that aim for "read only" access to thoughts.
- OpenAI removed its military prohibition and is now working with the US Department of Defense and military contractors, alongside other major AI companies.
- AI CEOs have issued public, dramatic warnings about the existential risk of AI, but immediately accelerated development and lobbied against robust regulation.
- These safety warnings are argued to be a corporate strategy for regulatory capture, creating expensive compliance moats that only industry giants can afford.
- The overall pattern of AI development is framed as creating a dependency (the problem) on an AI "best friend" to solve the isolation Big Tech originally caused (the cure).
- The long-term risk involves the extraction of personal information, altered thinking patterns, and becoming dependent on AI for everyday decisions, regardless of a financial "bubble" popping.
On Saturday I published this:
On Sunday I published this:
Here's a depressing video I like to share that helps to explain why humankind will not address critical issues such as climate change and AI's negative impact on humanity:
I use this free tool to convert photographs, scans, and other images to coloring sheets for Wendy:
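For anyone curious about what a tool like that does under the hood, here's a minimal sketch of the general approach: detect edges, invert them, and threshold to black lines on a white page. This is not the linked tool's implementation, just an illustration; it assumes Python with the Pillow library installed, and the file names are placeholders.

```python
from PIL import Image, ImageFilter, ImageOps

def photo_to_coloring_sheet(src_path: str, dst_path: str) -> None:
    # Load the image and convert to grayscale.
    img = Image.open(src_path).convert("L")
    # Blur slightly so edge detection picks up shapes rather than noise.
    img = img.filter(ImageFilter.GaussianBlur(radius=1))
    # FIND_EDGES produces bright lines on a dark background.
    edges = img.filter(ImageFilter.FIND_EDGES)
    # Invert so the lines are dark on white, like a coloring page.
    sheet = ImageOps.invert(edges)
    # Threshold to crisp black lines and a clean white background.
    sheet = sheet.point(lambda p: 255 if p > 200 else 0)
    sheet.save(dst_path)

if __name__ == "__main__":
    photo_to_coloring_sheet("photo.jpg", "coloring_sheet.png")
```

Adjusting the blur radius and the threshold value changes how much detail survives as outlines, which matters if the photo is busy.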
I published a couple of things on Monday morning:
- https://deliverystack.net/2025/11/23/discard-life/
- https://deliverystack.net/2025/11/23/my-financial-advice/
I got the pins out of my foot! It still hurts, though.
And I published this Monday afternoon: