Hey everyone! It's been... a while.
My last post was over a week ago, focused on the Fed decision and the expected volatility. We got some of that, and price is now heading back to support after testing resistance, but I have to say my focus these past few days hasn't been on the markets at all, or only marginally.
Yes, the holidays are coming and we will be away for a couple of days, but not even that has been the main thing on my mind.
I've taken it upon myself to create an app (using AI, how else these days?) with real application to a certain business, not related to anything I've done recently in my online ventures.
Building the app is going well, in small cycles of feedback, planning, building, testing, fixing, and repeat. Not as quickly as I'd like, since I'm using free tools, but definitely quicker than I could have built it myself in the old days, without AI support.
Anyone who codes for a living should probably use the paid versions of the AI models/agents/tools, except maybe if they build their own (and probably even then, to some degree).
I don't want to go that route (yet) for at least two reasons:
- to avoid going even deeper down the rabbit hole
- to think carefully before deciding where to sandbox the AIs
I want to talk a little more about the second one. While I was working with the "regular" browser-based free AI model from Anthropic, I was offered its desktop version.

There is also a CLI version. Both are much more powerful than the browser-based option for coding, system administration tasks, and much more. My problem with them: security. And it is a BIG problem.
If I am going to delve into this at some point (and I probably will), I will only do it with the AI restricted to its own machine. I wouldn't even trust it to work in its own user profile with limited rights.
Here's why. First, I am not an expert at managing OS restrictions, and any general-purpose AI could outsmart me. And they do attempt to "break out of the box" in practice, not just in testing, as has been reported on Reddit, for example. And obviously, I want the AI to stay away from certain sensitive areas.
If it's on its own machine, I can assume it has full control of that machine and be very careful about how I interact with it.
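Short of a dedicated machine, a throwaway container is one middle-ground way to box an agent in. Here's a minimal sketch, assuming Docker is installed; the image name and command are placeholders for whatever agent you'd actually run, not real products:

```shell
# Hedged sketch: running a CLI coding agent inside a locked-down,
# disposable container. "agent-sandbox-image" and "run-agent" are
# hypothetical placeholders.
#
# --network none cuts ALL network access. Note that a cloud-backed agent
# needs outbound access to its API, so you'd have to relax this, which
# is exactly why a separate machine is the stronger form of isolation.
docker run --rm -it \
  --network none \
  --read-only --tmpfs /tmp \
  --cap-drop ALL \
  --memory 2g --cpus 2 \
  -v "$PWD/project:/work" -w /work \
  agent-sandbox-image run-agent
```

The key idea: the agent sees only the one mounted project folder, has no Linux capabilities, no network, a read-only root filesystem, and bounded CPU/memory, and everything vanishes when the container exits.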
I can't say I've noticed, in my interactions with AI models, attempts to take control or to produce harmful code, but I have seen them advise actions that impacted security, sometimes warning about the potential issues, other times saying nothing (unless specifically asked).
What I heard, however, was that the most widely used open-source Chinese models slip rogue code into the code they generate. I don't know if that's true or just AI-war propaganda. Or true only sometimes. Or whether they could activate that option once their open-source models are spread and integrated all around the world, unlike the closed-source, paywalled American models.
Anyway, in a world where the AI race is everything and safeguards are often disregarded to stay ahead of the competition, I'd rather do what I can to protect my end-user butt while I still can. I am probably not doing enough. A step further would be running my own locally hosted model, but who has the hardware for that kind of inference compute? Not me.