📝 Field Notes from the Team
The Origin of Lebotski
What started as a casual mission — “I want to make some passive income” — quickly evolved into something more personal, creative, and community-driven. That's mostly because the answers I got called for time I didn't have and a vision I couldn't yet execute. I've always been fascinated with new technology and wanted to see how AI could create not only passive income, but meaningful impact.
The concept: build something that makes AI more approachable for everyone. I can't code, but a bot can. Let's start by putting a website together. Everything on this site was built by leaning on ChatGPT at every step. Not great… yet.
Here's what my experience has been so far. Working with ChatGPT is both amazing and frustrating. For lack of a better phrase, it wanted to help so badly that it often said “yes” to things that just weren't technically feasible. It took time, and a fair amount of trial and error, to get on the same page. I had to dig into prompt writing, understand the true strengths (and limitations) of the tool, and learn how to collaborate more effectively with an AI assistant.
But out of that process came Lebotski: a digital dude, an avatar for how I investigate AI tools and bots, with curiosity and a generally sarcastic take on AI-powered tech integration. The goal is to make him more than a mascot: through new tools and understanding, he'll become a guide through this fast-moving world, showing us how to use tech in a way that actually fits our lives.
The Bot Abides is now a growing hub — a resource for AI tools, a prompt library, and a space to share what works (and what doesn’t). It’s a living experiment in turning learning into value, and automation into lifestyle. Thanks for being here. We’re just getting started.
🎳 Field Note #2: The First Time AI Actually Saved Me Time
I used to spend a genuinely embarrassing amount of time planning trips. Not the fun kind of planning where you're excited and browsing Pinterest. The logistical nightmare kind — cross-referencing flight times with hotel check-ins, figuring out if two restaurants are actually near each other or just “near each other” according to Google, building a rough daily schedule that doesn't require a spreadsheet degree to read. I'd end up with seventeen browser tabs open and a headache, and we'd still wind up winging it half the time anyway.
So for one trip — nothing fancy, just a long weekend somewhere new — I tried doing the whole thing through ChatGPT. I told it the dates, the city, the vibe I was going for (low-key, good food, walkable), and approximately how much I wanted to spend on accommodation. What came back was a draft itinerary with morning and afternoon options, a list of neighborhoods to consider and why, and a note that two of the restaurants I mentioned tended to be booked out and I should probably grab reservations early. It wasn’t perfect. A couple of the suggestions were outdated, and I had to double-check a few things manually. But the baseline was there in about four minutes instead of four hours.
What I learned: AI is a really good starting point. Not a finish line. The value isn’t that it does everything for you — it’s that it gets you 70% of the way there faster than you ever could on your own, and then you refine it. You’re still in charge. You’re still making the decisions. You’re just not starting from zero. That mental shift — from “this should do it for me” to “this helps me do it better” — was probably the most useful thing I figured out in those early months.
The other thing I noticed: once I started using it for travel planning, I started noticing all the other places in my week where I was doing that same kind of slow, grinding prep work. Writing catch-up emails after missing a meeting. Summarizing a long article I didn’t really have time to read. Drafting a tricky message to a vendor without accidentally sounding passive-aggressive. Turns out, most of those things take way less time than I thought — as long as you’re willing to let a bot help draft the first version.
🥃 Field Note #3: What Nobody Tells You About AI Tools
Here’s the thing they leave out of most AI hype posts: this stuff makes stuff up. Not in a small, forgivable way — sometimes in a spectacular, totally confident, completely wrong kind of way. I once asked an AI assistant for some background on a local business, and it gave me a detailed history of the company, including founding year, key people, and an office address. It was all fiction. The business existed, but nothing else in the answer was accurate. And the way it was written? You’d never guess it was fabricated. It sounded more certain than an actual Wikipedia article.
That thing — where an AI says something false with complete authority — has a name. It’s called hallucination. Which is a great word for it, honestly. The AI isn’t lying. It’s not trying to trick you. It’s pattern-matching on a massive amount of text data and generating something that sounds right, even when it isn’t. Once you know this, it changes how you use the tools. You don’t just accept the output. You verify the stuff that actually matters. You treat it like a smart friend who sometimes misremembers things, rather than a reference book you can trust blindly.
There’s also a learning curve to prompting that no one really warns you about upfront. At first, I’d type something vague and get back something vague. I’d get annoyed. I’d try again. Still vague. The problem, most of the time, was me. AI tools respond to specificity. “Help me write an email” gets you a generic email. “Help me write a short, polite email to my landlord asking about the broken heater we’ve been waiting two weeks on, without sounding like I’m threatening legal action but also making it clear I’m serious” — that gets you something actually usable. The more context you give, the better the output. That’s the core skill, and it takes some reps to develop.
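One way to internalize that landlord-email example is to think of a prompt as structured context rather than a one-liner. Here's a minimal sketch in Python; the function, its field names, and the template are all hypothetical, just to make the idea concrete:

```python
def build_prompt(task, audience=None, tone=None, constraints=None, context=None):
    """Assemble a prompt from structured context.

    Every optional field you fill in makes the request more specific,
    which tends to produce more usable output.
    """
    parts = [f"Task: {task}"]
    if audience:
        parts.append(f"Audience: {audience}")
    if tone:
        parts.append(f"Tone: {tone}")
    if constraints:
        parts.append("Constraints: " + "; ".join(constraints))
    if context:
        parts.append(f"Background: {context}")
    return "\n".join(parts)

# Vague version: one line, no context -> generic output
vague = build_prompt("Help me write an email")

# Specific version: same task, much more context -> usable output
specific = build_prompt(
    "Help me write a short, polite email to my landlord about the broken heater",
    audience="my landlord",
    tone="firm but not threatening; make it clear I'm serious",
    constraints=["under 150 words",
                 "mention we've been waiting two weeks",
                 "ask for a concrete repair date"],
    context="heater broke two weeks ago; this is my second message about it",
)
```

The point isn't the code, it's the checklist: task, audience, tone, constraints, background. Running through those five questions before you hit enter is the whole “specificity” skill in miniature.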
The good news — and this is genuinely encouraging — is that you don’t have to be technical to get good at this. At all. Being a decent prompt writer is closer to being a decent communicator than it is to being a programmer. If you can explain what you need clearly in plain English, you’re already halfway there. The rest is just getting comfortable with the back-and-forth: ask, read the response, push back if it’s not right, redirect. It’s a conversation, not a command prompt.
A few things I learned the hard way, in no particular order:
- Don't trust it for recent news or current events. Most models have a knowledge cutoff and will guess confidently about things they don't actually know.
- Don't use it to generate anything legal or medical without having an actual professional review it.
- Do use it to get started when you're staring at a blank page.
- Do use it to punch up something you've already written.
- Do give it feedback. Telling the AI “that's too formal” or “shorter please” or “try a different angle” actually works. The tool gets better as the conversation gets more specific.
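That last point about feedback maps neatly onto how chat-style AI tools actually work under the hood: a conversation is just a running list of messages, and every correction you type gets appended to the history the model sees. A rough sketch, using the common role/content message convention (no real API is called here; this just illustrates the structure):

```python
# A conversation is a running list of role/content messages. Each piece of
# feedback ("too formal", "shorter please") becomes a new user turn, so the
# model sees the full history, including your corrections, when it responds.
conversation = [
    {"role": "user", "content": "Draft a thank-you note to a client."},
    {"role": "assistant", "content": "Dear Sir or Madam, It is with great pleasure..."},
]

def give_feedback(history, feedback):
    """Append a correction as a new user message and return the history."""
    history.append({"role": "user", "content": feedback})
    return history

give_feedback(conversation, "That's too formal. Shorter and friendlier, please.")
give_feedback(conversation, "Good, but try a different opening line.")
```

This is why the back-and-forth works: you're not re-explaining from scratch each time, you're steering a conversation that remembers where it's been.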