Hey there everyone.
If you can hear me here, go ahead and throw up a heart.
And who do we got on the geek side right now?
I can't hear anyone from there. Well, it looks like we're having some technical difficulties, everyone.
Thank you for your continued patience while we figure it out. Hi, can you hear me?
Twitter was not accepting my request.
I heard Stephanie there just a moment ago.
Yeah, you heard her many times in Echo.
Thank you for coming, Joey.
And we have longstanding OG geeks in the audience. Dave, Obi, Lexifix, Mojinder, or ZeroDuxMo, Golden, Kirin, Luxoray.
And I was on the other side of that geek thing.
I was here on time, but could not get my mic to work.
It's good to see all these faces here.
Yeah, well, they're out to see you.
Well, we're going to have a fun conversation.
We've got a lot coming on in this new AI world.
Personally, I'm terrified.
Why don't you get us started?
That kind of goes two ways, though, doesn't it?
It can either go... Well, I mean, just like anything.
It can either go really well, really poorly, or somewhere in the middle where it usually tends to land.
Why can't it be all of those?
I suppose that is the roller coaster of life, is it not?
No, I mean, you know, it's just like any tool.
It will always be used for many, many good things.
But if it's possible to use it for bad things, it will certainly be used for bad things by bad actors.
But anyway, it doesn't matter.
I think there need to be a lot more warnings just to alert those who are not apprised of how easy it is to fake things.
You can instantly generate a video with anyone's face, anyone's body, like anything, right?
And so these scams are just going to become more and more rampant.
And even at a mundane level, Stephanie and I must each get at least three or four very convincing phishing emails. Lately, I've been getting a lot of DocuSign ones.
And it's very easy to completely mimic everything.
You say, AI, make this X, Y, Z,
and it looks totally authentic.
You know, that's a great point.
You know, actually, one of the unfortunately better scams that I've seen: there are organizations that will set up fake companies, and they'll set up everything, fake website, fake interviewers, everything, all in an attempt to get onto your laptop and plant a dormant malware application that just sits there and waits until you get somewhere good.
It's just wild. Yep. No, I was just reading about this, both of you. The time to craft a phishing email has gone down from 16 minutes for a human to five minutes for a really sophisticated one by AI. And phishing and social engineering has sort of leveled off, they say, but now it's things like what Joey just said: something on your computer.
You're here to tell us exciting stuff.
Amazing, absolutely. We can have some fun.
Honestly, we have been having fun.
I've generated in the last, I'll just say, month alone, I've spent probably three grand on Google Gemini coding a platform the size of like an out-of-the-box Salesforce. We've got like 80 tables of architecture in the back end.
And it's just this massive platform that we've been able to build over the course of three months with just one person and three grand of tokens, right?
So it's just astonishing; that would have been a year, year and a half to build.
And we're able to accomplish it for, I guess, a total of $10,000 by the time we're done in three, four months.
Hey, Joey, I've got a job for you.
You and the rest of the world, John. And I know they're lining up for you. That's right.
They're lining up for Joey.
There's some really good people.
If you guys are looking for people to follow in agentics: so I'm a part of the Agentics Foundation; I lead the team here in Raleigh. The Agentics Foundation is led by Reuven Cohen. If you guys don't know who that is, I highly suggest that you find him on LinkedIn and follow him. His content is, by far, in my opinion, the best of the best. I mean, he's working at the enterprise level, and he has been doing agentics since before it was cool, for the last five to seven years, before this stuff really took off. He's arguably one of the thought leaders when it comes to anything agentic-AI related. So I strongly suggest him; every single day he gives these amazing little nuggets that make my day.
Let me give some background here, I suppose. Back in January... well, I've been a technologist for 15 years.
I worked for Salesforce, initially for actually their tech support.
I started off in tier one, worked my way up all the way into the product management team.
And from there, I realized I just really love technology and wanted to dive in.
I actually started my own consultancy and realized I could build a product on top of it.
And ended up building some product,
sold that to another company and realized
I've just really loved building in tech.
And so I haven't stopped since.
And the only thing that keeps changing
is the tech gets easier and easier to build.
And so now we're at this amazing place
where I kid you not, my code is throwaway.
I will build for like three, four days straight, and then I'll go show a customer what it was going to look like.
And they go, that's cool, but I'd love if you could just do X, Y, and Z.
And what took me three days to code, I now end up just throwing it in the trash and I
can rebuild the entire thing in a matter of a day now. Because I have all the documentation,
I've got all the requirements, and now it's quicker for you to throw it away and build it
from scratch than it is for you to edit, and cheaper, by the way, than it is to edit what exists. It is wild. Now that's not
always the case, but that's a very generalistic statement. But I equate it to like a 20x.
Like it takes me 20 times longer to edit code than it does for me to just start something new
from scratch. Partly you've got to look at somebody else's code and try to infer what the hell they're doing.
Well, and that depends on how large it is, too, to be clear.
Like, I mean, depending on the context window, I mean, using Google Gemini, we've got like
a million token context limits.
These things are going to change dramatically as, you know, companies like Weka come into play and they start building the capability
for using server farms as memory,
like live real-time memory farms.
And it's just going to be wild.
We're basically going to treat a server farm as memory and be able to do live, real-time querying on basically all the internet.
And just keep all this stuff in context here in the next year.
It's just going to be wild.
You're going to go from a million context token limit to 10, 20, 100 pretty damn quick.
I've yet to hear a programmer ever say that anybody else's code was worth editing.
Well, here's the thing. It's never my code anyways.
All it is is my potential architecture, my potential user experience that I'm hoping to get out of this thing.
But there's a lot of randomization involved.
There's a beautiful thing about that, but then there's a really frustrating thing about that.
I don't know how deep you've kind of gotten
into this vibe coding world,
but it's just pretty intense.
Oh, I should say for introducing you
now that people are here,
that your company, GainSolutions.ai, was the sponsor for Geek's
first bootcamp, which included Vibe Coding. So thank you for that.
Thank you very much. Yeah, absolutely. Our pleasure. I'm so thankful. And I love what
you guys are doing at Geek. And I think, you know, I couldn't be prouder of you as a team
and where you guys are headed. I think it's exactly
what the world needs, to be honest.
We love you for that, too.
So let's go back to this platform, the replacement of 20 people. Do you have your AI learn on your own code, which you improve so quickly?
Yeah, great question. So I don't want to say it's learning on our own code so much as it's just able to keep the context. Google Gemini has really kind of changed the game here.
As well as, okay, if you guys are coders,
you're going to want to use a platform called Roo Code.
It's very similar to Cursor.
Cursor is great for fine-tuning edits: if you want to make some edits to your code base, you have context yourself and want to give it manual context, maybe you're trying to be extremely budget conscious as well, and you don't want to make accidental changes that will impact the entirety of your system. I'm in a very different place, because I don't have the system solidified.
If you've got a productionized system, then it's a lot more difficult to use Roo, but you can definitely still do it. There's just a lot more code review that has to get done, because it's going to change quite a bit, versus something like Cursor, where you're saying, hey, I want to change X, Y, and Z.
Are these tools able to do QA very well?
I think Joey might have dropped off a little bit.
But that was my question, too.
Oh, yeah, he's trying to reconnect.
That's exactly where I was wondering, John.
By the way, if anyone wants to come on up and talk to Joey from a coding perspective, put your hand up, give a request for a speaker.
So I don't know if you can hear me, Joey, but we were just wondering, do these tools work very well with QA or is that still really a human-centric process?
So X is censoring you, Joey.
They're not letting you speak.
They're not promoting Grok here or whatever. So I think that the tool of choice was Roo something.
Well, until Joey gets back,
what sort of led me to be thinking in all of these terrible paranoid ways, you know, is again what we're trying to do. We've just worked on a system that we hope will prevent the capture of session cookies, or at least make their capture irrelevant. Session cookies are what HTTP, and HTTPS in particular, uses to make the request-and-response communications between a browser and a server stateful, so that we know you're logged into the bank and can go ahead and give you information, even though the packet request itself is not natively stateful. These cookies live in a place in the browser, but evidently it's not very secure: they can be captured in various ways, and then your session can be hijacked, passwords can be changed; there are all kinds of very clever tricks. So we've worked on a system that gets out of that and generates complete security without interfering with that transport layer. And I guess the broader thing here is that the combination of Geek ID and this new system
of Geek Session Cookies is that it makes things provable. And the concern with AI is that our
ability to discern what's true and not, our heuristics are just not up to the job
because AI learns how to get around our heuristics.
So we see an email, it looks good.
We see a video from our aunt and it looks like it's her.
So our human heuristics are completely outclassed. Going back to cryptography, that is a thing that AI cannot
fake because it's cryptographically secure. So that's the layer we're working on. We're
building so that, even though there are a lot of bad actors who are going to use AI very intelligently to attack us all (and to be clear, a lot of AI is good), you can still build this cryptographic layer
that is unfakeable and therefore is
a foundation for protection.
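The unfakeable layer John describes rests on standard message authentication. As a generic sketch only (not the actual Geek ID or Geek Session Cookie design, whose details aren't given here): a server can HMAC-sign session tokens so that a forged or tampered token is detectable, because producing a valid signature requires the server's secret key.

```python
import hashlib
import hmac
import secrets
import time

# Server-side secret; never sent to the browser. Without it, an attacker
# cannot mint or alter tokens that pass verification.
SERVER_KEY = secrets.token_bytes(32)

def issue_token(user_id: str, ttl_seconds: int = 3600) -> str:
    """Issue a session token the server can later verify as authentic."""
    payload = f"{user_id}:{int(time.time()) + ttl_seconds}"
    sig = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def verify_token(token: str) -> bool:
    """True only if the signature matches and the token has not expired."""
    payload, _, sig = token.rpartition(":")
    expected = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    _user, _, expiry = payload.partition(":")
    return hmac.compare_digest(sig, expected) and time.time() < int(expiry)
```

Note the limitation: signing alone does not stop a captured token from being replayed, which is exactly the hijacking problem described above. A system that makes capture irrelevant would additionally bind the session to something an attacker can't exfiltrate, such as a key held on the user's device.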
So let's hope Joey's back because I'm tired of hearing myself.
I could hear you guys the entire time anyway.
So yes, I did hear your question: can it do QA? Yes, it can. So Roo Code is a Visual Studio Code extension; it's very similar to something called Cline. Allegedly, it is safer than Cline, because Cline was just getting way too many updates and people weren't feeling like it was protected enough. So they did a fork, and now Roo is allegedly a more stable version.
Effectively, what you do is you just put in an API key for Claude or for Google, and you give it a task, and it runs it through all these different agents, a Supabase administrator and others, that I've configured using system prompts, just different system prompts, and giving them access to maybe MCP servers, things like that.
And the orchestrator mode just says, when do I use which agent?
And so it's this really, really,
it just automatically runs through all the tasks
and goes and builds out the entire application.
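The orchestrator-and-agents pattern Joey describes can be reduced to a small sketch. To be clear, the agent names, system prompts, and keyword routing below are illustrative stand-ins, not Roo Code's actual mode configuration; in practice the orchestrator is itself a model deciding which agent gets each subtask.

```python
# Role-specific system prompts, one per agent (illustrative examples).
AGENTS = {
    "architect": "You design database schemas and system architecture.",
    "db_admin": "You administer the Supabase backend: tables, policies, migrations.",
    "reviewer": "You review code for bugs and deviations from best practices.",
    "coder": "You implement features against the agreed architecture.",
}

def route(task: str) -> str:
    """Crude keyword routing; a real orchestrator asks the model to decide."""
    t = task.lower()
    if "schema" in t or "architecture" in t:
        return "architect"
    if "table" in t or "supabase" in t:
        return "db_admin"
    if "review" in t:
        return "reviewer"
    return "coder"

def dispatch(task: str) -> tuple[str, str]:
    """Return which agent handles the task and the system prompt it runs under."""
    agent = route(task)
    return agent, AGENTS[agent]
```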
It's Reuven Cohen who created this SPARC methodology,
and this methodology is effectively making life significantly easier
for people like myself that can just type in a simple task and it comes out with an entire
application that is tested and working, in theory. In practice, there are certainly some errors that occur along the way,
especially as the app gets bigger.
I would say if you're working on things that are,
I don't know, if you're working on an app that's less than 20,
30 tables big, if you will, and maybe has like 10 flows or something like that.
It's significantly simpler, right?
And that's something that can be kind of one shot, if you will.
But even with that, I'd probably end up starting those kinds of things on a platform
called Bolt. I believe it's Bolt.ai... let me double-check that before I give the wrong one. Let's see here. Yeah, Bolt.new, I'm sorry, Bolt.new. That platform... oh, go ahead, Stephanie.
Oh, no, I put my hand up to remind myself I have something to say later.
So Bolt.new is giving us access to be able to do the same thing,
but it's a lot easier for someone that doesn't have coding experience
And it does all the connections with Stripe and it does
the connections with Supabase so that you get a back-end,
you get a payment infrastructure,
and it integrates with GitHub for you.
Again, all of these things that if you're not a developer,
a site like Bolt.new or Lovable, I think it's lovable.ai; those are both great platforms if you're not a true coder, quote unquote.
And even if you are, honestly,
it at least gets you started significantly quicker
And the UI comes out a lot better
than it does on something like Roo,
unless, again, you've written great prompts, or you just go find some great prompts to clone from; the out-of-the-box ones are not gonna create great UI.
I see two very compelling arguments.
I'm not sure which I'm convinced by.
On the one hand, AI, both for the UI, UX,
and especially for QA, you would think that that's just a perfect task for it because it can
go down a logic tree very easily, look at all the various branches, and it looks like it would
be much quicker and much more thorough than a human would be natively unless they're really, really anal and had a lot of time.
But on the other hand, maybe AI is too rigid and it just guards you against, you know, sort of superficial errors.
Like we went down the logic tree you told me and I fixed that error.
But then there are these combinations of things that nobody would ever... you know, you'd have to be creative to think of them.
I'm seeing the same thing you are.
This all comes down to your prompting though.
So you do get to control that restrictiveness.
The biggest issue I run into, John, is that when I give it an inch, it creates a mile, if you will. Because that's its purpose. Its purpose is to extrapolate, right? To intelligently extrapolate my prompt into something that is valuable. The problem with extrapolation is that it's going to make assumptions.
And when I build assumptions on top of assumptions on top of assumptions,
well, now it's the telephone game.
And so the farther away you get, the bigger the application gets,
Even if you've got a million context window, which is huge, when I use that million context window, it does not remember everything in that window.
It just doesn't. It can't keep it all in mind, even though it says it does.
And it just doesn't keep it accurate.
So those assumptions get you in trouble.
So humans do exactly the same; I mean, you describe a human process too. So maybe this is an interesting question, to me at least.
How is it that a human who has all of these things he's trying to keep in mind and has forgotten assumptions he's made and so forth, it's the same and yet it's different?
So what really is the difference or what is the same?
Yeah, it is not different at all.
I think it's just quicker.
Why isn't it simply better?
And so the problem that you face is that it can build so much, so quickly.
And now I find myself simply just exhausted because I can't keep up with how quickly it makes these amazing decisions.
And then these really crappy decisions, which really aren't crappy; they're just assumptions it had to make. And because I'm only a one-man band, I can't read everything. I mean, I can literally create a hundred pages of documentation in a matter of three minutes. Wow. It's just wild what gets done. And I can't read that; I'm a very slow reader. That's the big problem, because I actually think contextually. Every time I'm reading something,
it becomes, how does this impact the rest of the application?
Because I am trying to think of everything at the same time.
And so me going through 100 pages of docs is just not feasible.
Well, I've gotten to a place where I just cycle through the docs themselves. So I'll have it build the architecture.
Like let's start with foundations, right? Let's go with the architecture and say,
these are the things I'm trying to accomplish.
These are the flows that I want.
And then I have it build out those flows.
I show it to an end user from a UI UX perspective,
kind of wireframe it, make sure it makes sense to them.
And then have it kind of go build the rest.
And then I show them a prototype.
I validate the prototype is working.
Then once the prototype is working,
I then basically throw the prototype away, have it write documentation based on the prototype, and productionize it.
Give me what are all these tables,
what should they look like?
Make this an enterprise-level application, because it doesn't cost that much more time. And then when I tell it to be an enterprise-level application based on the prototype, again, it's going to make some wrong assumptions. Now, I do have enterprise experience, and some architecture experience; I've got a little bit of everything. I'm by no means an expert in anything, but I've got a little bit of everything, so I can kind of guide it in the right direction. But what I end up doing most of the time is just saying: pretend I don't exist. And I give it some very specific instructions. I just say, you know, create me the documentation, then do a test-driven approach, right?
So build all my architecture coupled with the test classes and whatnot, or a test documentation coupled with, like, pseudocode.
And then I'll have it reread that documentation while keeping the rest of the documentation in mind and say,
is there anything that's not following best practices? Make the assumption that I don't
know what I'm doing and tell me what we could be doing better here. Keep the XYZ perspective in
mind. Like in our case, we're building an app for real estate agents, homeowners, and for
service providers. And so I basically say,
keep in mind those three personas
and what they would want to each see
in this application at a high level.
And I just give it a high level documentation
as to what we've found these end users want to see.
And that's kind of our source of truth,
always come back to these key flows and points
And then I just say, but go wild on the architecture.
And then I just keep iterating on that loop.
And I'll say something like, build me the architecture.
And then I'll say, well, are there any issues in this architecture at scale, right? Or something basic like that.
Or I'll say iterate on each one of these tables
and make sure it does, like,
are there any duplicates that exist?
Like, are there any redundant tables?
You put me in mind of the Sorcerer's Apprentice cartoon.
Sure, sure.
This gets out of Mickey's control; there are just too many things happening.
It does, it very much does. And you do have to rein it back in, again depending on the size of what you're building, because when I've got 100 tables in my database, it can't keep all that in its head at one time. And so it has to check 40 against the other 40, and then maybe it has to go back and forth between 10 different combinations just to make sure there's no redundancy.
That's really interesting. I really agree with you, what you're saying implicitly, at least, and that is that
the generalist actually is becoming increasingly critical because people are being trained
as specialists if they're being trained at all.
But we now have these things that can automate a lot of specialties.
But if you don't have the general control to know what you're actually trying to do,
you know that there's a product, there's a business, there's machinery that it has to go
on and so on and so on. It's very hard not to make an error someplace, let some balls drop.
That's a great call. I think the way I would describe it is most importantly is,
do you know how to ask why you're doing what you're doing?
And how do you teach that to somebody? How do we, how do we make people like that?
You know, that's a great question. Let's ask, let's ask AI.
I don't think it's going to tell us.
Goodness. Well, you know, it seems like now you're trying to... so Stephanie and I have something that we call the one-brain theory. When I do my own research, I try to do it so that I hold the whole mathematical structure in my head. And if I've got it outsourced to different people, it's more than twice as hard, honestly, if I have a co-author. Because they know some things and I know some things; he doesn't know what I'm thinking about, and I don't know what he's thinking about. And if we've really divided the labor, unless the parts don't really interact, we spend so much time coordinating, talking to each other, and making sure we understand what one another is saying that there's a lot of time wasted. So problems bigger than can be contained in one brain are exponentially harder. Two really good harmonized brains can sometimes work, but three? That's the problem of scale to begin with, isn't it?
Right, yeah. Well, that's where they have hierarchies, right, that are really inefficient.
Well, they're better than nothing; they're required. Yeah, no, I agree.
It's required on the agent framework side.
So when you're building out these agents,
creating layers of subtasks within subtasks within subtasks
is actually one of the best things you can do.
So that it always comes back to, again, that source of truth.
And again, if I were to scale this properly, I'd grab another few devs, and I will soon, and I'll have them review at that layer, right? I'll just put them one layer below me doing the same thing. Just like anything else. So you can imagine
a pretty effective hierarchy to divide out the problem.
So I've already begun doing this, and actually we kind of ran into some hurdles. So one of the first things we did was hire a UI UX person because that's not my specialty.
And it's not something that AI is great at.
It's good, don't get me wrong, but it's not great.
We picked up the UI/UX person, and she's a great UI/UX thinker. She can code, but she's not a coder, right? So it's kind of that in-between: she's willing to jump into Roo and jump into Cursor and start coding.
And we found out really quickly that when she would get in there, because she doesn't have the context of the backend and the middle layer that I'm working on, it ended up actually costing me two extra days, versus if she had just built a prototype, handed it to me, and said, hey, go push this into the code base. Me going in and trying to fix her code: if I'd built it from scratch, it would have taken three days, and me taking the code that she had and trying to put it in there took me five, right?
So this comes back to that thing of new versus edit.
Well, it's also the friction between brains.
She's probably very good.
But by the time you understand them and you try to fit them into your framework, the transaction costs have just overwhelmed the product.
Well, yeah, so what we ended up doing going forward
was that she would just build throwaway code that's experiential.
I'll go review that code.
I'll have AI take her output, you know: take this experience and output user journey requirements and where it needs to get integrated into what. And then I take that and review it, and make sure it has all of the appropriate architecture at our backend and middle layers.
So I have a question about that
because we're trying to serve our community well by sort of parsing out these different things that are going to be needed in the future and thinking about how teams will be created in the future.
So would you say that you need her training specifically? And, like, you have five people's training, right?
So how would our geeks think about what they need to do
and where they might fit in?
And I don't think there's any right answer to that.
I think every person is different.
And that's what AI actually enables
is you get to find the places you're not good at
and you can augment using AI.
Like our UIs were pretty good, but they weren't great, right?
But in my opinion, UI is what makes an application sell, to be honest. So we put our resources there because, obviously, we want to sell the app, right? So that's the first look.
What end users see is the most important thing in the user journey. But as far as how people work together, it's going to be highly dependent upon the skills that each one has.
And what do you like to do?
Because it's going to get to this place where everything is so cheap.
I mean, yeah, code is just going to get so cheap
in the coming months, really.
It's going to get worse than it already is.
Well, that goes back to John's point
that you need to have the picture.
You keep saying context, right?
And also, I was wondering when John was asking about QA, I don't think that AI, as I understood
it like five months ago, was really great at math.
So how good would it be at finding edge cases and testing those?
So it is pretty good at finding edge cases.
It's better than I am at finding edge cases, to be honest.
But you have to tell it to, right?
So again, this all comes down to your prompting capabilities.
But you can't give it too much at one time; you've got to break it up.
And so if you can give AI these bite-sized chunks of what it needs to do,
this is kind of where that orchestrator mode comes in really big handy,
is just building a process that breaks it and chunks it into meaningful bite-sized pieces
that AI knows how to understand.
And again, Reuven Cohen is the guy to follow, and his SPARC methodology.
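The "break it into bite-sized chunks" step can be sketched as a greedy packer over a per-request token budget. The four-characters-per-token ratio below is a rough rule of thumb rather than a real tokenizer, and the default budget is arbitrary.

```python
def rough_tokens(text: str) -> int:
    """Very rough token estimate: about four characters per token."""
    return max(1, len(text) // 4)

def chunk_spec(sections: list[str], budget: int = 2000) -> list[list[str]]:
    """Greedily pack whole sections into chunks that stay under the budget."""
    chunks: list[list[str]] = []
    current: list[str] = []
    used = 0
    for section in sections:
        t = rough_tokens(section)
        if current and used + t > budget:
            chunks.append(current)   # close the full chunk
            current, used = [], 0
        current.append(section)      # oversized sections still go somewhere
        used += t
    if current:
        chunks.append(current)
    return chunks
```

Each chunk then becomes one focused request to an agent, instead of one giant prompt the model can't hold accurately.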
But the other thing, Stephanie, that I did want to mention on that QA front:
Roo actually can take over your computer if you let it.
And it can not only do terminal calls, or command-line calls,
but it can actually take over the screen as well.
So if you're using the Claude API and you run a local terminal,
It'll start the server and it'll start running through your app,
and then it'll take the console logs, feed them back in, test it, and say, well, okay, that failed.
And it'll go through the cycles and go fix itself, right?
So you don't even have to go testing from that perspective.
So QA is dramatically taken care of.
I'm not using that feature yet, so I can't tell you.
That's all theoretical for me, but I've seen it work,
and it looks pretty solid.
I don't think I'll have any issues.
I've got two related questions.
So I just spent... I spent five hours in the last three days trying to reserve a flight for a conference I'm going to, through Concur, which is an SAP application.
It almost was like it didn't want to give me information, like the information technology was a way to deny me the information I was trying to seek.
But that's a huge company.
They have all the money in the world. And their interface and their workflow is just atrocious. So the question is this,
why is that? Maybe they just don't care. They got so much money and people are committed and
they're not going to change. But maybe it's that you're kind of unique.
I mean, you say it's easy.
And yet there are so many people out there that have these legacy skills
and legacy expectations that haven't built up to where you are.
Maybe it's just going to take a while. You say cheap, but it's only cheap if you're one of the elect, the 0.001% that know how to do it.
Well, there's a few things going on here, in my opinion. Let's...
Just for a minute. Go ahead. Okay. Yeah. So let's take the enterprise conversation first.
So, working with enterprise both as a consultant and as an employee for quite some time, for the last 15 years: oftentimes there's never, quote unquote, enough budget for whatever it is. It doesn't matter what it is; there's just never enough.
And so people are forced to make priority-based decisions,
and that leads to often poor execution.
And so you get unfinished products or products that just aren't as great,
or we end up throwing the project offshore, or somewhere where a lot of
the developers are there just to do a job and just to get the code out,
and they don't challenge anything or anyone,
because challenging would mean that you're probably not going to get the next job.
Yeah, no, absolutely. It's legacy.
Now, we're in this transitionary period.
However, I think you still have people in charge that are just not going to change.
If they can still do it just cheaper,
that doesn't mean it's going to be better.
It just means they're going to do it cheaper.
There's hardly a reason not to do it better.
I mean, you could fire the whole Concur team, and I'm sure there are hundreds, and hire your crew of maybe eight people, give them three months, and they'd come up with a far better thing, cheaper.
I often find that leadership teams, one, you've got to sell them on that concept of this is going to work for you.
And then two, people in my position aren't going to do it. Why would I go consult for an organization if I can build my own thing and be the boss?
This is an opportunity for everyone to go be their own boss where it wasn't feasible before.
You can spin up entire companies in a matter of weeks.
The only advantage they have is they've got penetration.
They've got the customer base.
So if you can marry them.
Absolutely. Absolutely. But keep in mind that
the social world has changed dramatically as to what people are willing to do.
I mean, let's take BMW, who, I don't know, a few years ago put out this car
where, if you want to use the seat warmers, you've got to pay $6. Right?
Like, I don't know if that's still a thing.
I haven't looked at it since. But, like, hearing that and, like, subscription model where you have to be subscribed to your HP printer if you want to be able to even print.
Yeah, no, I'm just, just no.
The influence that a single person has, and the amount of weight a single person's influence can carry
just from word of mouth when it's authentic, goes significantly farther than it's ever gone before.
And so when a good product meets a good ecosystem, the product fit will go a lot farther than it ever has and can scale.
Now, that's not necessarily a good thing because a lot of people don't know how to handle scale.
And so it's still about finding the right people and finding, I mean, you still got
to be able to build a business model and all that jazz with it, right?
So I think there's going to be a lot more failed businesses than ever before,
but there's also going to be ones that just skyrocket to the top because they
have the right people, they've been doing it a while,
and they have the right product.
What's been kind of holding them back is this inability to compete.
Who are the large companies that are actually ahead of this curve?
I mean, clearly it's not Oracle. Is it Google? Is it, is it X?
Who's, I mean, X can't even get me connected.
Who's who are the companies that are really taking advantage of this?
I mean, everyone is trying to. That's the reality.
These CEOs and CTOs don't want to share their game plans either.
Yeah, it makes sense, because imagine you're in their shoes.
They know they can be disrupted, so they've got to go on the defensive
and into acquisition mode in order to try and squash these bugs, if you will,
before they become monsters they have to acquire, or whatever the case is.
But you can't do it. I mean, it's too fertile a ground.
You're going to get cockroaches everywhere.
Maybe.
I mean, we saw this before, though, right? This was the dot-com era, when Microsoft bought everything.
It's just more and bigger.
That's exactly correct. It's more and it's bigger.
Their strategy has to be, if I'm Google or Anthropic,
what these guys are working on, right,
the deep research capabilities.
And that's because if I can maintain the search, then that means I maintain your access,
and I can slow anybody down however I want to slow them down.
That's your cue. Oh, what is my cue? I forgot. Way back when, we were advocating for net neutrality.
Oh, well, yeah. Well, I'd rather go a different direction, actually.
And does it worry you that you're using these these Google AI tools and they are your competitors?
They may try to absorb you or thwart you.
You know, Apple is famous for just adopting successful apps into the core of the operating system.
Why isn't Google listening in to all of your prompts and in parallel building your applications?
I mean, how do you know? Are you concerned?
You know, I don't get concerned about these things
At the end of the day, someone has to run it
And someone has to make it successful
And someone has to be able to find adoption
It depends on what you're making, though
But they have the customers.
They can get adoption just by rolling it out.
I think they've got something much larger to think about.
So if they just roll something out,
like, okay, let's go to how they rolled out
all of their tools over the course of like the last month.
They haven't put nearly any AI things out for the public.
They were six months late to the market
with something that was valuable.
Nothing about Google other than the 2-million-token context window was good.
The model itself was terrible.
Anthropic and ChatGPT, by comparison, were crushing it.
And then one day, really within a two-week period, Google launched everything under the sun.
And now I won't touch anything else.
No, so where I was going with that is that there's this idea that Google has to stay trusted because trust is the new currency.
Because to your point, it's all about adoption.
I trust them to be enterprise-grade. I guess that gives me some sense of security.
I trust them to be stable.
I trust their APIs to not fall off on me when I need them.
So from a data privacy and all that stuff, man, that stuff was out the window 20 years ago.
That can't even be the conversation anymore, unfortunately.
That's just too far gone.
I don't see how they could help but win, because not only are they watching you build your applications in real time, they've got AI to understand what that is, to see what other people are building and how they might integrate it.
And then they read your email to figure out who your business contacts are and what your business plans are
and your Google Docs. I mean, they got it all.
And they got an AI to tell it.
Remember what happens when we just put in... do you think AI is self-limited in some ways?
It's reviewed to some capacity.
Again, putting it at that...
Is the compute there? Sure, I'm sure they could figure that out.
But I don't believe that they could absolutely crunch the data down into small bite-sized pieces to get competitor analysis and roadmap analysis.
And I'm sure that they potentially do.
I don't know. But the moment that you try to replicate or, again, expand on any detail, the more complex it is, the harder it is to expand. The more assumptions it has to make, the farther off it becomes.
Maybe you end up with an AI CEO of Microsoft that's living inside their
servers, and the actual CEO is just not clever enough to
grok everything that this guy could do. It's just
too complex for a human to understand and implement. Or maybe the
AI also is confused and doesn't really understand what the implementation would be.
It will change, because you can build it to get there, don't get me wrong.
But you've got to know what you want. That's the real key here with anything AI:
you have to know what you want. And I do have a hard stop here in five.
But out of anything, if you know exactly what you're looking for,
again, it's all in that prompting,
and then creating those agents and being able to...
Well, that's true now, but let's wait until next Thursday.
Then we'll get a meta that enables us to figure out the prompts.
Maybe, but again, the larger the context window, it still doesn't get it accurate.
There's still so much stuff.
With just a million-token context limit, it probably loses, I don't know,
maybe 10% of what's actually in there. So I find myself
having to, whenever I know that it's going to be a big task, I'll have it go through and say,
okay, is anything missing? Okay, is anything missing? I literally ask it five times in a row,
is anything missing? And five times in a row, it comes back and says, oh yeah, I missed this.
Thanks for asking. Right? And some of those things are critical.
You can't run the app without it.
And so, again, this is the problem,
is that you're missing critical context,
and that's a much more difficult problem to solve.
It's probably going to take a few years on that one.
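The repeated "is anything missing?" review loop described above can be sketched roughly like this. This is a minimal illustration, not a real API: `query_model` is a hypothetical stand-in for an actual LLM call, stubbed here so the loop's logic is self-contained.

```python
# Sketch of the "ask it five times in a row" review loop. `query_model` is a
# hypothetical stub standing in for a real LLM call; it "remembers" one
# previously missed item per round, then reports nothing is missing.

def query_model(prompt: str, state: dict) -> str:
    """Stub model: report one missed item per call until none remain."""
    if state["missed"]:
        return "I missed: " + state["missed"].pop(0)
    return "Nothing missing."

def review_until_complete(state: dict, max_rounds: int = 5) -> list:
    """Ask 'Is anything missing?' up to max_rounds times, collecting findings."""
    findings = []
    for _ in range(max_rounds):
        reply = query_model("Is anything missing?", state)
        if reply == "Nothing missing.":
            break  # the model finally reports nothing was dropped
        findings.append(reply)
    return findings

# With two dropped items, two rounds of re-asking recover both.
state = {"missed": ["error handling", "the auth check"]}
print(review_until_complete(state))
```

The cap of five rounds mirrors the habit described in the conversation; in practice you would stop once the model stops surfacing new omissions.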
What do I know? I'm just some random dude.
No, you're not a random dude. Why don't you give us one minute
of wise words in summary.
Tell us what we should all do.
Tell us what you're doing if you'd like to.
I want to know what I should do.
There's three tools that I use every single day.
And again, that's RueCode.
It's just a VS Code extension.
Everyone needs to follow Reuven Cohen on LinkedIn
and grab his Spark methodology.
He's got a GitHub repo that you can just grab it all from.
Join his agentics foundation on Discord.
And then if you need to use Cursor, Cursor's another great option.
Those are my favorite things.
It's really fascinating. It's great. Yeah.
I really appreciate you taking the time.
I don't think I'm less terrified; we'll talk again offline.
But I really do appreciate you coming on.
And these things are recorded, so you can send your friends over to listen.
Amazing. Well, thank you all for having me. I really appreciate it.
And it's a pleasure to be here.
Thank you. Bye. Yay. Bye. Okay. Bye geeks.
Let's see before I go, here's an announcement about the hackathon.
Submission of project ideas has been extended to tomorrow, Friday, at 11:59 p.m. UTC.
The details are in Discord in the boot camp section. Good luck and have fun.
And that's it for us this week. Okay, thanks. Bye, everybody.