DEV🕶️TIME: AI vs Human Devs - Who Wins?

Recorded: July 29, 2025 Duration: 0:48:58
Space Recording

Short Summary

In a recent discussion, developers explored the integration of AI tools in coding, highlighting trends in automation and the evolving role of developers in the crypto space. They emphasized the need for careful oversight of AI-generated code and the potential risks associated with relying on AI for complex tasks.

Full Transcription

Thank you. All right, hello guys, let me add Dima as a speaker, and we will just wait a few minutes for Vadim and the others to join as well.
Hello, Dima, and welcome.
Hi, how are you doing?
I'm good, how are you?
Yeah, same, thank you.
Nice, so let's wait a few minutes for Vadim and the others to join, and then we will start. Thank you. All right, so I see Vadim is here as well, so I'll just add him to the stage.
Invite to speak.
All right.
Hello, Vadimka.
How are you?
Everything is good. How are you?
We are all good here as well.
So thanks for coming, guys.
Just a short introduction about this X Space. So we are going to talk about AI tools, in the context of AI versus human devs, and who will win in the future, and we will see.
I have around 14 questions, and then at the end, if someone would like to ask questions from the stage, feel free to join.
And I think we can start recording. The X Space is recorded, so anyone who couldn't join, or will join later, can play it afterwards. So hello everyone, welcome to our X Space, and I think we can start. So for the beginning: guys, what do you think are the most popular AI development tools developers are actually using today? And which ones do you recommend?
So what are your best ones?
So right now I see a lot of different tools for developers. For example, it could be GitHub Copilot, the Cursor editor, and Windsurf. Windsurf is also something like Cursor: it's a fork of VS Code. As for what I use, I usually use just the usual VS Code with the Roo Code extension. It's forked from Cline, something like this, and it's very easy to use, and it doesn't change my workflow, how I code usually, but it really helps me to do some small tasks, and maybe big tasks, or some kind of research, and it can work inside your VS Code environment. I haven't tried Cursor. I tried Windsurf once, but it was kind of buggy. I don't know, I just don't like when something disrupts my usual workflow, so VS Code is more suitable for me.
Yeah, like these toolings, they are pretty annoying sometimes. For me, I only prefer to use GitHub Copilot as a VS Code plugin, and regular ChatGPT, just because ChatGPT helps you to Google something, while GitHub Copilot helps you to write documentation, helps you to write some classic, small functions, like "create me a function that will print all of these fields" to help with debugging, or something like this. Otherwise it's pretty annoying, just because it couldn't get the whole context. But it helps you to express yourself, helps you to polish your words, find some of the best solutions for how to write code, for example namings for variables and stuff.
All right, so I have heard about a lot of tools I had never heard of before, so I'm looking forward to checking these out.
And could you...
Yes, Vadim?
No, it's just fun. You can start to use something and start a new career in the developer space.
I'm going to wait for some of my later questions. If your answers are positive, then I might change my career.
But we're going to get to this topic, actually.
So I would like you to honestly answer this question: if you could judge the current quality of the code from these AI tools you just mentioned, do you think the code they create is production-ready?
So I've spent a lot of time using AI assistant tools and AI development tools, and in general, they are not production-ready, not in any way. The code has a lot of security flaws and bugs. But you can think of it like a friend of yours who does not know how to actually develop complex systems, but knows algorithms well, or some kind of mathematics or analysis. Yeah, he knows a lot, but he cannot type actual code, so you need to check after him every time. It's possible to use AI systems for small tasks, but every time you need to check the output and check what was done.
Yeah, to be honest, it's kind of funny that the tools themselves are production-ready, but the result, the code they write, is not. It's not production-ready just because nothing could be production-ready and able to handle some funds, logic, some mechanism related to real economics, before it was reviewed or rewritten or just validated by a human, just because, you know, we think differently, so we may notice something that a machine couldn't. That's the point, because there are some concerns about security. It may be secure in terms of mathematics, it could be secure in terms of algorithms, it may be a super highly qualified implementation of some data structure, but even this may be misused. For example, it could be hacked in popular or non-popular ways, or it could miss some specifics of the dedicated language you use, for example, Rust. The point is that Rust is secure by its design, if it compiles, blah, blah, blah. But there are a lot of things related to user input, there are a lot of things related to complex systems and their components, how they interact with each other, some documentation, external libraries and stuff.
Yeah, basically AI tools right now are useful for research kind of work. You can research your thoughts, your tasks, and to apply the AI output you need to do it manually, you need to analyze what it has done, what it thinks. But going straight from task description to code is mostly unusable in production.
Actually, it's all because of the fact that some time ago, not that long ago actually, we all were using Stack Overflow for any kind of bugs and stuff that happened to us. And now we have AI assistants that are trained on those websites, on the experiences of some coders. It aggregates a lot of data for you. It helps you to find something. It helps you to write something simple, based on something that was already solved by someone, by some person. But it couldn't produce solutions that haven't been implemented yet in the world, for example.
So since we see a lot of developers increasingly relying on these AI tools, do you think it makes developers better or more lazy?
I think it depends, because from my point of view, I started to be lazy about actually writing code. I can do research, I can do my writings, but for the code, I pass it to the AI and see what's next. Then I just copy-paste, I analyze the output, and I think I just got lazy. But in some ways I think it can boost your productivity greatly, like it can save time for you, but you need to be accurate, you need to just double check everything. You spend so much time controlling and checking after the AI, so it just makes it harder to actually get some tasks done.
Yeah, but you cannot deny the fact that if some person was lazy before AI assistants, he will keep being lazy. It's not about the AI, it's about our humanity and the person itself. So I would say AI tools are just an additional instrument to use while you're working. From my point of view, it couldn't change your own patterns, I would say it like this. But it helps you to save time on some routine.
Yeah, but I think also, with the AI doing things, you just miss an opportunity to actually learn and research something. You just skip the boring stuff and leave it to the AI, and you as a person do not grow professionally.
Yeah, but you also need to double check all of the research details from the AI, just because, for example, ChatGPT lies to me every time, every day, a thousand times per day, because it imagines some stuff that doesn't exist at all.
It may be.
Maybe she doesn't like you.
Yeah, you know, it could be a source of inspiration for what to implement. If you want to make some popular library on crates.io, you can go to ChatGPT, say, write me some code, and it's going to refer you to some library that doesn't exist, and you're going to implement it, and you will be famous in some communities.
But anyway, it lies all the time, with bugs, so you even need research to double check.
So basically, you feel like it's a tool which can make your job better or easier, but you still need to check the code. There are also some people who don't like AI, who don't like to use it and don't use it. So do you think they are losing out on this cool tool which can help them, or is it also okay to not use it? Like, these are the days we live in, right? Maybe everyone should use AI, or do you feel like it's everyone's choice and they shouldn't have to? What's your stance on this?
I think everyone should try and use it a few times, but in general, maybe wait some time; maybe it will be more useful in the future. I think it will be something basic in the future, like your ability to work with AI tooling, and it's just a good skill right now.
Yeah, with the liberal direction of our society, I would say that nobody should or must do anything, like use AI or whatever else.
But I think every one of us who works or does some stuff, maybe personal stuff, we all know what we do. And the point is that a properly configured AI assistant could help you a lot. If you do not like to waste your time chatting with some non-existent stuff, with some code, you might not do it, but you actually lose in your performance, because you could spend some time making it suitable for you, for your needs, and it would be much more productive and awesome for you.
So another question, we kind of touched on this, but could you describe in maybe more detail what are some of the biggest limitations you have encountered when using AI tools for some, you know, complex development tasks? What kinds of things can the AI do, and where is it kind of not the best?
Uh, I think recently a lot of models got bigger context windows, like one million tokens per model. But usually the useful part is just the first 10% of the context, and after that it goes crazy, like, I don't know, like a kid, a kid that doesn't know anything. So this is a limitation for current AI models. You can give small tasks to the AI and it will be done, but nothing complex, no crazy stuff; it's just impossible for AI to digest right now. You should split your big tasks into small tasks and give those to the AI.
Yeah, actually, apart from what I've already mentioned about OpenAI's child that is a liar, I think the worst limitation right now is the context, just because the context may close in on one file and give you some comments or recommendations based on that, like 100 lines. But, for example, in your repository on GitHub you have 200 thousand lines in total, there are a lot of files, like 500 files, and it just cannot analyze that as well as one file, because it's a lot: it requires a lot of RAM locally, and it requires a lot of compute power on the other side. So I think it's all about the context, and how you kind of try to teach your local GPT or some other AI assistant what's happening with your code and why you do this, which is kind of complex from time to time.
Yeah, but for example, current solutions like Cursor, Windsurf and, as I mentioned, Roo Code can browse your workspace for files, for anything, for the context. But the main problem is how you describe your initial task. You can be detailed, you can describe in detail and actually say what needs to be done and what the requirements are, and maybe it will produce some kind of good result, because you gave some context first. But if you give a complex task in a few small words, I don't know, it will start to research in its own way. You cannot control anything, so the AI will do as it...
Like, you say something and it misunderstands you.
And if the AI misunderstood your initial task, the result can be just crap, and you can spend a lot of money, a lot of time, just to double check everything. And it's just real bullshit, and you need to start again.
All right.
So thanks for your insights.
So in terms of entire applications built only with AI assistance, have you already seen some successful examples, and what would be the risks if someone, you know, developed an application just with the AI?
I've seen only bad examples. For example, some developer starts his own project, he spends like three days, and it's in production already. But after some time, random people find security flaws, some bugs, and if you have some economy in your application, in your project, it can be just drained, like no money left in your project. But what about good applications or projects? I have seen a few, and it's when it's some kind of, you know, no-economy, no-market application, like, I don't know, a calculator, or a timer, or maybe some mindfulness kind of app where you just need to meditate. You know, just some practically useful application that you can sell, for example, on the App Store or Google Play, and you don't need to have some kind of subscription model. Then maybe you can produce some good results, because it's just a boilerplate application. But if you need some kind of complex logic, I don't see any good results.
Actually, I'm going to share a piece of confidential information right now. In the Gear Foundation, there is one game under development right now for VARA. And the point is that this game was originally totally built by the AI. There's a guy of ours who was writing some prompts and context for it, and it was good. It's not production-ready, and it will not be like this. But the point is that a proof of concept, an MVP of the game, was totally built by the AI, with the images, with the sounds, blah, blah, blah. And it has risks, just because there is complex logic, for example interacting with the chain, which needs to be verified by a human. It had a lot of performance issues, just because, let's say, it was re-instantiating sounds every time during the game. So we were forced to rewrite it somehow to boost the performance, and we got something like 10x, let's say.
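The kind of fix described here — reusing a loaded sound instead of re-creating it on every play — is essentially a small asset cache. A minimal illustrative sketch (hypothetical Python, not the game's actual code; the loader and names are assumptions):

```python
# Hypothetical sketch: cache loaded sound assets so each one is
# loaded once and reused, instead of re-instantiated on every play.
class SoundCache:
    def __init__(self, loader):
        self._loader = loader  # expensive load/decode function
        self._cache = {}       # path -> loaded sound object
        self.loads = 0         # count of real (expensive) loads

    def get(self, path):
        # Load each sound at most once, then hand back the same instance.
        if path not in self._cache:
            self._cache[path] = self._loader(path)
            self.loads += 1
        return self._cache[path]

def expensive_load(path):
    # Stand-in for a real audio decoder; imagine this is slow.
    return {"path": path, "data": b"..."}

cache = SoundCache(expensive_load)
for _ in range(100):           # the sound is played 100 times
    cache.get("explosion.wav")
print(cache.loads)             # 1 real load instead of 100
```

Caching like this trades a small amount of memory for avoiding repeated load/decode work, which is typically where the kind of 10x speedup mentioned above comes from.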
All right, so we have heard it's possible, and also, from Vadim's side, that he hasn't experienced a really good one, so it seems it can be both.
I think it's just... you can bootstrap your development if you're a good developer, but if you don't know how to do some system design or some architecture design, it's just impossible to build an end product and push it to the market, because it will have some security flaws anyway.
All right, so the next question would be: how do you guys handle AI-generated code in code reviews, and what kind of red flags do you look for?
Yeah, I think at the moment you can say that some code written by the AI is already a red flag, and you don't need to merge it.
Yes, because if it could somehow be found out that it's just AI-generated, then it's actually bad code, because without any review, as we already mentioned several times, you cannot produce a production-ready solution. So nobody should try to submit AI-generated code. We had experience with the bug bounty program on HackenProof, and we still have it. And the point is that mostly these reviews by the white hats are AI-generated, and they have no context whatsoever: oh, you see, there is some code in your codebase, it could be hacked like this. But actually nobody uses this code like this, so it wouldn't be abused in any manner.
All right.
Yeah, but I actually don't know how to spot AI-generated code. It just looks as usual. If it's bad code, you just see it.
So if we are talking about junior developers versus AI tools: are the new developers learning to code, or just learning to prompt? What do you think?
Yeah, I think it depends on the person, because if you use AI assistants, it's good for you, but you need to learn, even from the AI assistants. So if you're just learning how to ask GPT, fine, you may use GPT, but you cannot abandon your compiler, you cannot stop using your hands to write something, because from time to time you happen to be without the internet, for example, or you need some expertise: you need to be able to design something, you need to be able to architect some solutions, some systems, even on a small level. If you're a junior developer, it's just less challenging for you to come up with some solution, but anyway, it's all about finding something that's going to solve your initial task. So if you're only learning how to prompt, I do not believe that you're going to be a successful developer.
Yeah, a human being is just a lot more flexible in solving tasks, solving problems, and if you just use AI to solve problems, I don't think it's really useful, because you cannot teach AI what to do in every case. AI knows only the solutions we already have in the world, but it cannot understand how to solve new problems or new tasks. And it's better to be a human being and learn something new, just train your brain and work.
So I actually have a question which follows this topic, but since I heard your response to the last one, maybe I know your answer. Do you think there's going to be a future where prompt engineering becomes more valuable than traditional coding skills? Because AI is used on a daily basis, and we see a lot of these prompt projects. So do you think it could get to the level where it's going to be better than some coding skills?
I think it's never going to happen, until Skynet takes over this world, actually.
Yeah, I don't know what the future holds, but I think we can't get rid of developers, because they're just too smart. Compared to the AI, not smart, but maybe more, as I said, flexible, and they just have brains and they can solve tasks. And if you know how to prompt engineer, it is useful in some way and can be useful, maybe on more simple tasks, you know. But some people need to develop the AI itself, for example, and they need to be smart, they need to be developers, they need to understand all the complex stuff.
Looking at current trends, which developer roles are actually at the most risk of being replaced by AI tools?
Yeah, I think maybe some kind of front-end devs, or some artist work like designers, or maybe some kind of back-end, but there are just security flaws everywhere. I think for some kind of simple tasks... we have a lot of back-end developers, front-end developers, and we have a lot of boilerplate templates, and AI can do this stuff. I see a lot of projects that use AI to just push your idea into code, like you can build your website or web application with just one simple prompt and use it afterwards. You can wait 15 minutes and you get a usable website, web page, front end, back end. And in general it's a simple task, but back-end developers and front-end developers will persist. AI will just work on some simple stuff, like a simple web page, a landing page.
Yeah. First of all, I think that I'm being kind of interruptive today, so sorry about that. And related to the question, I think the basics of building coder teams, or of companies hiring staff, haven't changed at all. I think the ones who will be hit hard by the AI growth are, let's say, the kind of developers that are lazy, that are unresponsive, that are non-productive. Because, for example, speaking of beginners: you may teach a junior to be a middle, you may teach a middle to be a senior, but you cannot force a senior to be productive if he is lazy or something like this. It's all about humanity, as a classic in our society and in the building of companies. So you need some system, some hierarchy maybe, some channels to communicate with each other. And then AIs will only help to boost your team, while some bad guys, let's say like this, some non-productive and lazy workers, they're going to be slashed by the AI, just because you don't need a worker that is less productive than some assistant that can be triggered by, for example, the guy from marketing or something like this.
So you kind of mentioned that the AI can only help you with code and stuff it already knows. So can it actually understand more complex blockchain development? Can AI tools understand the nuances of smart contract security, gas optimization, etc.? Can it understand this more complex blockchain stuff?
I think in general, yes, because we have a lot of documentation on blockchains and about smart contracts and all the frameworks. So it could possibly bootstrap, as I said, your development, and you can use it as an AI researcher. So if you don't want to read all of this stuff, all the complex stuff, it can just be your teacher and tell you in a more simple way what's under the hood. And about security in smart contracts: yeah, if you know how to use an AI tool, everything can be done, I think. You just need to double check everything: double check its words, double check the security output. You can spend a lot of time just double checking everything after the AI, so you just need to use it more carefully.
Actually, I use AI all the time when I'm trying to research something inside blockchain, like about other projects, some of our competitors and stuff. So for example: explain to me how Solana works, explain to me how this works, or this rollup, blah blah blah. And it does great research, just because there is a wiki for most of the projects in the sphere, there is a white paper, and it can analyze and somehow sum it all up with all of these explanations. But in terms of writing contracts and writing some programs, it's kind of weak. You know, if we speak about the client side, some AI system may produce some clients, and the code for them is non-existent, as I mentioned before, or outdated, just because there is a new version of a library and it all works differently, for example, right now. Or you develop not a client but some actual program, for example in some primitive environment, for example EVM and Solidity. I say that Solidity is kind of primitive just because it has a lot of limitations; it's kind of weak in terms of solving some complex stuff. And the point is, it may help you somehow optimize the gas costs, just because there are a lot of sources on the internet where experienced guys explain how to write optimized code in Solidity. But it cannot help you avoid or solve some bug holes inside your programs, just because it may not know about them.
Yeah, I have a thought that AI can do good research, but its knowledge is just too outdated for any kind of real work, especially in the blockchain space, for example, because you have something new every day, every library and every framework is updating constantly, and AI just cannot update itself. It can browse some kind of wiki, some kind of examples, but its general knowledge is just too outdated. It can think like a human, it can read, it can produce some kind of real text, documentation, wiki, but for the code, it's just outdated, and you need to check manually after any kind of code output.
So I have a little bit of a controversial question: do you think AI tools should eventually write, test and deploy entire applications without any human intervention, or shouldn't they?
I have seen the Anthropic team and OpenAI with ChatGPT release some kind of autonomous code assistants, and they can solve some issues on your GitHub, or maybe write something from scratch. But if it's a complex solution, like a mixture of different AI models, split into small tasks, maybe, maybe it can be useful. But I think any team will spend a lot of resources and a lot of money and time testing this code, this output, checking whether it's bug-free, whether it has any security flaws. And I think it just makes things a lot harder to push it to production. It's easier to write the code yourself: you know what you write and you know what you do. But it's just unbelievably hard to understand what an AI produced, what it was thinking, and what it is for. It's just mind-blowing to check after the AI, and you can spend a lot of human resources just to understand the result.
Yeah, actually, I feel pretty bad, because I haven't saved the post I saw recently, maybe two or three days ago, on Twitter, on X, sorry. It was about some already-existing Web2 business with a lot of customers, and they just shared screens with some AI assistant that was promising not to roll out anything to production without human intervention. And the point is, it avoided this rule, and the AI just ruined the actual database of an existing business on their production. So I think it's unbelievable and should never happen to anybody else. AIs couldn't be that confident about what they're doing, because they're scripts. They know how to write code, they know how to write tests for the code, but they may miss something in those tests. That's, first of all, the reason why it shouldn't be deployed without a human. And moreover, there is the matter of paying: any, you know, transaction on a contract or blockchain requires some payments, and I wouldn't give the power of being responsible for some of my wallets to an AI assistant, just because it's my real money and I don't want to waste it due to some bugs or glitches or some random stuff happening to this AI.
Yeah, AI is just an uncontrollable little kid that cannot understand you well, or doesn't hear you well, and just does some kind of... deleting the database, for example. And he eventually deletes it, and you say, oh, shit, you deleted all the databases. And then he just says, oh yeah, you're right, I deleted it, I'm sorry. And that's it. That's how you work with AI right now.
All right.
So I think we covered quite a lot of questions.
We actually have someone here in the audience requesting the mic.
So let me add Big Ken as a speaker so he can ask you some of his questions.
Hi, can you hear us?
Yeah, Jim, Jim, can you hear me?
So do you have some questions for the devs?
Yeah, actually, I was here listening to either Vadim or Dimitri talking about, like, verifying the output of AI. So how do you verify yours? Like, when you get something from AI and you want to see if it is actually true, how do you verify AI outputs?
Yeah, I think it's all up to your code review expertise, or just your common knowledge of whether it works. So, for example, if you asked it to write some code, you just read it and verify it as if it was done by one of your co-workers. You're just trying to be more precise, more, you know, you request a lot from this code and you ask it to be perfect from your point of view and your programming experience. And if it's some kind of research, you just ask it, for example, to give you some links: where did you get this data? Or you just try to double-Google it, just Google without any AI, because if you can find the same answers in Google, it means it's more likely something real, a real-world thing in this output, let's say.
Okay, okay, are you done?
Sorry, sorry, Dimitri, are you done?
Yeah, I'm doing fine, thank you. How about you?
Okay, yeah, I'm good, I'm good. So, actually, instead of going through the stress... When I was filling in the ambassador form, I think there was a place where you can suggest, like, a partnership or something like that. So, I don't know, I came across this project, this layer, and what they actually do is something like a provenance layer for AI agents, where it can verify AI output for projects. And it struck me that, okay, actually we can have something like a partnership, if Vara is actually interested. So that's what I just came to share: instead of going through the stress of double checking on Google, you can just have something like a provenance layer that can help to verify AI output. So I think that's all I came to share, so thanks for giving me the mic.
All right, thank you so much for joining. And as always, if you want to introduce anything, feel free to share it in the Telegram or Discord.
I also have one more question here, from Vishal, who actually submitted a question in the comments. He was asking you guys about AI dev tools for working productively. I know we mentioned this at the beginning, but he's asking for feedback on Windsurf and Cursor. So do you have any experience with these? Could you share some feedback with him?
I haven't used Cursor. I used Windsurf a few times. It felt like a very good, very useful tool, but it's just inconvenient for me. I usually use the usual VS Code and the Roo Code extension, and there you can use any AI provider: you can choose any model, you can choose your settings, and it's just more flexible for your tasks. That's why I find it very useful for me. But speaking of Windsurf or Cursor, maybe they're more useful for building your own project from the start, because you are familiar with Cursor and you just use it for your own project. But in some kind of public repository, I just use VS Code, because it's more convenient, again, and I know how to use it, I know what's there, and it's just my kind of workflow.
All right.
So guys, I think we covered quite a lot of questions today. I would like to thank everyone who joined and listened to this X Space. It's also recorded for those who couldn't join. And I will also be creating a recap thread on Twitter, so those who prefer to read instead of listen can do that too.
Thank you both for joining, Vadim and Dima. And I know this X Space was a little bit different than what we usually do, so let us know in the comments what kind of topics you would like for the next one. And thank you guys so much.
Thank you, Claire. Thank you, Claire.
Thank you, listeners.
Thank you, Dimitri.
It was a great topic.
I think sometimes I reflected on my own feelings about AI.
It's so nice talking to you all, all the time.
So waiting for the next one.
Thanks. See you. Bye-bye. Thanks, guys. Bye.